<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: stephen_cagle</title><link>https://news.ycombinator.com/user?id=stephen_cagle</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 16:57:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=stephen_cagle" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by stephen_cagle in "Creating the Futurescape for the Fifth Element (2019)"]]></title><description><![CDATA[
<p>I'm blown away by the idea of not using Chris Tucker for Ruby Rhod. It is like imagining anyone but Hugh Jackman as Wolverine. They are basically perfect castings.</p>
]]></description><pubDate>Fri, 10 Apr 2026 02:29:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47712896</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47712896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47712896</guid></item><item><title><![CDATA[New comment by stephen_cagle in "DRAM pricing is killing the hobbyist SBC market"]]></title><description><![CDATA[
<p>Last month I "panic bought" a $999 Mac Mini (32G) so I could run small models, image generation, and voice synthesis on it. I don't think I regret it yet; you can get a 16G for $599, but per gigabyte the 32G is actually the better deal (~$31/GB vs ~$37/GB).<p>I think it is interesting that, at least thus far, Apple has chosen not to raise the price of their computers despite the price of RAM presumably going up by multiples.<p>Tipping point for me: it will be a pretty kickass media server for at least a decade.</p>
]]></description><pubDate>Thu, 02 Apr 2026 03:10:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47609554</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47609554</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47609554</guid></item><item><title><![CDATA[New comment by stephen_cagle in "Do your own writing"]]></title><description><![CDATA[
<p>Writing (unassisted) is probably the first step towards your own independent thoughts.<p>I'm reminded of that scene in "Ghost in the Shell" where someone asks the Major why he is on the team (full of cyborgs) and she responds something along the lines of "Because you are basically un-enhanced (maybe without a ghost?) and are likely to respond differently than the rest of us; overspecialization is death."<p>I think a diversity of opinion is important for society. I'm worried that LLMs are going to group-think us into thinking the same way, believing the same things, reacting the same way.<p>I wonder if future children will need to be taught how to purposely form their own opinions, being so used to always asking others before even considering things on their own. The LLM will likely reach a better conclusion than you would on your own, but there is value in diverging from the consensus and thinking your own thoughts.<p><a href="https://stephencagle.dev/posts-output/2025-10-14-you-should-write-poorly/" rel="nofollow">https://stephencagle.dev/posts-output/2025-10-14-you-should-...</a></p>
]]></description><pubDate>Tue, 31 Mar 2026 01:29:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47581726</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47581726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47581726</guid></item><item><title><![CDATA[New comment by stephen_cagle in "The Cognitive Dark Forest"]]></title><description><![CDATA[
<p>I think the most interesting idea here is people purposely keeping secrets in order to maintain advantages.<p>Beliefs: at this time, I do not actually believe that LLMs can innovate in any real way. I'm not even clear if they can abstract. I think the most creative thing they can do is act as digital "nudgers" on combinatorial deterministic problems, as illustrated by their performance on very specific geometry and chemistry problems.<p>Anyway, my point is that they may still need human beings to actually provide novel solutions to problems. To handle the unexpected. To simplify. LLMs can execute once they have been trained, but they cannot train themselves.<p>In the past, the saying in Silicon Valley was often "ideas are cheap". And there was some truth to that. Execution <i>was</i> far more difficult than the idea itself. Execution was so much more difficult than "pure thought" that you could often publicize the algorithm/process/whatever you had and still offer a product/service/consultancy that made use of it. The execution was the valuable thing.<p>But LLMs execute at a fraction of human cost and a multiple of human development speed. The idea hasn't increased in value, but the execution cost has decreased markedly. In this world, protecting the idea is far more valuable than it was in the previous one. You can't keep your competitors away by out-executing them, but you can keep them away if you have some advantage they do not understand.<p>And, I agree, that is quite worrisome. If people don't share knowledge, then knowledge disseminates much more slowly, as everyone has to independently learn things on their own. That is a frightening future.</p>
]]></description><pubDate>Sun, 29 Mar 2026 23:46:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47568691</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47568691</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47568691</guid></item><item><title><![CDATA[New comment by stephen_cagle in "Fear and denial in Silicon Valley over social media addiction trial"]]></title><description><![CDATA[
<p>Does anyone have a breakdown from the case itself of what particular features of these social media apps push them over the threshold into the "addictive" classification?<p>- Infinite scrolling?<p>- Autoplaying the next video?<p>- Shorts?<p>- Matching to your peer group?<p>- Variable reward?<p>- Social reciprocity?<p>- Notifications?<p>- Gamification (streaks)?<p>Was the case won on the argument that it is the aggregate of these things (and many more, I am sure)? The power imbalance between the user and the company? Or some particular subset on which they rested their argument? I'm just genuinely curious how you can win a very challenging case like this without inadvertently lassoing so many other industries that your arguments seem ludicrous.</p>
]]></description><pubDate>Sat, 28 Mar 2026 14:13:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47554812</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47554812</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47554812</guid></item><item><title><![CDATA[New comment by stephen_cagle in "What Young Workers Are Doing to AI-Proof Themselves"]]></title><description><![CDATA[
<p>Dark thoughts... Imagine a future where most human beings are simply overseen by an LLM while wearing AR work glasses, barely aware of what (physical) work we are doing as we fit our hands within the projections of our AR glasses. Every task is decomposed into a set of small physical steps; you don't even think about what you are actually trying to accomplish, you just follow the steps one at a time. I wonder if an entire fast food restaurant could be run in this fashion? No managers, no shift supervisors, just a skeleton crew doing one step of a task at a time.</p>
]]></description><pubDate>Mon, 23 Mar 2026 02:43:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47484935</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47484935</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47484935</guid></item><item><title><![CDATA[New comment by stephen_cagle in "What young workers are doing to AI-proof themselves"]]></title><description><![CDATA[
<p>I'm somewhat skeptical of this "enter the trades" movement. Actually, I am more skeptical of that statement than I am of LLMs replacing white collar work in general. I think parts of coding are being replaced quickly because they are the parts that don't require discernment. The trades likely contain just as many automatable parts and just as many discernment parts as white collar work. At this moment in history, the automatable parts are being automated in the knowledge-based world. People think the physical world is somehow different, but with world models (along the full spectrum of what that means) the physical world will be just as trainable as the knowledge-based world.<p>tldr; Just like knowledge work, most trade work is probably mostly repeated (i.e. very trainable) tasks with a small amount of taste and discernment applied. The repeated parts will be trainable, and the discernment <i>may</i> be trainable. I don't think the physical world is necessarily any safer than the knowledge world.</p>
]]></description><pubDate>Sun, 22 Mar 2026 21:10:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47482188</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47482188</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47482188</guid></item><item><title><![CDATA[New comment by stephen_cagle in "Why I love NixOS"]]></title><description><![CDATA[
<p>Have you heard of any good projects for running isolated containers on NixOS that are cheaply derived from your own NixOS config? Because that is what I want. I want a computer where I can install basically every non-stock app in its own little world, where it thinks "huh, that is interesting, I seem to be the only app installed on this system".<p>Basically, I want to be able to run completely unverified code off of the internet on my local machine and know that the worst thing it can possibly do is trash its own container.<p>I feel like NixOS is one path toward that future.</p>
]]></description><pubDate>Sun, 22 Mar 2026 18:54:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47480799</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47480799</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47480799</guid></item><item><title><![CDATA[New comment by stephen_cagle in "Brute-forcing my algorithmic ignorance"]]></title><description><![CDATA[
<p>> Find Minimum in Rotated Sorted Array<p>I've seen that problem in an interview before, and I thought the solution I hit upon was pretty fun (if dumb).<p><pre><code>  from bisect import bisect_left
  from typing import List

  class Solution:
      def findMin(self, nums: List[int]) -> int:
          # A view of nums shifted left by `rotation` positions.
          class RotatedList():
              def __init__(self, rotation):
                  self.rotation = rotation
              def __getitem__(self, index):
                  return nums[(index + self.rotation) % len(nums)]

          # A virtual list of booleans: "does rotating by `index` look sorted?"
          class RotatedListIsSorted():
              def __getitem__(self, index) -> bool:
                  rotated = RotatedList(index)
                  return rotated[0] < rotated[len(nums) // 2]
              def __len__(self):
                  return len(nums)

          rotation = bisect_left(RotatedListIsSorted(), True)
          return RotatedList(rotation)[0]
</code></pre>
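The same trick boiled down, independent of the interview problem: `bisect_left` only ever calls `__getitem__` (and optionally `__len__`), so any object exposing a monotone False...True "virtual list" of predicate results can be handed to it directly. A minimal sketch (all names here are just illustrative):

```python
from bisect import bisect_left

class PredicateSeq:
    """A lazy, duck-typed sequence of predicate(i) for i in range(n).

    bisect_left never materializes this list; it only probes
    __getitem__ at O(log n) indices, so the predicate must be
    monotone (False...False True...True) over the index range.
    """
    def __init__(self, predicate, n):
        self.predicate = predicate
        self.n = n

    def __getitem__(self, index):
        return self.predicate(index)

    def __len__(self):
        return self.n

# Leftmost i in [0, 100) with i*i >= 300: 17*17 = 289 < 300 <= 324 = 18*18.
first = bisect_left(PredicateSeq(lambda i: i * i >= 300, 100), True)
print(first)  # 18
```

On Python 3.10+ the helper class isn't even needed: `bisect_left(range(n), True, key=predicate)` does the same thing.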
I think it is really interesting that you can define "list-like" things in Python using just two methods. This is kind of neat because sometimes you can reframe an entire problem as a binary search over a list of answers to a yes/no question about that problem; here you are looking for the leftmost point at which it becomes True. Anyway, I often bomb interviews by trying out something goofy like this, but I don't know, when it works, it is glorious!<p>Good luck on your second round!</p>
]]></description><pubDate>Sun, 22 Mar 2026 17:49:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47480129</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47480129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47480129</guid></item><item><title><![CDATA[New comment by stephen_cagle in "Ask HN: AI productivity gains – do you fire devs or build better products?"]]></title><description><![CDATA[
<p>> you start off checking every diff like a hawk, expecting it to break things, but honestly, soon you see it's not necessary most of the time.<p>My own experience...<p>I've tried approaching vibe coding in at least 3 different ways. At first I wrote a system that had specs (markdown files) with a 1-to-1 mapping between each spec and a matching Python module. I only ever edited the spec, treating the code itself as an opaque thing that I ignored (though I defined the interfaces for it). It kind of worked, though I realized how distinct the difference is between a spec that communicates intent and a spec that specifies detail.<p>From this, I felt that maybe I needed to stay closer to the code, but just use the LLM as a bicycle for the mind. So I tried "write the code yourself, and integrate an LLM into emacs so that you can have a discussion with it about individual code, using it for criticism and guidance, not to actually generate code". It also worked (though I never wrote anything more than small snippets of Elisp with it). I learned more doing things this way, though I have the nagging suspicion that I was actually moving slower than I theoretically could have. I think this is another valid way.<p>I'm currently experimenting with a 100% vibe coded project (<a href="https://boltread.com" rel="nofollow">https://boltread.com</a>). I mostly just drive it through interaction in the terminal, with "specs" that act more as intent than specification. I find the temptation to drop out of outside-critic mode and into just looking at the code quite strong. I have resisted it to date (I want to experiment with what it feels like to be a vibe coder who cannot program), to judge whether I realistically need to be concerned about it. Just like LLM-generated things in general, the project gets closer and closer to what I want, but it is like shaping mud: you can put detail into something, but it won't stay that way over time; its sharp detail will be reduced to smooth curves as you switch to putting detail elsewhere. I am not 100% sure how to deal with that issue.<p>My current thought is that we have failed to find a good way of switching between the "macro" (vibed) and the "micro" (hand coded) view of LLM development. It's almost like we need modules (blast chambers?) for different parts of any software project, where we can switch to doing things by hand (or at least with more intent) when necessary, and by vibe when not. Striking the balance that nets the greater output is quite challenging, and it may not even be that there is an optimal intersection; you may simply be exchanging immediate change for the future flexibility of the software.</p>
]]></description><pubDate>Sun, 22 Mar 2026 17:25:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47479850</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47479850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47479850</guid></item><item><title><![CDATA[New comment by stephen_cagle in "Allow me to get to know you, mistakes and all"]]></title><description><![CDATA[
<p>I largely reached the same conclusion recently => <a href="https://stephencagle.dev/posts-output/2025-10-14-you-should-write-poorly/" rel="nofollow">https://stephencagle.dev/posts-output/2025-10-14-you-should-...</a></p>
]]></description><pubDate>Sun, 15 Mar 2026 06:13:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47384774</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47384774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47384774</guid></item><item><title><![CDATA[New comment by stephen_cagle in "Let's discuss sandbox isolation"]]></title><description><![CDATA[
<p>I use KVM/QEMU on Linux. I have a set of scripts that create a new directory with a VM project and install a Debian image for the VM. I have a ./pull_from_vm and a ./push_to_vm that I use to pull and push the git code to and from the VM, as well as a ./claude to start claude on the VM and a ./emacs to initialize and start emacs on the VM after syncing my local .spacemacs directory to it (I like this because of customized emacs muscle memory, and because I worry that emacs can execute arbitrary code if I use it to ssh to the VM client from my host).<p>I try not to run LLMs directly on my own host. The only exception is that I do use <a href="https://github.com/karthink/gptel" rel="nofollow">https://github.com/karthink/gptel</a> on my own machine, because it is just too damn useful. I hope I don't self-own myself with that someday.</p>
]]></description><pubDate>Fri, 27 Feb 2026 22:26:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47186611</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47186611</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47186611</guid></item><item><title><![CDATA[New comment by stephen_cagle in "I baked a pie every day for a year"]]></title><description><![CDATA[
<p>Nothing at all. Just a comment on the internet. Taking a walk AND baking a pie is even better.<p>I'm just making the slight point that walking is probably the simplest, most effective thing you can do to improve almost every aspect of your life.</p>
]]></description><pubDate>Thu, 26 Feb 2026 20:23:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47171519</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47171519</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47171519</guid></item><item><title><![CDATA[New comment by stephen_cagle in "I baked a pie every day for a year"]]></title><description><![CDATA[
<p>Not to take anything away from any other activity that someone embraces, but I imagine that for the majority of people in the developed world, taking a one-hour walk every day would be the most "life changing" thing you could do.</p>
]]></description><pubDate>Thu, 26 Feb 2026 20:08:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47171369</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47171369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47171369</guid></item><item><title><![CDATA[New comment by stephen_cagle in "Claws are now a new layer on top of LLM agents"]]></title><description><![CDATA[
<p>Biggest question I have is: maybe... just maybe... LLMs would have had sufficient intelligence to handle micropayments. Maybe we might not have gone down the mass-advertising "you are the product" path?<p>Like, somehow I could tell my agent that I have a $20 a month budget for entertainment and a $50 a month budget for news, and it would just figure out how to negotiate with the nytimes and netflix and spotify (or what would have been their equivalents), which is fine. But it would also be able to negotiate with an individual band that wants to directly sell their music, or an indie game that does not want to pay the Steam tax.<p>I don't know, just a "histories that might have been" thought.</p>
]]></description><pubDate>Sat, 21 Feb 2026 21:54:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47105222</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47105222</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47105222</guid></item><item><title><![CDATA[New comment by stephen_cagle in "IRS lost 40% of IT staff, 80% of tech leaders in 'efficiency' shakeup"]]></title><description><![CDATA[
<p>Good point, and kind of interesting in that as we keep cutting funding to the IRS, this ratio will probably get wider (which looks good, but is actually bad for what it implies).</p>
]]></description><pubDate>Thu, 19 Feb 2026 22:26:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47080518</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47080518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47080518</guid></item><item><title><![CDATA[New comment by stephen_cagle in "Gemini 3.1 Pro"]]></title><description><![CDATA[
<p>Can't argue with that; I'll move my Bayesian priors a little in your direction. With that said, are most other models able to do this? Also, did it write the solution itself or use a library like Eigen?<p>I <i>have</i> noticed that LLMs seem surprisingly good at translating from one (programming) language to another... I wonder if transforming a generic mathematical expression into an expression template is a similar sort of problem to them? No idea, honestly.</p>
]]></description><pubDate>Thu, 19 Feb 2026 22:23:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47080474</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47080474</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47080474</guid></item><item><title><![CDATA[New comment by stephen_cagle in "Gemini 3.1 Pro"]]></title><description><![CDATA[
<p>Yeah, as evidenced by the birds (above), I think it is probably the best vision model at this time. That is a good idea; I should use it for business cards as well, I guess.</p>
]]></description><pubDate>Thu, 19 Feb 2026 22:11:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47080306</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47080306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47080306</guid></item><item><title><![CDATA[New comment by stephen_cagle in "IRS lost 40% of IT staff, 80% of tech leaders in 'efficiency' shakeup"]]></title><description><![CDATA[
<p>Is that 415:1 the rate of return of an audit, or the expense-to-revenue ratio of the IRS as a whole? I remember hearing some time ago that the expense ratio was 11% for the IRS, but 1:415 implies roughly 0.24%, way less than 11%.</p>
]]></description><pubDate>Thu, 19 Feb 2026 20:19:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47078674</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47078674</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47078674</guid></item><item><title><![CDATA[New comment by stephen_cagle in "Gemini 3.1 Pro"]]></title><description><![CDATA[
<p>I also worked at Google (on the original Gemini, when it was still Bard internally) and my experience largely mirrors this. My finding is that Gemini is pretty great for factual information, and it is the only one that can reliably (even with the video camera) take a picture of a bird and tell me what the bird is. But it is just pretty bad as a model to help with development; everyone I know, myself included, uses Claude. The benchmarks are always really close, but my experience is that they do not translate to real-world (mostly coding) tasks.<p>tldr; It is great at search, not so much action.</p>
]]></description><pubDate>Thu, 19 Feb 2026 20:10:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47078543</link><dc:creator>stephen_cagle</dc:creator><comments>https://news.ycombinator.com/item?id=47078543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47078543</guid></item></channel></rss>