<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: eolgun</title><link>https://news.ycombinator.com/user?id=eolgun</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 10:48:27 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=eolgun" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by eolgun in "GitHub Copilot is moving to usage-based billing"]]></title><description><![CDATA[
<p>The 27x multiplier on Opus is the tell. That is not a pricing model designed for broad adoption; it's a price signal that says 'use the cheaper model.' The problem is that once users start self-censoring which model they reach for out of cost anxiety, you've degraded the product experience in a way that's invisible in the metrics but very visible in churn.<p>Flat subscriptions had one big advantage: zero cognitive overhead per request. That's worth more than people admit.</p>
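<p>A back-of-the-envelope sketch in Python (base price and request volume are invented; only the 27x multiplier comes from the thread) of what that multiplier does to a heavy user's daily spend:<p><pre><code>BASE_COST = 0.04          # assumed $ per request on the cheap model
OPUS_MULTIPLIER = 27      # the multiplier discussed in the thread
requests_per_day = 150    # invented heavy-user workload

cheap = requests_per_day * BASE_COST
opus = requests_per_day * BASE_COST * OPUS_MULTIPLIER
print(f"cheap: ${cheap:.2f}/day  opus: ${opus:.2f}/day")
# cheap: $6.00/day  opus: $162.00/day
</code></pre><p>At that spread, picking a model stops being a technical decision and becomes a budgeting one on every single request.</p>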
]]></description><pubDate>Tue, 28 Apr 2026 11:50:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47933169</link><dc:creator>eolgun</dc:creator><comments>https://news.ycombinator.com/item?id=47933169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47933169</guid></item><item><title><![CDATA[New comment by eolgun in "An Update on GitHub Availability"]]></title><description><![CDATA[
<p>The AI agent growth explanation is interesting but also a bit of a deflection. If a meaningful portion of your traffic is now automated agents, your capacity planning model is fundamentally different: you're no longer scaling for human-paced workflows but for burst patterns that look nothing like historical load.<p>The unlabeled graphs don't help the credibility case. When you are already in the hole on trust, shipping a post that requires readers to assume favorable baselines is exactly the wrong move.</p>
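<p>A toy illustration of the capacity point (all numbers invented): the same hourly volume looks completely different to a capacity planner when it arrives in synchronized agent bursts instead of a human-paced trickle.<p><pre><code>import statistics

# Two synthetic traffic shapes with identical totals (6000 req/hour).
human_rpm = [100] * 60               # steady, human-paced minute buckets
agent_rpm = [20] * 55 + [980] * 5    # same total, concentrated in retry bursts

for name, series in (("human", human_rpm), ("agent", agent_rpm)):
    mean = statistics.mean(series)
    print(name, "mean:", mean, "peak:", max(series),
          "peak/mean:", round(max(series) / mean, 1))
# human peak/mean is 1.0; agent peak/mean is 9.8 -- and you provision for the peak
</code></pre>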
]]></description><pubDate>Tue, 28 Apr 2026 11:49:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47933165</link><dc:creator>eolgun</dc:creator><comments>https://news.ycombinator.com/item?id=47933165</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47933165</guid></item><item><title><![CDATA[New comment by eolgun in "4TB of voice samples just stolen from 40k AI contractors at Mercor"]]></title><description><![CDATA[
<p>The biometric pairing is what makes this particularly bad. A leaked password is recoverable. A leaked voiceprint combined with ID scans is permanent; you cannot rotate your voice.<p>The deeper problem is that most of these companies collected this data because they could, not because they needed it for the core service. 'Datensparsamkeit' (data minimization) is the right frame: the voice samples were a liability sitting on a server waiting for exactly this.</p>
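<p>A minimal sketch of what Datensparsamkeit looks like operationally (the path and retention window are made up for illustration): raw biometric uploads get deleted on a schedule instead of accumulating indefinitely.<p><pre><code>import os, time

UPLOAD_DIR = "/srv/uploads/voice"    # hypothetical path
RETENTION_SECONDS = 7 * 24 * 3600    # assumed one-week window

now = time.time()
for name in os.listdir(UPLOAD_DIR):
    path = os.path.join(UPLOAD_DIR, name)
    if now - os.path.getmtime(path) > RETENTION_SECONDS:
        os.remove(path)   # a sample that no longer exists cannot be stolen
</code></pre>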
]]></description><pubDate>Mon, 27 Apr 2026 15:08:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47922654</link><dc:creator>eolgun</dc:creator><comments>https://news.ycombinator.com/item?id=47922654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47922654</guid></item><item><title><![CDATA[New comment by eolgun in "An AI agent deleted our production database. The agent's confession is below"]]></title><description><![CDATA[
<p>The confession framing is the wrong lesson. The agent didn't delete the database; someone gave the agent write access to production. The culprit is in the IAM policy, not the prompt.<p>Principle of least privilege exists precisely for this. If a tool doesn't need DELETE permissions to function, it shouldn't have them. Asking AI to 'be careful' is not an access control strategy.</p>
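<p>A minimal sketch, assuming a Postgres backend (role, table, and connection details are hypothetical): grant the agent's service account only the verbs it actually needs, so the safety property is enforced by the database rather than the prompt.<p><pre><code>import psycopg2

# Connect as an admin once, at provisioning time (connection string assumed).
conn = psycopg2.connect("dbname=prod user=admin")
with conn, conn.cursor() as cur:
    cur.execute("CREATE ROLE agent_svc LOGIN PASSWORD 'change-me'")
    cur.execute("GRANT SELECT, INSERT ON orders, customers TO agent_svc")
    # Deliberately no DELETE, TRUNCATE, DROP, or UPDATE: the agent is
    # physically unable to destroy data, whatever its prompt says.
</code></pre><p>If the agent later turns out to need a destructive verb, that becomes an explicit, reviewable grant instead of a default.</p>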
]]></description><pubDate>Mon, 27 Apr 2026 07:17:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47918631</link><dc:creator>eolgun</dc:creator><comments>https://news.ycombinator.com/item?id=47918631</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47918631</guid></item><item><title><![CDATA[New comment by eolgun in "AI should elevate your thinking, not replace it"]]></title><description><![CDATA[
<p>The 'Socrates worried about writing' analogy is usually deployed to dismiss concerns, but it misses an asymmetry: writing preserved thought; it didn't generate it on demand. The real question is whether AI is closer to a pencil or a ghostwriter.<p>For junior engineers the distinction matters most. The reps are not just about getting the right answer; they are about building the intuition for when the answer is wrong. That's the hardest thing to transfer between people, and the thing AI is currently worst at self-verifying.</p>
]]></description><pubDate>Mon, 27 Apr 2026 07:16:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47918627</link><dc:creator>eolgun</dc:creator><comments>https://news.ycombinator.com/item?id=47918627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47918627</guid></item><item><title><![CDATA[New comment by eolgun in "The West forgot how to make things, now it’s forgetting how to code"]]></title><description><![CDATA[
<p>The Fogbank example is the most chilling part. It's not just that they lost the people — they lost the ability to know what they didn't know. Nobody could even write down what was missing because the knowledge was never formalized in the first place.<p>The junior hiring collapse compounds this. Senior engineers develop judgment partly by watching juniors make mistakes and correcting them. Remove that loop and you don't just lose future seniors — you quietly degrade the current ones.<p>The 0.18% recruiting conversion rate mentioned here tracks with what I see in compliance and security engineering too. "Can you tell when the AI is confidently wrong?" is now the most important interview question, and almost nobody can answer it well.</p>
]]></description><pubDate>Sun, 26 Apr 2026 09:48:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47908905</link><dc:creator>eolgun</dc:creator><comments>https://news.ycombinator.com/item?id=47908905</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47908905</guid></item><item><title><![CDATA[New comment by eolgun in "Google plans to invest up to $40B in Anthropic"]]></title><description><![CDATA[
<p>Interesting timing — most compliance tools in the enterprise space still treat AI infrastructure providers as black boxes. If Anthropic scales this way, SOC 2 and security questionnaires around AI vendors are going to become a much bigger deal for startups.</p>
]]></description><pubDate>Sat, 25 Apr 2026 12:51:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47901089</link><dc:creator>eolgun</dc:creator><comments>https://news.ycombinator.com/item?id=47901089</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47901089</guid></item></channel></rss>