<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: fudfomo</title><link>https://news.ycombinator.com/user?id=fudfomo</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 01 May 2026 12:38:22 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=fudfomo" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by fudfomo in "Coding after coders: The end of computer programming as we know it?"]]></title><description><![CDATA[
<p>Most of this thread is debating whether models are good or bad at writing code, but I think a more important question is what we feed the AI, because that largely determines the quality of the output.<p>When your agent explores your codebase trying to understand what to build, it reads schema files, existing routes, UI components, etc.: easily 50-100k tokens of implementation detail. It's essentially reverse-engineering intent from code. With input that ambiguous, no wonder the results feel like junior work.<p>When you hand it a structured spec instead, including the data model, API contracts, and architecture constraints, the agent gets 3-5x less context at much higher signal density. Instead of guessing from what was built, it knows exactly what to build. Code quality improves significantly.<p>I've measured this across ~47 features in a production codebase; the median ratio is 4x less context with specs vs. unguided agent exploration. For UI-heavy features it's 8-25x: the agent reads 2-3 focused markdown files instead of grepping through hundreds of KB of components.<p>To pick up @wek's point about planning from above: devs who get great results from agentic development aren't better prompt engineers, they're better architects. They write the spec before the code, which is what good engineering always was; AI just made the payoff for that discipline 10x more visible.</p>
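<p>For concreteness, a minimal sketch of what one of those focused spec files might look like. The feature, field names, and endpoints below are made up for illustration, not taken from any real codebase:<p><pre><code># Feature: Saved Searches

## Data model
SavedSearch: id (uuid), user_id (fk -> users), query (text), created_at (timestamp)

## API contract
POST   /api/saved-searches      -> 201, body: {query}
GET    /api/saved-searches      -> 200, list for current user
DELETE /api/saved-searches/:id  -> 204, owner only

## Constraints
- Reuse the existing auth middleware; no new session logic
- Max 50 saved searches per user; return 422 over the limit
</code></pre><p>A file like this is a few hundred tokens, versus the tens of thousands the agent would burn rediscovering the same facts from the code.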
]]></description><pubDate>Fri, 13 Mar 2026 22:46:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47371005</link><dc:creator>fudfomo</dc:creator><comments>https://news.ycombinator.com/item?id=47371005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47371005</guid></item><item><title><![CDATA[New comment by fudfomo in "Don't post generated/AI-edited comments. HN is for conversation between humans"]]></title><description><![CDATA[
<p>Highly appreciate this! It's what makes the difference: humans are not perfect, which is exactly why evolution works so well.</p>
]]></description><pubDate>Thu, 12 Mar 2026 07:20:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47347513</link><dc:creator>fudfomo</dc:creator><comments>https://news.ycombinator.com/item?id=47347513</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47347513</guid></item><item><title><![CDATA[New comment by fudfomo in "Levels of Agentic Engineering"]]></title><description><![CDATA[
<p>It needs a canonical source of truth, something isolated agents can't easily provide. There are tools out there like specularis that help you do that and keep specs in sync.</p>
]]></description><pubDate>Wed, 11 Mar 2026 15:19:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47336769</link><dc:creator>fudfomo</dc:creator><comments>https://news.ycombinator.com/item?id=47336769</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47336769</guid></item><item><title><![CDATA[New comment by fudfomo in "Test Evals Are Not Enough"]]></title><description><![CDATA[
<p>Thanks for sharing. So does it keep specs canonical?</p>
]]></description><pubDate>Wed, 11 Mar 2026 15:15:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47336727</link><dc:creator>fudfomo</dc:creator><comments>https://news.ycombinator.com/item?id=47336727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47336727</guid></item></channel></rss>