<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: 0xcafefood</title><link>https://news.ycombinator.com/user?id=0xcafefood</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 12:55:05 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=0xcafefood" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by 0xcafefood in "AI didn't simplify software engineering: It just made bad engineering easier"]]></title><description><![CDATA[
<p>People have been thinking about that for a long time, though. For that objective, LLMs don't seem to open up any new capabilities. If that problem could be solved, with really clean abstractions that dramatically reduce the context needed to understand one "module" at a time, then sure, LLMs will be able to take that and run with it. But it's a fundamentally hard problem.</p>
]]></description><pubDate>Mon, 16 Mar 2026 02:40:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47394591</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47394591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47394591</guid></item><item><title><![CDATA[New comment by 0xcafefood in "Coding after coders: The end of computer programming as we know it?"]]></title><description><![CDATA[
<p>> no amount of "fix this slop" prompting can fix AI prose<p>What's the proof of that? What fundamental limitation of these large language models makes them unable to produce natural language? A lot of people see the high likelihood of ever-increasing amounts of generated, no-effort content on the web as a real threat. You're saying that's impossible.</p>
]]></description><pubDate>Sat, 14 Mar 2026 17:16:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47378822</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47378822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47378822</guid></item><item><title><![CDATA[New comment by 0xcafefood in "Emacs and Vim in the Age of AI"]]></title><description><![CDATA[
<p>Because it is. Microsoft, Google, Anthropic, etc. all aim to own software production capabilities. It's quite clear.<p>Using free software and not giving away one's own ability to create is as important now as it's ever been.</p>
]]></description><pubDate>Sat, 14 Mar 2026 16:48:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47378515</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47378515</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47378515</guid></item><item><title><![CDATA[New comment by 0xcafefood in "Coding after coders: The end of computer programming as we know it?"]]></title><description><![CDATA[
<p>The code is also recognizable as slop to those who know how to look. Not the tropey "Not X, but Y" kind that's super easy to spot, but tons of repetition, deeply nested code, etc.<p>A counterpoint is that (maybe) nobody cares whether the code is understandable, clean, and maintainable. But the NYT is explicitly in the business of selling ads surrounded by cheap copy just good enough to attract eyeballs. I suspect getting LLMs to write that is going to be far easier than getting LLMs to maintain large code bases autonomously.</p>
]]></description><pubDate>Sat, 14 Mar 2026 16:40:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47378434</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47378434</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47378434</guid></item><item><title><![CDATA[New comment by 0xcafefood in "AI didn't simplify software engineering: It just made bad engineering easier"]]></title><description><![CDATA[
<p>The connection to Amdahl's law is totally on point. If you're just using LLMs as a faster way to get _your_ ideas down, but still want to ensure you validate and understand the output, you won't get the mythical 10x improvement so many seem to claim they're getting. And if you do want that 10x speedup, you have to forego the validation and understanding.</p>
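<p>To make the Amdahl's-law framing concrete, here is a back-of-the-envelope sketch. The 30%/10x split between code generation and everything else is an assumed illustration, not a figure from the article:</p>

```python
def overall_speedup(gen_fraction: float, gen_speedup: float) -> float:
    """Amdahl's law: only the accelerated fraction of the work improves.

    gen_fraction: share of total dev time spent producing code
    gen_speedup:  how much faster an LLM makes that share
    """
    serial = 1.0 - gen_fraction  # validation, review, understanding
    return 1.0 / (serial + gen_fraction / gen_speedup)

# Suppose writing code is 30% of the job and an LLM makes it 10x faster.
# The serial 70% (validation and understanding) caps the overall gain:
print(round(overall_speedup(0.3, 10.0), 2))  # 1.37 -- nowhere near 10x
```

<p>The only way to approach the full 10x in this model is to shrink the serial fraction, i.e. to skip the validation and understanding.</p>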
]]></description><pubDate>Sat, 14 Mar 2026 16:07:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47378066</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47378066</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47378066</guid></item><item><title><![CDATA[New comment by 0xcafefood in "Coding after coders: The end of computer programming as we know it?"]]></title><description><![CDATA[
<p>The NYT doesn't like digital advertisers or the programmers who make digital advertising possible. The two are in direct competition.</p>
]]></description><pubDate>Sat, 14 Mar 2026 15:54:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47377911</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47377911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47377911</guid></item><item><title><![CDATA[New comment by 0xcafefood in "Coding after coders: The end of computer programming as we know it?"]]></title><description><![CDATA[
<p>The NYT has it out for digital advertisers, who directly compete with them. I do sense some schadenfreude here that the tech nerds who work at these places might be in trouble.<p>"Silicon Valley panjandrums spent the 2010s lecturing American workers in dying industries that they needed to “learn to code.”"<p>To the copywriters at the NYT: LLMs are far better at stringing together natural-language prose than at producing large amounts of valid software. Get ready to supervise LLMs all day, if you're not already.</p>
]]></description><pubDate>Sat, 14 Mar 2026 15:48:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47377858</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47377858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47377858</guid></item><item><title><![CDATA[Quint: Executable Specs for Reliable Systems]]></title><description><![CDATA[
<p>Article URL: <a href="https://quint-lang.org/">https://quint-lang.org/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47293476">https://news.ycombinator.com/item?id=47293476</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 08 Mar 2026 01:45:19 +0000</pubDate><link>https://quint-lang.org/</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47293476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47293476</guid></item><item><title><![CDATA[New comment by 0xcafefood in "Why developers using AI are working longer hours"]]></title><description><![CDATA[
<p>In my experience, code validation (unit testing, code review, manual testing, etc.) was more of a bottleneck than producing code for the most part. This means that faster code generation wouldn't produce significant gains in throughput unless the code validation speeds up too. In my workplace, I've seen evidence that the people showing the biggest productivity gains from AI coding are now shipping enormous commits that are barely getting any validation. Given the Zeitgeist, others are for some reason more lenient towards that than they normally would be (or should be).</p>
]]></description><pubDate>Sun, 08 Mar 2026 01:43:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47293457</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47293457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47293457</guid></item><item><title><![CDATA[New comment by 0xcafefood in "You're a Computer Science Major. Don't Panic About A.I"]]></title><description><![CDATA[
<p>I think this is for a different article currently trending on HN.</p>
]]></description><pubDate>Sun, 01 Mar 2026 22:02:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47211172</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47211172</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47211172</guid></item><item><title><![CDATA[New comment by 0xcafefood in "You're a Computer Science Major. Don't Panic About A.I"]]></title><description><![CDATA[
<p><a href="https://archive.ph/Eo4Sd" rel="nofollow">https://archive.ph/Eo4Sd</a></p>
]]></description><pubDate>Sun, 01 Mar 2026 22:01:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47211170</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47211170</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47211170</guid></item><item><title><![CDATA[New comment by 0xcafefood in "The Coming AI Cataclysm"]]></title><description><![CDATA[
<p>"As a friend who works in AI told me, AI heightens the contradictions. It is a boon to those with the motivation and background to cultivate knowledge but it spells total destruction for the system of universal education and credentialing. My worry is that we may run out of people with motivation and background to learn, know, and do. In the future, Gen X and millennial knowledge workers will be the human capital equivalent to pre-war steel. Just as particle detectors need steel forged before atmospheric nuclear testing gave all newly forged steel unacceptable background radiation, we will discover that even if your job mostly consists of interacting with LLMs, doing so well will require people who remember what it was like to read and interpret a document or contrast two ideas without asking an LLM to do it for you."<p>Widespread use of LLMs may exacerbate this problem, but it's already been one for a while now. The toxic combination of smartphones and attention-economy short-form video content is already robbing the young of focused effort and the benefits of an attention span longer than a few seconds. It's sad to see things may be about to get far, far worse, though.</p>
]]></description><pubDate>Sun, 01 Mar 2026 21:53:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47211093</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47211093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47211093</guid></item><item><title><![CDATA[New comment by 0xcafefood in "Don't blame AI for your job woes"]]></title><description><![CDATA[
<p><a href="https://archive.ph/RsCHa" rel="nofollow">https://archive.ph/RsCHa</a></p>
]]></description><pubDate>Sun, 01 Mar 2026 20:12:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47210208</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47210208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47210208</guid></item><item><title><![CDATA[New comment by 0xcafefood in "The Eternal Promise: A History of Attempts to Eliminate Programmers"]]></title><description><![CDATA[
<p>Windows in 1998: this is the worst it's ever going to be.<p>Uber in 2010: this is the worst it's ever going to be.<p>There's some triumphalism here. What happens when training data becomes scarcer because open source as a paradigm was killed? What happens when investor cash flows elsewhere and training and inference need to become profitable on their own?</p>
]]></description><pubDate>Sun, 01 Mar 2026 19:54:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47210068</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47210068</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47210068</guid></item><item><title><![CDATA[New comment by 0xcafefood in "Mythical Agent-Month"]]></title><description><![CDATA[
<p>My favorite piece: "The developers who thrive in this new agentic era won’t be the ones who run the most parallel sessions or burn the most tokens. They’ll be the ones who are able to hold their projects’ conceptual models in their mind, who are shrewd about what to build and what to leave out, and exercise taste over the enormous volume of output."</p>
]]></description><pubDate>Sun, 01 Mar 2026 19:42:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47209947</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47209947</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47209947</guid></item><item><title><![CDATA[Mythical Agent-Month]]></title><description><![CDATA[
<p>Article URL: <a href="https://wesmckinney.com/blog/mythical-agent-month/">https://wesmckinney.com/blog/mythical-agent-month/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47209536">https://news.ycombinator.com/item?id=47209536</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 01 Mar 2026 18:53:41 +0000</pubDate><link>https://wesmckinney.com/blog/mythical-agent-month/</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47209536</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47209536</guid></item><item><title><![CDATA[New comment by 0xcafefood in "AI Made Writing Code Easier. It Made Being an Engineer Harder"]]></title><description><![CDATA[
<p>Managers' jobs are more at risk here than the engineers'.</p>
]]></description><pubDate>Sun, 01 Mar 2026 18:48:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47209492</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47209492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47209492</guid></item><item><title><![CDATA[New comment by 0xcafefood in "Ape Coding [fiction]"]]></title><description><![CDATA[
<p>I can't tell if your comparison to plumbers who don't understand theory (Navier-Stokes) is supposed to apply to "ape coders" who write code by hand or to "vibe coders" who outsource their understanding.</p>
]]></description><pubDate>Sun, 01 Mar 2026 18:14:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47209182</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47209182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47209182</guid></item><item><title><![CDATA[New comment by 0xcafefood in "AI is not a coworker, it's an exoskeleton"]]></title><description><![CDATA[
<p>> the AI Employee is coming by the end of 2026 and the fully autonomous AI Company in 2027 sometime<p>We'll see! I'm skeptical.<p>> what's holding back the AI Employee are things like really effective long term context and memory management and some level of interface generality like browser or computer use and voice<p>These are pretty big hurdles. Assuming they're solved by the end of this year is a big assumption to make.</p>
]]></description><pubDate>Fri, 20 Feb 2026 15:33:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47089288</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=47089288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47089288</guid></item><item><title><![CDATA[New comment by 0xcafefood in "Anthropic raises $30B in Series G funding at $380B post-money valuation"]]></title><description><![CDATA[
<p>Google's product management and discipline are absolute horsesh*t. But they have a moat, and it's extreme technical competence. They own their infra from the hardware (custom ASICs, their own data centers, global intranet, etc.) all the way up to the models and the product platforms to deploy them in. To the extent that making LLMs solve real-world problems is a technical problem, landing Gemini is absolutely in Google's wheelhouse.</p>
]]></description><pubDate>Fri, 13 Feb 2026 03:20:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46998530</link><dc:creator>0xcafefood</dc:creator><comments>https://news.ycombinator.com/item?id=46998530</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46998530</guid></item></channel></rss>