<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: bendmorris</title><link>https://news.ycombinator.com/user?id=bendmorris</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 17:43:02 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=bendmorris" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by bendmorris in "I quit. The clankers won"]]></title><description><![CDATA[
<p>I think you should be very picky about generated PRs not as an act of sabotage but because obviously generated ones tend to balloon the complexity of the code in ways that make it difficult for both humans and agents to work with, and because superficial plausibility is really good at masking problems. It's a rational thing to do.<p>Eventually you are faced with a company culture that sees review as a bottleneck stopping you from going 100x faster rather than as a process of quality assurance and knowledge sharing, and I worry we'll just be mandated to stop doing reviews.</p>
]]></description><pubDate>Wed, 01 Apr 2026 13:28:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47600573</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47600573</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47600573</guid></item><item><title><![CDATA[New comment by bendmorris in "Ask HN: AI productivity gains – do you fire devs or build better products?"]]></title><description><![CDATA[
<p>It's disappointing that this is clearly being downvoted due to disagreement - it's a valid perspective. We have very little evidence of the overall impact of aggressively generating code "in the wild" and plenty of bad examples. No one knows what this ends up looking like as it continues to meet reality, but plenty are taking a large productivity improvement as a given.</p>
]]></description><pubDate>Sun, 22 Mar 2026 17:09:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47479668</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47479668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47479668</guid></item><item><title><![CDATA[New comment by bendmorris in "The L in "LLM" Stands for Lying"]]></title><description><![CDATA[
<p>>Here are some well known names who are now saying they regularly use LLM's for development. For many of these folks, that wasn't true 1-2 years ago:<p>This is a huge overstatement that isn't supported by your own links.<p>- Donald Knuth: the link is him acknowledging <i>someone else</i> solved one of his open problems with Claude. Quote: "It seems that I’ll have to revise my opinions about “generative AI” one of these days."<p>- Linus Torvalds: used it to write a tool in Python because "I know more about analog filters—and that’s not saying much—than I do about python" and he doesn't care to learn. He's using it as a copy-paste replacement, not to write the kernel.<p>- John Carmack: he's literally just opining on what he thinks will happen in the future.</p>
]]></description><pubDate>Thu, 05 Mar 2026 20:37:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47266956</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47266956</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47266956</guid></item><item><title><![CDATA[New comment by bendmorris in "The L in "LLM" Stands for Lying"]]></title><description><![CDATA[
<p>You're going to get a lot of "skill issue" comments but your experience basically matches mine. I've only found LLMs to be useful for quick demos where I explicitly didn't care about the quality of implementation. For my core responsibility it has never met my quality bar, and getting it there has not saved me time. What I'm learning is that different people and domains have very different standards for that.</p>
]]></description><pubDate>Thu, 05 Mar 2026 16:09:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47263391</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47263391</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47263391</guid></item><item><title><![CDATA[New comment by bendmorris in "You don't have to"]]></title><description><![CDATA[
<p>>It should've been a lot shorter<p>Honestly I don't think so. An essay like this is more than just content, it's an experience for the reader. I value the time I got to spend with it and feel I came away with value that a summary or condensed version just would not have had.</p>
]]></description><pubDate>Mon, 02 Mar 2026 20:35:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47223684</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47223684</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47223684</guid></item><item><title><![CDATA[New comment by bendmorris in "You don't have to"]]></title><description><![CDATA[
<p>This is a hilariously ironic parallel to the debate over whether code is an art or a science, referenced right in the article. It can be both.</p>
]]></description><pubDate>Mon, 02 Mar 2026 08:15:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47215169</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47215169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47215169</guid></item><item><title><![CDATA[New comment by bendmorris in "You don't have to"]]></title><description><![CDATA[
<p>>I had actually just been told by management this last week that I need to become AI 'fluent' as part of future performance evaluations and I have been deeply conflicted about it.<p>I hear this and FWIW, if there aren't very specific things being asked of you, using AI as a Stack Overflow replacement, as the OP admits to doing, is as "AI fluent" as anything else in my book.</p>
]]></description><pubDate>Mon, 02 Mar 2026 08:12:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47215150</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47215150</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47215150</guid></item><item><title><![CDATA[New comment by bendmorris in "You don't have to"]]></title><description><![CDATA[
<p>>The rent-a-brain aspect is more acutely alarming. And I will be blunt here: It sure does seem like the prolonged use of LLMs can reliably turn certain people’s minds into mush...<p>>Stop me if you’ve heard this one before: “After [however long] using AI coding assistants, there’s no way I’m going back!” You know, I don’t doubt that this is true. Because I’m not sure some of the people who say this could go back. It reads like praise on the surface, but those same words betray a chilling sense of dependence.<p>Perhaps, very ironically, they did "have to."</p>
]]></description><pubDate>Mon, 02 Mar 2026 08:09:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47215132</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47215132</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47215132</guid></item><item><title><![CDATA[New comment by bendmorris in "What AI coding costs you"]]></title><description><![CDATA[
<p>I think throwaway use cases have very different requirements than products we expect to maintain, and the two need to be treated differently. Go nuts with AI to generate a chart or a one-off tool or whatever, if you don't care about deepening your skill to do those things yourself.</p>
]]></description><pubDate>Sat, 28 Feb 2026 19:54:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47199482</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47199482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47199482</guid></item><item><title><![CDATA[New comment by bendmorris in "Cognitive Debt: When Velocity Exceeds Comprehension"]]></title><description><![CDATA[
<p>Someone taking over a project and working directly in it can build up their own deep understanding about it over time even if they didn't write it all. Documentation from the last expert can help, or just reading and changing things as you build up a mental model. But asking an LLM to change it for you will not arrive at the same place.</p>
]]></description><pubDate>Sat, 28 Feb 2026 19:51:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47199443</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47199443</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47199443</guid></item><item><title><![CDATA[New comment by bendmorris in "Cognitive Debt: When Velocity Exceeds Comprehension"]]></title><description><![CDATA[
<p>>Heck, I often don't remember anything about code I wrote six months ago. It might as well have been written by someone else.<p>This just isn't true at all in my experience. Do I remember every detail of code I haven't looked at for six months? No, but I can go back and recall pretty quickly how it's structured and find my way around. I'm much more able to do that with code I wrote and thought deeply about. It's like riding a bicycle - if you invested in building up your knowledge once, you can bring it back more easily.<p>LLMs can sometimes help you to understand someone else's code but they can also hallucinate and I think people gloss over how frequently this happens. If no one actually understands or can verify what it's saying, all I can say is good luck.</p>
]]></description><pubDate>Sat, 28 Feb 2026 19:44:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47199368</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47199368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47199368</guid></item><item><title><![CDATA[New comment by bendmorris in "What AI coding costs you"]]></title><description><![CDATA[
<p>Completely resonate with this. There don't seem to be many of us, at least in my online bubble, but you're not alone.<p>I believe and hope eventually we'll come around to valuing people who have put in the work - not just to understand and review output but to make choices themselves and keep their knowledge and judgement sharp - when we fully realize the cost of not doing so.</p>
]]></description><pubDate>Sat, 28 Feb 2026 16:22:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47197134</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47197134</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47197134</guid></item><item><title><![CDATA[New comment by bendmorris in "Code has always been the easy part"]]></title><description><![CDATA[
<p>This is a pretty defeatist take. Stop doing that, and start doing what instead?</p>
]]></description><pubDate>Wed, 25 Feb 2026 04:21:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47147321</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=47147321</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47147321</guid></item><item><title><![CDATA[New comment by bendmorris in "Ask HN: Why is everyone here so AI-hyped?"]]></title><description><![CDATA[
<p>Was this comment itself, somewhat ironically, AI generated? Why a separate line for nearly every sentence?</p>
]]></description><pubDate>Thu, 12 Feb 2026 22:17:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46996065</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=46996065</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46996065</guid></item><item><title><![CDATA[New comment by bendmorris in "After two years of vibecoding, I'm back to writing by hand [video]"]]></title><description><![CDATA[
<p>Language evolves; the term is now used to refer to a range of AI assisted coding. The concept has existed longer than the term.</p>
]]></description><pubDate>Sat, 24 Jan 2026 17:23:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46745487</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=46745487</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46745487</guid></item><item><title><![CDATA[New comment by bendmorris in "After two years of vibecoding, I'm back to writing by hand [video]"]]></title><description><![CDATA[
<p>Nothing that is possible is "too hard" if you're willing to put in some effort. The only question is whether you will learn to do it, or press a button, hope the LLM did it well, and let it forever remain "too hard."<p>Honestly, without judgment, I think this is just a fundamental difference in how people approach their craft. You either want to be capable yourself or you just want the results.</p>
]]></description><pubDate>Sat, 24 Jan 2026 17:17:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46745417</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=46745417</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46745417</guid></item><item><title><![CDATA[New comment by bendmorris in "Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant"]]></title><description><![CDATA[
<p>>This was a problem that wasn't even tractable without AI, and there's no "explosion of AI generated code".<p>People often say this when giving examples, but what specifically made the problem intractable?<p>Sometimes before beginning work on a problem, I dramatically overestimate how hard it will be (or underestimate how capable I am of solving it).</p>
]]></description><pubDate>Thu, 22 Jan 2026 16:09:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46721165</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=46721165</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46721165</guid></item><item><title><![CDATA[New comment by bendmorris in "Lessons from 14 years at Google"]]></title><description><![CDATA[
<p>The fixation with AI really harms the signal-to-noise ratio on HN lately. The author of this article very clearly used an LLM to generate much of it, which makes it read like the clickbait you see a ton of on LinkedIn. Then a commenter posts an LLM-generated bullet list summary of the LLM-generated article, which really adds nothing to the discussion.<p>Ultimately the author had some simple ideas that are worth sharing and discussing, but they're hidden behind so much non-additive slop.</p>
]]></description><pubDate>Mon, 05 Jan 2026 06:52:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46495992</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=46495992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46495992</guid></item><item><title><![CDATA[New comment by bendmorris in "US destroying its reputation as a scientific leader – European science diplomat"]]></title><description><![CDATA[
<p>>One environmental researcher NPR spoke to, whose employer receives federal funding, confirmed that they have been advised to avoid the terms "climate change," "sustainable" and "sustainability." Even "biodiversity" is of concern to some of their colleagues because it includes the word "diversity."<p>(Please don't just respond to the quote - lots of context in the full article.)<p><a href="https://www.npr.org/2025/04/14/nx-s1-5349473/trump-free-speech-science-research" rel="nofollow">https://www.npr.org/2025/04/14/nx-s1-5349473/trump-free-spee...</a><p>This language-based filtering began in the first term and has been widely reported.<p><a href="https://www.theguardian.com/us-news/2017/dec/16/cdc-banned-words-fetus-transgender-diversity" rel="nofollow">https://www.theguardian.com/us-news/2017/dec/16/cdc-banned-w...</a></p>
]]></description><pubDate>Tue, 23 Dec 2025 01:45:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46361480</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=46361480</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46361480</guid></item><item><title><![CDATA[New comment by bendmorris in "US destroying its reputation as a scientific leader – European science diplomat"]]></title><description><![CDATA[
<p>In the previous Trump term "diversity related topics" included things like <i>biodiversity</i> which is an important area of research and should be apolitical. Not because of a shift in focus, but because of top-down orders to not fund anything related to "diversity."<p>Conservatives in the past have also tried to belittle research grants to justify eliminating them, such as "studying X about fruit flies." It might sound silly to a lay person but drosophila is an incredibly important model organism from which many discoveries have come.<p>The problem is a highly political, often careless or incompetent, and sometimes blatantly corrupt administration taking a sledgehammer instead of a scalpel to so-called "waste."<p>[1] <a href="https://www.biologicaldiversity.org/news/press_releases/2019/trump-budget-03-11-2019.php" rel="nofollow">https://www.biologicaldiversity.org/news/press_releases/2019...</a></p>
]]></description><pubDate>Mon, 22 Dec 2025 20:09:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46358407</link><dc:creator>bendmorris</dc:creator><comments>https://news.ycombinator.com/item?id=46358407</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46358407</guid></item></channel></rss>