<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: globnomulous</title><link>https://news.ycombinator.com/user?id=globnomulous</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 14:18:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=globnomulous" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by globnomulous in "I am leaving the AI party after one drink"]]></title><description><![CDATA[
<p>I should add: I'm not saying LLMs can do my job for me. I still find them tedious and clumsy. I do better work when I write my own code.</p>
]]></description><pubDate>Sun, 29 Mar 2026 20:34:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47567015</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=47567015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47567015</guid></item><item><title><![CDATA[New comment by globnomulous in "I am leaving the AI party after one drink"]]></title><description><![CDATA[
<p>> Unfortunately this doesn't fix a bigger problem... I just don't enjoy vibe coding as a craft. There's something special about sitting down in the morning with your coffee and taking on a difficult programming problem. You start writing some code, the solutions start to formalize in your mind, there's a strong back-and-forth effect where as you code, the concepts crystalize further... small wins fuel a wonderful dopamine hit experience... intellisense completions, compilation completions, page refreshes, etc. are now all replaced with dull moments often waiting for the agent to return its response, which you now read.<p>Agreed. For me, LLMs don't just reduce the kind of active learning and problem solving that make my job enjoyable; they replace it with a "skill" that barely merits the name. "Learning how to use AI" means learning how to use a product. That's worthless. It teaches nothing of durable value.<p>I'm also not in any way interested in using these tools to learn anything else. They can print out as much information as you want about this or that topic, and even if it's correct 100% of the time, you remain a passive consumer of information that the tool is chewing, digesting, regurgitating, and spitting into your mouth.<p>On the other hand, I'm also profoundly technically and intellectually bored. I can solve all of the problems I encounter in the codebase I work on. I can diagnose issues, refine builds, shorten test pipelines, and mentor junior developers -- and I cannot imagine doing this for the rest of my working life.
My brain will liquefy and dribble out my ears long before I reach retirement age.<p>If what I were doing were more interesting and technically challenging, maybe I'd feel differently, but if LLMs kill off the kind of programming I do, I'm not sure anybody should grieve, particularly if its death sends reasonably smart, curious people to fields where their efforts produce something of greater, or actual, value -- and doesn't wind up lining the pockets of the next generation of insufferable, bs-shoveling "thought leaders."<p>> I have been saying this to everyone -- what's your exit strategy?<p>Personally? I'm already preparing to sell our house. I'll keep my current job for as long as I can or for as long as it makes sense, and then I'll go back to school for nursing and become a psychiatric nurse practitioner.</p>
]]></description><pubDate>Sun, 29 Mar 2026 01:19:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47559598</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=47559598</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47559598</guid></item><item><title><![CDATA[New comment by globnomulous in "Many SWE-bench-Passing PRs would not be merged"]]></title><description><![CDATA[
<p>I appreciate that your message is a good-natured, friendly tip. I don't mean for the following to crap on that. I just need to shout into the void:<p>If I have some time, the last thing I want to do with it is sharpen prompting skills. I can't imagine a worse or more boring use of my time on a computer or a skill I want less.<p>Every time I visit Hacker News I become more certain that I want nothing to do with either the future the enthusiasts think awaits us or the present that they think is building towards it.</p>
]]></description><pubDate>Thu, 12 Mar 2026 07:13:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47347466</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=47347466</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47347466</guid></item><item><title><![CDATA[New comment by globnomulous in "Tell HN: I'm 60 years old. Claude Code has re-ignited a passion"]]></title><description><![CDATA[
<p>I fear this will be horribly self-indulgent, but I'll share it anyway:<p>I'd always been a computer person, but it wasn't until I'd reached my thirties that I realized I could make a career out of that interest. The joy of programming still gets me out of bed in the morning and sends me skipping happily to my desk in my home office. What I do wouldn't impress anybody at a technical level. I'm not an innovator. The world of software and tech would not suffer if I had never existed. But I like the guy I work for. I like the people I work with. I write stuff that lots of people use. I do it well enough that I can feel decently good about it.<p>And I'm watching all of what I enjoy in software as a career and craft gradually disappear. Upper management are now all True Believer AI zealots who know, just know, that AI is the future and therefore ensure that it is also the present. They've caused nothing but organizational chaos, shoved out knowledgeable people in some misguided effort to remake the company in their image, and replaced them with what are, to me, obvious bullshit artists.<p>Engineering time and effort that a few years ago might have produced value and good experiences for users now produce mediocre "MCPs," used only internally, that turn out even more mediocre code and tests that don't test anything.<p>I don't have nearly the chops or talent you and your peers have. I never could have run with you guys or made the mark on the world that you did. What I do, and the processes I follow, are probably the exact stuff that drove you to retirement.
Still, I enjoy what I do and hate that it's being taken from me and replaced with something I hate, overseen, in my company's case, by vapor merchants pretending to be visionaries/cutting-edge 'thought leaders.'<p>I'm glad some of us got to build things when the inmates ran the asylum, and I regret the money and 'progress' that strangled the life and joy out of it for you.<p>Just an aside: I've really enjoyed everything you've posted on HN and look forward to your comments. Thanks, and cheers.</p>
]]></description><pubDate>Sat, 07 Mar 2026 21:11:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47291537</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=47291537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47291537</guid></item><item><title><![CDATA[New comment by globnomulous in "Tell HN: I'm 60 years old. Claude Code has re-ignited a passion"]]></title><description><![CDATA[
<p>Thanks, I needed this.<p>There doesn't seem to be a place for me in the future of software/tech: I like sitting quietly, alone, solving problems, writing code, and reading it. I like in code much of what I like in art: the fruits of human labor and the results of human ingenuity. Being excited about AI/LLMs makes no sense to people like me. If you're excited because LLMs let you make something, great, good for you. Have fun.<p>If the tools become a mandatory part of the job, I'll change careers. Spending my days talking to chipper robots and describing what I want rather than making it myself sounds unbearable.</p>
]]></description><pubDate>Sat, 07 Mar 2026 20:34:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47291212</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=47291212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47291212</guid></item><item><title><![CDATA[New comment by globnomulous in "The Problem with LLMs"]]></title><description><![CDATA[
<p>> This is totally misleading to anyone with less familiarity with how LLMs work. They are only programs in as much as they perform inference from a fixed, stored, statistical model. It turns out that treating them theoretically in the same way as other computer programs gives a poor representation of their behaviour.<p>Can you share any reading on this?</p>
]]></description><pubDate>Thu, 12 Feb 2026 19:28:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46993780</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46993780</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46993780</guid></item><item><title><![CDATA[New comment by globnomulous in "Task-free intelligence testing of LLMs"]]></title><description><![CDATA[
<p>I'm an LLM naysayer, and even I have no trouble seeing, or accepting, that they're much more than glorified spell checkers.</p>
]]></description><pubDate>Fri, 09 Jan 2026 02:51:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46549562</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46549562</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46549562</guid></item><item><title><![CDATA[New comment by globnomulous in "AI sycophancy panic"]]></title><description><![CDATA[
<p>You know, it's funny. Your comment made me realize something about LLMs:<p>There's a famous line in Hesiod's <i>Theogony</i>. It appears early in the poem during Hesiod's encounter with the Muses on the slopes of Mt. Helicon, when they apparently gave him the gift of song. At this point in his narrative of the encounter, the Muses have just ridiculed shepherds like him ("mere bellies"), and then, while bragging about their great Zeus-given powers -- "we see things that were, things that are, and things that will be" -- they say "we know how to tell lies like the truth; we also know how to say things that are true, when we want to."<p>This is the ancient equivalent of my present-day encounters with the linguistic output of LLMs: what LLMs produce, when they produce language, isn't true or false; it just gives the appearance of truth or falsity -- and sometimes that appearance happens to overlap with statements that would be true or false if they'd been uttered by something with an internal life and a capacity for reasoning.<p>LLMs' linguistic output can have a weird, disorienting, uncanny-valley effect, though. It gives us all the cues, signals, and evidence that normally our brains can reliably, correctly identify as markers of reasoning and thought -- but all the signals and cues are false and all the evidence is faked, and recognizing the illusion can be a really challenging battle against oneself, because the illusion is just too convincing.<p>LLMs basically hijack automatic heuristics and cognitive processes that we can't turn off. As a result, it can be incredibly challenging even to recognize that an LLM-generated sentence that has all the cues of sense has <i>no actual sense at all</i>. The output may have the irresistibly convincing appearance of sense, as it would if it were uttered by a human being, but on closer inspection it turns out to be completely incoherent. And that inspection isn't automatic or always easy.
It can be really challenging, requiring us to fight an uphill battle against our own brains.<p>Hesiod's expression "lies like the truth" captures this for me perfectly.</p>
]]></description><pubDate>Tue, 06 Jan 2026 10:19:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46510661</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46510661</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46510661</guid></item><item><title><![CDATA[New comment by globnomulous in "The suck is why we're here"]]></title><description><![CDATA[
<p>I disagree. When you add an abstraction layer, the user of that layer continues to write code. That's not the case when people rely heavily on LLMs. They're at best reading and tweaking the model's output.<p>That's not the only way to use an LLM. One can instead write a piece of code and then ask the tool for analysis, but that's not the scenario that people like me are criticizing or concerned about -- and it's not how most people imagine LLMs will be used in the future, if models and tools continue to improve. People are predicting that the models will write the software. That's what people like me and the person I agreed with are criticizing and concerned about.<p>I'm uncomfortable with the idea not because it's outside my comfort zone but because people don't understand code they read the way they understand code they write. Writing the code familiarizes the writer with the problem space (the pitfalls, for instance). When you haven't written it, and you've instead just read it, then you haven't worked through the problems. You don't know the problem space or the reasons for the choices that the author made.<p>To put this another way: you can learn to read a language or understand it by ear without learning to speak it. The skills are related, but they're separate. In turn, people acquire and develop the skills they practice: you don't learn to speak by reading. Junior engineers and young people who learn to code with AI, and don't write code themselves, will learn, in essence, how to read but not how to write or 'speak'; they'll learn how to talk to the AI models, and maybe how to read code, but not how to write software.</p>
]]></description><pubDate>Mon, 05 Jan 2026 22:40:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46506135</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46506135</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46506135</guid></item><item><title><![CDATA[New comment by globnomulous in "The suck is why we're here"]]></title><description><![CDATA[
<p>I agree with the person you're answering. LLM-assisted coding is like reading a foreign language with a facing translation: most students who do this will make the mistake of thinking they've translated and understood the original text. They haven't. People are abysmal at maintaining an accurate mental accounting of attribution, authorship, and ownership.</p>
]]></description><pubDate>Sun, 04 Jan 2026 20:05:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46491642</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46491642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46491642</guid></item><item><title><![CDATA[New comment by globnomulous in "PNG in Chrome shows a different image than in Safari or any desktop app"]]></title><description><![CDATA[
<p>I don't follow. Could you explain? I also don't see on the website the text you quoted. (Your comment made me giggle though, which I appreciate.)</p>
]]></description><pubDate>Fri, 02 Jan 2026 17:19:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46467020</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46467020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46467020</guid></item><item><title><![CDATA[New comment by globnomulous in "What you need to know before touching a video file"]]></title><description><![CDATA[
<p>> to fastidious<p>Do you mean "too fussy?"</p>
]]></description><pubDate>Fri, 02 Jan 2026 16:34:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46466467</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46466467</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46466467</guid></item><item><title><![CDATA[New comment by globnomulous in "Karpathy on Programming: “I've never felt this much behind”"]]></title><description><![CDATA[
<p>> This sounds unbearable.<p>I can't see the original post because my browser settings break Twitter (I also haven't liked much of Karpathy's output), but I agree. I call this style of software development 'meeting-based programming,' because that seems to be the mental model that the designers of the tools are pursuing. This probably explains, in part, why c-suite/MBA types are so excited about the tools: meetings are how they think and work.<p>In a way LLMs/chatbots and 'agents' are just the latest phase of a trend that the internet has been encouraging for decades: the elimination of mental privacy. I don't mean 'privacy' in an everyday sense -- i.e. things I keep to myself and don't share. I mean 'privacy' in a more basic sense: private experience -- sitting by oneself; having a mental space that doesn't include anybody else; simply spending time with one's own thoughts.<p>The internet encourages us to direct our thoughts and questions outward: look things up; find out what others have said; go to Wikipedia; etc. This is, I think, horribly corrosive to the very essence of being a thinking, sentient being. It's also unsurprising, I guess. Humans are social animals. We're going to find ourselves easily seduced by anything that lets us replace private experience with social experience. I suppose it was only a matter of time until someone did this with programming tools, too.</p>
]]></description><pubDate>Mon, 29 Dec 2025 22:57:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46427010</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46427010</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46427010</guid></item><item><title><![CDATA[New comment by globnomulous in "KDE Plasma 6.8 Set to Drop X11 Support Completely"]]></title><description><![CDATA[
<p>Goodbye, accessibility.</p>
]]></description><pubDate>Mon, 01 Dec 2025 11:09:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46106023</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46106023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46106023</guid></item><item><title><![CDATA[New comment by globnomulous in "X210Ai is a new motherboard to upgrade ThinkPad X201/200"]]></title><description><![CDATA[
<p>Sorry for my cluelessness, but why is this laptop so popular?</p>
]]></description><pubDate>Mon, 01 Dec 2025 07:48:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46104636</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46104636</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46104636</guid></item><item><title><![CDATA[New comment by globnomulous in "Every mathematician has only a few tricks (2020)"]]></title><description><![CDATA[
<p>Likewise. I don't always do this, but for problems that cost me much time or effort, I like to try to make sure that, if I wanted to reproduce a bug or problem, I'd know exactly how to write it.<p>Writing and understanding working, correct software is, it turns out, a rather different skill from that of writing and understanding broken (or confusing) software. I'd also wager good money that the latter skill directly builds the former.</p>
]]></description><pubDate>Sat, 29 Nov 2025 14:59:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46088033</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46088033</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46088033</guid></item><item><title><![CDATA[New comment by globnomulous in "AI has a deep understanding of how this code works"]]></title><description><![CDATA[
<p>My expectations are those of any reasonable, sensible person who has a modicum of software-development experience and any manners at all.<p>Incidentally, my expectations are also exactly the same as those of every other person who has commented on your PRs and contributions to discussion.<p>My expectations, lastly, are those of someone who evaluates job candidates and casts votes for and against hiring for my team.<p>Your website says repeatedly that you're open to work. Not only would I not hire you; I would do everything in my power to keep you out of my company and off my team. I'd wager good money that many others in this thread would, too.<p>If you have a problem with my expectations, you have a problem not with my expectations but with your own poor social skills and lack of professional judgment.</p>
]]></description><pubDate>Sat, 29 Nov 2025 14:42:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46087908</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46087908</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46087908</guid></item><item><title><![CDATA[New comment by globnomulous in "AI has a deep understanding of how this code works"]]></title><description><![CDATA[
<p>> admitted was somewhat of a PR stunt.<p>You should be blocked, banned, and ignored.<p>> Now, what was your question?<p>Your attitude stinks. So does your complete lack of consideration for others.</p>
]]></description><pubDate>Thu, 27 Nov 2025 15:55:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46070417</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46070417</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46070417</guid></item><item><title><![CDATA[New comment by globnomulous in "AI has a deep understanding of how this code works"]]></title><description><![CDATA[
<p>It's worth asking yourself something: people have written substantial responses to your questions in this thread. Here you answered four paragraphs with two fucking lines referencing and repeating what you've already said. How do you expect someone to react? How can you expect anybody to take seriously anything you say, write, or commit when you obviously have so little ability, or willingness, to engage with others in a manner that shows respect and thought?<p>I really, truly don't understand. This isn't just about manners, mores, or self-reflection. The inability or unwillingness to think about your behavior or its likely reception is stupefying.<p>You need to stop 'contributing' to public projects and stop talking to people in forums until you figure this stuff out.</p>
]]></description><pubDate>Thu, 27 Nov 2025 15:54:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46070394</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46070394</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46070394</guid></item><item><title><![CDATA[New comment by globnomulous in "Migrating the main Zig repository from GitHub to Codeberg"]]></title><description><![CDATA[
<p>This isn't just "making mistakes." It's so profoundly obnoxious that I can't imagine what you've actually been doing during your apparently 30 years of experience as a software developer, such that you somehow didn't understand, or don't, why submitting these PRs is completely unacceptable.<p>The breezy "challenge me on this" and "it's just a proof of concept" remarks are infuriating. Pull requests are not conversation starters. They aren't for promoting something you think people should think about. The self-absorption and self-indulgence beggar belief.<p>Your homepage repeatedly says you're open to work and want someone to hire you. I can't imagine anybody looking at those PRs or your behavior in the discussions and concluding that you'd be a good addition to a team.<p>The cluelessness is mind-boggling.<p>It's so bad that I'm inclined to wonder whether you really are human -- or whether you're someone's stealthy, dishonest LLM experiment.</p>
]]></description><pubDate>Thu, 27 Nov 2025 14:01:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46069320</link><dc:creator>globnomulous</dc:creator><comments>https://news.ycombinator.com/item?id=46069320</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46069320</guid></item></channel></rss>