<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: bunderbunder</title><link>https://news.ycombinator.com/user?id=bunderbunder</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 21:35:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=bunderbunder" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by bunderbunder in "A sufficiently detailed spec is code"]]></title><description><![CDATA[
<p>This is very much my experience from working with outsourced development. Almost by design, they tend to lack domain expertise or an intimate understanding of the cultures and engineering values of the company they're contracted out to.<p>This means that they will very quickly help you discover all the little details that seemed so obvious to you that you didn't even think to mention them, but were nonetheless critical to a successful implementation. The corollary is that the potential ROI of outsourcing is inversely proportional to how many of these little details your project has, and how important they are.<p>So far I've found LLM coding to be much the same. For projects where those details are relatively unimportant, it can save me a bunch of effort. But I would not want to let an LLM build and maintain something like an API or database schema. Doing a good job of those requires too much knowledge of expected usage patterns and too much careful working through of design tradeoffs. And they tend to be incredibly expensive to change after deployment, so it pays to take your time and get your hands dirty.<p>I also kind of hate LLMs for writing tests, for similar reasons. I know many people love them for it because writing tests isn't super happy fun times, but for my part I'm tired of dealing with LLM-generated test suites that are so brittle they actively hinder future development.</p>
]]></description><pubDate>Thu, 19 Mar 2026 15:23:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47441055</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47441055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47441055</guid></item><item><title><![CDATA[New comment by bunderbunder in "The L in "LLM" Stands for Lying"]]></title><description><![CDATA[
<p><a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it" rel="nofollow">https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...</a></p>
]]></description><pubDate>Thu, 05 Mar 2026 18:49:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47265599</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47265599</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47265599</guid></item><item><title><![CDATA[New comment by bunderbunder in "The L in "LLM" Stands for Lying"]]></title><description><![CDATA[
<p>It's also, for example, the studies finding that when companies adopt AI, employees' jobs get worse. More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is <i>not</i> being offset by anything tangible like an increase in pay.<p>$20/month in return for measurable reductions in quality of life is not an amazing deal. It's "Heads I win, tails you lose."<p>Or maybe, if you're thinking of it as an enabler for a side hustle or some other project with a low probability of a high payoff, it can slightly more optimistically be regarded as a moderately expensive lottery ticket.<p>That's not pessimism; it's just a realistic understanding of how the tech industry actually works, informed by decades' worth of experience.</p>
]]></description><pubDate>Thu, 05 Mar 2026 16:32:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47263747</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47263747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47263747</guid></item><item><title><![CDATA[New comment by bunderbunder in "When AI writes the software, who verifies it?"]]></title><description><![CDATA[
<p>I have found that it works well as an open-endedly dynamic process when you are doing the kind of work that the people who came up with Scrum did as their bread and butter: limited-term contract jobs that were small enough to be handled by a single pizza-sized team and whose design challenges mostly don’t stray too far outside the Cynefin clear domain.<p>The less any of those applies, the more costly it is to figure it out as you go along, because accounting for design changes can become something of a game of crack the whip. Iterative design is still important under such circumstances, but it may need to be a more thoughtful form of iteration that’s actively mindful about which kinds of design decisions should be front-loaded and which ones can be delayed.</p>
]]></description><pubDate>Tue, 03 Mar 2026 23:28:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47240623</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47240623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47240623</guid></item><item><title><![CDATA[New comment by bunderbunder in "When AI writes the software, who verifies it?"]]></title><description><![CDATA[
<p>But also, based on what I have heard of their headcount, they are not necessarily saving any money by vibecoding it - it seems like their productivity per programmer is still well within the historical range.<p>That isn’t necessarily a hit against them - they make an LLM coding tool and they should absolutely be dogfooding it as hard as they can. They need to be the ones to figure out how to achieve this sought-after productivity boost. But so far it seems to me like AI coding is more similar to past trends in industry practice (OOP, Scrum, TDD, whatever) than it is different in the only way that’s ever been particularly noteworthy to me: it massively changes where people spend their time, without necessarily living up to the hype about how much gets done in that time.</p>
]]></description><pubDate>Tue, 03 Mar 2026 23:08:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47240413</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47240413</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47240413</guid></item><item><title><![CDATA[New comment by bunderbunder in "Why No AI Games?"]]></title><description><![CDATA[
<p>Just based on skimming the transcript, it sounds like it wasn't a D&D campaign; it was actually a roguelike CRPG that he vibecoded in Claude Code.<p>Still, he mentions some gross things, like how he "got bullied into" adding a game mechanic he didn't want by the LLM. It kept adding it, without being asked, and he finally just got tired of taking it back out again. The mechanic in question was a leveling system, so I imagine the LLM kept adding it because that's such a standard-issue element of dungeon crawler games. Which speaks to my main source of pessimism about AI in games: LLMs have a tendency to want to do standard, middle-of-the-road things, and will tend to fight you every step of the way when you try to involve them in an attempt to do something new and different.<p>But I imagine you'd run into a similar thing with D&D campaigns. Which, if true, raises the question: why would I need an LLM to generate <i>Dragonlance</i> and <i>Forgotten Realms</i> style quests when back issues of <i>Dungeon</i> magazine already give me more of that kind of material than I could reasonably get through in a lifetime, and probably in a much more polished form?</p>
]]></description><pubDate>Tue, 03 Mar 2026 20:35:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47238579</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47238579</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47238579</guid></item><item><title><![CDATA[New comment by bunderbunder in "Why No AI Games?"]]></title><description><![CDATA[
<p>I'm not so sure.<p>In earlier text adventures (e.g., Infocom games), some portion of those constraints were due to the authors failing to anticipate legitimate ways that users would try to phrase things and account for them in the game. But that's not nearly such a problem in anything made since the late '90s, especially if you stick to XYZZY Award winners.<p>The more essential reason for that constraint is that it's just good storytelling. The author of a work of IF has an idea they want to explore. That main idea could be narrative (<i>Photopia</i> or <i>Anchorhead</i>), or it could be a gameplay mechanic (<i>Savoir-Faire</i> or <i>Counterfeit Monkey</i>). But in any case, if your goal is to appreciate the creator's vision, those constraints are <i>critical</i> because they telegraph to you, the player, what you should and should not be exploring.<p>This isn't an idea that's specific to text adventures, either. The creators of <i>Outer Wilds</i> deliberately made areas flat and boring when there wasn't anything there for the player to do to advance the story, specifically because they didn't want players wasting time on exploration that would ultimately prove to be pointless. This is also why open world games that <i>do</i> go for a more uniformly detailed world also need to hand-hold the player and tell them where they need to go every step of the way. Without that, players would tend to get lost, lose their sense of progress, and ultimately end up bored.<p>I think that, because of this dynamic, using AI to flesh out the unimportant bits of the game would be a cardinal game design sin. Making bloat cheap and easy does not make it good. It just makes more of it.</p>
]]></description><pubDate>Tue, 03 Mar 2026 17:39:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47235880</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47235880</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47235880</guid></item><item><title><![CDATA[New comment by bunderbunder in "Bus stop balancing is fast, cheap, and effective"]]></title><description><![CDATA[
<p>I also live in Chicago. The closest bus stop to my house is 2 blocks away, and the 2nd closest stop on that same line is 3 blocks away - just one block further in the direction I’m going.<p>I simply don’t believe that eliminating that closest stop would worsen my commute. When I’m leaving home, I would walk a block further, but probably 80+% of the time it would not increase the time I spend out in the elements, because I’d just replace time standing at the bus stop with time walking to the next one. The only time it would hurt me is on the rare occasion that the bus passes me while I’m walking that extra block. (Pessimistically assuming 2 minutes to walk one block, and with buses coming every 10 minutes on average, is how I get 80%.) But I bet doing that all up and down the route would make the bus much more predictable. That closest stop is within the distance that cars back up from the traffic light at the next intersection when there’s traffic, and when the bus stops at my intersection it can often get pinned in the stop for a while when motorists aren’t in the mood to let it re-enter traffic. Multiply that phenomenon by, say, 20 extra stops and you get some pretty unreliable service for people trying to get to work in the morning. I bet most of us would happily walk an extra block if it means we no longer have to leave for work half an hour early. Two minutes of extra walking on either end adds up to 4 minutes of “wasted” walking time (and I’m not sure I count walking as wasted time anyway - physical activity is good for me), which is a lot less than 30 minutes wasted padding my commute to account for less reliable service.<p>And then when I’m coming home I get off at the stop that’s a block further away anyway, because there’s a light at that intersection but not at the one where the closer stop lies. I can easily spend more time waiting for a gap in traffic large enough to cross a busy street during the evening rush than it takes to walk that extra block.</p>
]]></description><pubDate>Thu, 26 Feb 2026 16:19:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47168122</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47168122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47168122</guid></item><item><title><![CDATA[New comment by bunderbunder in "A beginner's guide to split keyboards"]]></title><description><![CDATA[
<p>Try things and see for yourself. I know that’s not super satisfying advice, but everyone has a different experience with these things so there are no easy answers.<p>Start small. Don’t feel pressured to dive straight into the $300 keyboards. I have a fancy custom mechanical keyboard myself, but that’s because a few years back I decided it would be fun to get into using a more hackable keyboard. For a very long time I was more than content with the (sadly now discontinued) Microsoft Sculpt keyboard, which was one of the least expensive options.</p>
]]></description><pubDate>Fri, 20 Feb 2026 17:04:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47090655</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47090655</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47090655</guid></item><item><title><![CDATA[New comment by bunderbunder in "A beginner's guide to split keyboards"]]></title><description><![CDATA[
<p>I’ve been a consistent split keyboard user for a quarter century now. My current daily driver is a Redox, which uses a columnar layout. I got into them when I first started having problems with tendinitis. I feel like they help, but I’m not sure what the science says about it.<p>Anyway, I’ve always hated that diagram because it’s so obviously hyperbolic. I also use standard keyboards on a daily basis, and while there are some posture differences, the bending to make hands perpendicular to the keyboard just does not happen. Comfortably placing your fingers on the home row requires angling your hands a bit because the fingers are all different lengths. Are there some posture differences? Sure. But from what I’ve seen they’re really quite minor.<p>What I would guess makes more of a difference is tenting. Which is admittedly only possible with a split design. But also, not all split keyboards do tent.<p>Also, and this one might be specific to my particular problem, moving keys the thumb strikes to a position that it can reach with less stretching has helped a lot. (I suspect that the space bar in particular might have been the source of most of my woes.) And that’s another variable that’s highly correlated with - but still not the same as - the keyboard being split.</p>
]]></description><pubDate>Fri, 20 Feb 2026 15:21:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47089121</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47089121</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47089121</guid></item><item><title><![CDATA[New comment by bunderbunder in "Closing this as we are no longer pursuing Swift adoption"]]></title><description><![CDATA[
<p>That’s the main use case I can think of. It’s possible with other languages but may require more hacking.<p>I like monkey patching for testing legacy code. I like it less as a thing in production code because it can become a security and reliability problem.</p>
]]></description><pubDate>Thu, 19 Feb 2026 19:33:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47078079</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47078079</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47078079</guid></item><item><title><![CDATA[New comment by bunderbunder in "Closing this as we are no longer pursuing Swift adoption"]]></title><description><![CDATA[
<p>Assuming you mean C (C++ is an 80s child), that’s trivially true because devices with an ObjC SDK are a strict subset of devices that are running on C.</p>
]]></description><pubDate>Thu, 19 Feb 2026 02:54:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47069316</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47069316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47069316</guid></item><item><title><![CDATA[New comment by bunderbunder in "Closing this as we are no longer pursuing Swift adoption"]]></title><description><![CDATA[
<p>In some ways ObjC’s and the NEXTSTEP API’s staying power is more impressive because they survived the failure of their relatively small patron organization. POSIX and C++ were developed at and supported by tech titans - the 1970s and 1980s equivalents of FAANG. Meanwhile back at the turn of the century we had all witnessed the demise of NeXT and many of us were anticipating the demise of Apple, and there was no particularly strong reason to believe that a union of the two would fare any better, let alone grow to become one of the A’s in FAANG.<p>I actually suspect that ObjC and the NeXT APIs played a big part in that success. I know they’ve fallen out of favor now, and for reasons I have to assume are good. But back in the early 2000s, the difference in how quickly I could develop a good GUI for OS X compared to what I was used to on Windows and GNOME was life changing. It attracted a bunch of developers to the platform, not just me, which spurred an accumulation of applications with noticeably better UX that, in turn, helped fuel Apple’s consumer sentiment revival.</p>
]]></description><pubDate>Thu, 19 Feb 2026 02:49:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47069279</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47069279</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47069279</guid></item><item><title><![CDATA[New comment by bunderbunder in "Closing this as we are no longer pursuing Swift adoption"]]></title><description><![CDATA[
<p>It is, and that’s part of what I loved about it. But it’s also the kind of trick that can quickly become a source of chaos on a project with many contributors and a lot of contributor churn, like we tend to get nowadays. Because - and this was the real point of Dijkstra’s famous paper; GOTO was just the most salient concrete example at the time - control flow mechanisms tend to be inscrutable in proportion to their power.<p>And, much like what happened to GOTO 40 years ago, language designers have invented less powerful language features that are perfectly acceptable 90% solutions. E.g., nowadays I’d generally pick higher-order functions or the strategy pattern over method swizzling because they’re more amenable to static analysis and easier to trace with typical IDE tooling.</p>
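The tradeoff the comment describes - runtime method replacement versus an explicit strategy parameter - can be sketched in a few lines. This is a minimal illustration in Python rather than Objective-C swizzling proper, and all the names (<i>Greeter</i>, <i>StrategyGreeter</i>, <i>loud_greet</i>) are made up for the example:

```python
# Monkey-patching / swizzling style: replace a method on the class at
# runtime. Every call site silently changes behavior, and nothing in
# the source at those call sites hints that this happened.
class Greeter:
    def greet(self, name):
        return f"Hello, {name}!"

def loud_greet(self, name):
    return f"HELLO, {name.upper()}!"

Greeter.greet = loud_greet  # invisible to "find usages" and most static analysis

# Higher-order-function / strategy style: the variation point is an
# explicit constructor argument, so the alternate behavior is visible
# right where the object is built and traceable with ordinary tooling.
class StrategyGreeter:
    def __init__(self, format_greeting=None):
        self.format_greeting = format_greeting or (lambda name: f"Hello, {name}!")

    def greet(self, name):
        return self.format_greeting(name)

quiet = StrategyGreeter()
loud = StrategyGreeter(lambda name: f"HELLO, {name.upper()}!")
```

Both achieve the same end, but the second version keeps the "less powerful, 90% acceptable" property the comment is pointing at: the override is local, explicit, and scoped to one instance rather than mutating the class for everyone.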
]]></description><pubDate>Thu, 19 Feb 2026 02:31:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47069176</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47069176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47069176</guid></item><item><title><![CDATA[New comment by bunderbunder in "Closing this as we are no longer pursuing Swift adoption"]]></title><description><![CDATA[
<p>For what it’s worth, ObjC is not Apple’s brainchild. It just came along for the ride when they chose NEXTSTEP as the basis for Mac OS X.<p>I haven’t used it in a couple decades, but I do remember it fondly. I also suspect I’d hate it nowadays. Its roots are in a language that seemed revolutionary in the 80s and 90s - Smalltalk - and the melding of it with C also seemed revolutionary at the time. But the very same features that made it great then probably (just speculating - again I haven’t used it in a couple decades) aren’t so great now because a different evolutionary tree leapfrogged ahead of it. So most investment went into developing different solutions to the same problems, and ObjC, like Smalltalk, ends up being a weird anachronism that doesn’t play so nicely with modern tooling.</p>
]]></description><pubDate>Thu, 19 Feb 2026 00:55:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47068550</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47068550</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47068550</guid></item><item><title><![CDATA[New comment by bunderbunder in "AI adoption and Solow's productivity paradox"]]></title><description><![CDATA[
<p>The process’s entire purpose is to exist and be followed, so that when they need to they can point to it and say, “We followed the process.”</p>
]]></description><pubDate>Thu, 19 Feb 2026 00:43:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47068480</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47068480</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47068480</guid></item><item><title><![CDATA[New comment by bunderbunder in "AI adoption and Solow's productivity paradox"]]></title><description><![CDATA[
<p>And, hear me out here - perhaps for the sake of morale it makes sense to leave a smidge of the part of the job that actually attracts people to this profession in the first place on their plates. Otherwise we may find that, after the novelty wears off, we’re left with a net productivity dropoff because there’s not as much left to keep people motivated to do a good job of the remaining work.</p>
]]></description><pubDate>Thu, 19 Feb 2026 00:39:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47068444</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47068444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47068444</guid></item><item><title><![CDATA[New comment by bunderbunder in "Stephen Colbert going down swinging"]]></title><description><![CDATA[
<p>There is also an FCC angle that is relevant in that it concerns broadcast communications. And a “Streisand Effect” aspect that is perennially interesting to many hackers. And, relatedly, an angle concerning how newer media (YouTube) alters the communication landscape.</p>
]]></description><pubDate>Thu, 19 Feb 2026 00:26:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47068356</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47068356</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47068356</guid></item><item><title><![CDATA[New comment by bunderbunder in "Stephen Colbert going down swinging"]]></title><description><![CDATA[
<p>I would guess it depends on how thoroughly the organization doing the censorship can exert control over information.<p>For example, I’ve been pretty impressed by the extent to which the Chinese government is able to influence public discourse within its borders when it wants to. But China is also home to over 90% of the world’s Chinese speakers, and it has its own domestic social media industry with very few users from outside the country, the Great Firewall, less of a culture of anti-authoritarianism, etc.<p>I’m not sure how feasible it would be for the US to get to a comparable position. The US is nowhere close to being a supermajority of the world’s English speakers, and it might be hard for the government to impose an isolationist policy on the country’s tech industry without inciting a revolt by its tech oligarchs.</p>
]]></description><pubDate>Wed, 18 Feb 2026 18:41:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47064547</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47064547</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47064547</guid></item><item><title><![CDATA[New comment by bunderbunder in "Stephen Colbert going down swinging"]]></title><description><![CDATA[
<p>Kind of fascinating that the first couple comments to be posted both start with a declaration of the author’s opinion of Colbert.<p>I read it as a pretty straightforward acknowledgment that we’ve reached a point in public discourse where people pay at least as much attention to who is making a point as we do to the actual point being made.<p>I wonder if finding common ground is even possible as long as that kind of tacit <i>ad hominem</i> remains baked into the way we think about public discourse.</p>
]]></description><pubDate>Wed, 18 Feb 2026 16:04:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47062482</link><dc:creator>bunderbunder</dc:creator><comments>https://news.ycombinator.com/item?id=47062482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47062482</guid></item></channel></rss>