<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: lamasery</title><link>https://news.ycombinator.com/user?id=lamasery</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 01:36:49 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=lamasery" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by lamasery in "Good Taste the Only Real Moat Left"]]></title><description><![CDATA[
<p>Gah, you're right of course. I was thinking of Fjällräven in particular (not that that's the only one) and got it mixed up.</p>
]]></description><pubDate>Tue, 07 Apr 2026 18:30:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47679416</link><dc:creator>lamasery</dc:creator><comments>https://news.ycombinator.com/item?id=47679416</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47679416</guid></item><item><title><![CDATA[New comment by lamasery in "Taste in the age of AI and LLMs"]]></title><description><![CDATA[
<p>The joke is that the person "saying" this is wearing their "I'm a rational, independent thinker!" tech uniform (expensive Nordic outdoors wear, so practical, so smart, so active, Vimes' boot theory, etc., not like those <i>clowns</i> in business wear, I'm interested in <i>practicality</i> not signaling, that's why I'm spending so much money signaling so hard about how rational I am).<p>They are visibly displaying a complete lack of personal taste, instead wearing the SV equivalent of an outdated-cut, off-the-rack navy blue (or even black, LOL) business suit.<p>The joke is that the message "good taste is what matters now" is being delivered by someone who, in a specifically SV sort of way, apparently has a deficit of good taste.</p>
]]></description><pubDate>Tue, 07 Apr 2026 17:27:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47678616</link><dc:creator>lamasery</dc:creator><comments>https://news.ycombinator.com/item?id=47678616</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47678616</guid></item><item><title><![CDATA[New comment by lamasery in "Taste in the age of AI and LLMs"]]></title><description><![CDATA[
<p>The "AI" tools I've got at work (and am mandated to use, complete with usage tracking) aren't a wide-open field of options like what someone experimenting on their own time might have, so I'm stuck with whatever they give me. The projects are brown-field, integrate with obscure industry-specific systems, are heavy with access-control blockers, are already in-flight with near-term feature completion expectations that leave little time for going back and filling in the stuff LLMs need to operate well (extensive test suites, say), and <i>must not</i> wreck the various databases they need to interact with, most of which exist only as a production instance.<p>I'm sure I could hack together some simple SaaS products with goals and features I'm defining myself in a weekend with these tools all on my own (no communication/coordination overhead, too!), though. I mean, for an awful lot of potential products I could have done that with just Rails and some gems and no LLM any time I liked over the last 15+ years, but now I could do it in TypeScript or Rust or Go etc. with LLMs, for whatever that's worth. At work, with totally different constraints, the results are far less dramatic and I can't even feasibly <i>attempt</i> to apply some of the (reputedly) most-productive patterns of working with these things.<p>Meanwhile, LLMs are making all the code-adjacent stuff (slide decks, diagrams, ticket trackers) incredibly spammy.<p>[EDIT] Actually, I think the question "why didn't Rails' extreme productivity boost in greenfield tiny-team or solo projects translate into vastly-more-productive development across all sectors where it might have been relevant, and how will LLMs do significantly better than that?" is one I'd like to see, say, a panel of learned LLM boosters address. Not in a shitty troll sort of way; I mean their exploration of why it might play out differently would actually be interesting to me.</p>
]]></description><pubDate>Tue, 07 Apr 2026 17:16:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47678463</link><dc:creator>lamasery</dc:creator><comments>https://news.ycombinator.com/item?id=47678463</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47678463</guid></item><item><title><![CDATA[New comment by lamasery in "Are We Idiocracy Yet?"]]></title><description><![CDATA[
<p>I don't read San Junipero as happy.<p>Black Mirror likes to show us the most-important thing as a kind of punctuation or statement of message even when it's not what <i>the episode has encouraged us to believe is the most important thing</i> (see also: the focus of the camera in the very first episode during A Certain Event—we've been primed for a grand, disgusting spectacle, and the camera chooses to show us <i>none</i> of that, and instead shows us something much more disgusting: the faces of people watching it, which is the <i>actual</i> show, and the point of the "artist" in the episode).<p>San Junipero ends by showing us the entirety of what is actually happening, for real, which is an automated computer-maintenance system keeping itself running. It's highlighting the unreality of the virtual world, suggesting, I think, that even the apparent <i>experiences</i> we've been watching aren't happening in any real sense.<p>What's really happening? 100% of what's really happening? As you see. A computer system maintaining itself, to keep electricity flowing through its various circuits. Doing what? Doesn't matter, could be endlessly calculating digits of pi, that'd be just as much a "real" experience as what you've been so invested in. This is all that's really going on.</p>
]]></description><pubDate>Tue, 07 Apr 2026 16:31:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47677843</link><dc:creator>lamasery</dc:creator><comments>https://news.ycombinator.com/item?id=47677843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47677843</guid></item><item><title><![CDATA[New comment by lamasery in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>One of the ways I think the effect of LLMs on productivity (in software, anyway) will be tempered is that the work required to use them effectively & safely is all work we were <i>supposed</i> to be doing, but largely <i>were not</i>, at least not as completely as we aspired to. Exactly what you mention: much more detailed and thought-through feature requests, more-complete and higher-quality test suites, large and high-quality test datasets, documentation, thorough code review, all that stuff, <i>all</i> of it falling well under what we "should" have been doing at every place I've ever worked, in terms of both quality and quantity.<p>They won't accelerate software development to the degree naïve analysis might suggest without <i>significantly</i> harming quality and reliability unless we start doing all those things we've been neglecting much better, which adds more work... with the result, I think, that our diverging paths here are "much worse software, made faster" or "software at least as good, with <i>better</i> supporting artifacts, but barely, if at all, faster to develop."</p>
]]></description><pubDate>Tue, 07 Apr 2026 14:45:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47676247</link><dc:creator>lamasery</dc:creator><comments>https://news.ycombinator.com/item?id=47676247</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47676247</guid></item><item><title><![CDATA[New comment by lamasery in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>That quadrant is where basically all "Western" mainstream academia sits, and has for quite a long time, and they write an awful lot.<p>I am a little surprised that the influence of online "influencer"-speak and marketing, being so voluminous and so evident in the things' writing styles, hasn't dragged them in other directions, though. Nor has the enormous amount of socially authoritarian social media posting. I suppose the former is so empty of actual philosophical content (or, indeed, anything of substance) that it might have little effect, but the latter... that's weird. Maybe they're down-ranking by tone (angrier = lower-rank), which would sharply elevate academic-style writing, assuring a tendency toward economically-left liberalism.</p>
]]></description><pubDate>Tue, 07 Apr 2026 14:32:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47676024</link><dc:creator>lamasery</dc:creator><comments>https://news.ycombinator.com/item?id=47676024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47676024</guid></item><item><title><![CDATA[New comment by lamasery in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>A ton of "incorrect" comma usage isn't even (historically) wrong, actually; it's just currently unfashionable.<p>There was a reaction in the last century against poor writers with poor taste overusing punctuation and writing ugly, long sentences. The result was stern advice to students to eliminate punctuation and cut sentences up into tiny bits. These same students came out of this process believing this was <i>correct</i> writing, not a straitjacket put on them to keep them from hurting themselves. They unthinkingly cite Hemingway and borrow his clout, I suppose judging almost all writing before Hemingway, and most after him up until the '80s or '90s, as "bad" even when it's the work of masters. They blame the author when their stunted literacy (learning to write can hardly be separated from learning to read, at least at the more-advanced end of "to read") leaves them, as adults, struggling with texts once meant for children.</p>
]]></description><pubDate>Tue, 07 Apr 2026 14:22:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47675865</link><dc:creator>lamasery</dc:creator><comments>https://news.ycombinator.com/item?id=47675865</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675865</guid></item><item><title><![CDATA[New comment by lamasery in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>I think you're gonna struggle to find companies that aren't infested with this kind of thing.<p>Observing the effect of LLMs on the "business side" of things, I'm increasingly thinking of these as a kind of infection against which the MBA set and their acolytes have no immune response, and I think it's going to eat a <i>large</i> proportion of the benefit of LLMs to most businesses (possibly overwhelming it and actually harming productivity, which will depend on how much better these tools get).<p>LLMs are awesome at bloating your slide decks while making them really slick and complete-looking. They're great at suggesting an entire set of features on a ticket you've just barely started writing... but did you actually want all those? You end up with redundant or in-context-gibberish features that leave the person actually doing the work tracking down WTF actually matters. They are <i>adding</i> overhead to communication, so far, not just by puffing up and padding language (which isn't great either) but by adding <i>noise "content"</i> that can't be stripped out without talking to the person who created it and making sure that was actually just AI bullshit and not something they really needed; that is, you can't just do the "LLM, summarize this" trick, because the author used an LLM to plan it, too, not just to pad-out and gussy-up something they actually thought through and wrote.<p>LLMs are letting people present very convincingly as having a more-complete understanding of what's going on than they really do, in ways that are messing up productive work; I'm not sure business folks are going to be generally capable of tamping this down, because it is <i>so</i> in-line with the way they already operate (but on speed), and helps them so very much to look good to one another while saving tons of time.
<p>This isn't just the MBA set I accuse above, either; I'm noticing that this improbably-complete deck communication upward is becoming necessary to look competent (and to ladder-climb) as an IC.<p>Like, I'm only starting to think this through and really observe what's going on through this lens, as I've only noticed it in the last few weeks, but the more I see the more alarming this is. I think this is going to be a little like the largely-wasteful "legibility" obsession of upper management: something enabled by computerization that they find <i>irresistible</i> and are pretty bad at employing judiciously and effectively, but probably a lot worse in terms of harm to productivity, and directly affecting and changing the behavior of far more layers of an organization. They never (businesses as a whole, to anthropomorphize a bit) gained wisdom with their new powers to burn resources chasing legibility, and this is starting to look like another thing they just <i>will not</i> be able to use (internally! I don't even mean for actually producing external-facing results!) with restraint and taste.</p>
]]></description><pubDate>Tue, 07 Apr 2026 14:05:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47675595</link><dc:creator>lamasery</dc:creator><comments>https://news.ycombinator.com/item?id=47675595</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675595</guid></item></channel></rss>