<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: libraryofbabel</title><link>https://news.ycombinator.com/user?id=libraryofbabel</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 12:54:22 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=libraryofbabel" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by libraryofbabel in "How a subsea cable is repaired (2021)"]]></title><description><![CDATA[
<p>Stephenson’s piece is a classic, but it was written in 1996, when things were very different in the tech industry and geopolitically. Much more up to date (and with an explicit debt to Stephenson) is Samanth Subramanian, <i>The Web Beneath The Waves: The Fragile Cables that Connect our World</i>. Well worth a read to see what’s changed since Stephenson.</p>
]]></description><pubDate>Tue, 21 Apr 2026 10:20:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47846867</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47846867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47846867</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Issue: Claude Code is unusable for complex engineering tasks with Feb updates"]]></title><description><![CDATA[
<p>That's interesting research, but I think a more important reason that you don't have access to them (not even via the bare Anthropic api) is to prevent distillation of the model by competitors (using the output of Anthropic's model to help train a new model).</p>
]]></description><pubDate>Tue, 07 Apr 2026 03:55:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670584</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47670584</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670584</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Eight years of wanting, three months of building with AI"]]></title><description><![CDATA[
<p>Agree. This is such a good, balanced article. The only things that still make the insights difficult to apply to professional software development are that this was greenfield work and it was a solo project. But that’s hardly the author’s fault. It would, however, be fantastic to see more articles like this about how to go all in on AI tools for brownfield projects involving more than one person.<p>One thing I will add: I actually don’t think it’s wrong to start out building a vibe-coded spaghetti mess for a project like this… <i>provided</i> you see it as a prototype you’re going to learn from and then throw away. A throwaway prototype is immensely useful because it helps you figure out what you want to build in the first place, before you step down a level and focus on closely guiding the agent to actually build it.<p>The author’s mistake was that he thought the horrible prototype would evolve into the real thing. Of course it could not. But I suspect that the author’s final results when he did start afresh and build with closer attention to architecture were <i>much</i> better, because he had learned more about the requirements for what he wanted to build from that first attempt.</p>
]]></description><pubDate>Sun, 05 Apr 2026 16:09:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47650843</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47650843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47650843</guid></item><item><title><![CDATA[New comment by libraryofbabel in "The revenge of the data scientist"]]></title><description><![CDATA[
<p>I agree with your take that there isn’t a lot of specialist work for data scientists to do with off-the-shelf LLMs that can’t be done by an engineer. As an AI-aware software engineer myself… this stuff wasn’t that hard to pick up. Even a lot of the work on the evals side (creating an LLM judge etc.) isn’t that hard and doesn’t require serious ML or stats.<p>But aren’t there still plenty of opportunities for building ML models beyond LLMs, albeit a bit less sexy now? It’s not like you can run a business process like (say) Airbnb’s search rankings or Uber’s driver matching algorithms on an LLM; you need to build a custom model for that. Or am I missing something here? Or is the point that those opportunities are still there, but the pond has shrunk because so much new work is now LLM-related? I buy that.</p>
]]></description><pubDate>Wed, 01 Apr 2026 23:40:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47608113</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47608113</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47608113</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Leviathan (1651)"]]></title><description><![CDATA[
<p>Oh for sure, both thinkers were products of the English Civil War and its aftermath (and see my comment below about reading Quentin Skinner for all the context on Hobbes). I’d add that Locke (who was writing later than Hobbes) was all wrapped up in the 1688 “Glorious Revolution” too.<p>But some works transcend the specific details of their historical origins and authorship and contain ideas that echo down the centuries. Locke’s ideas were instrumental in founding the United States and feed into much of modern liberalism. And I can read Hobbes here today in the 21st century and still find the pessimistic core of his book powerful and relevant, even while ignoring much of the book because it’s full of the parochial concerns of 17th century England. That was really what I was getting at: not “this is the exact meaning of these works in the 17th century”, but “here is the tension of ideas these books bequeathed to us.”</p>
]]></description><pubDate>Wed, 18 Mar 2026 21:16:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47431540</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47431540</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47431540</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Leviathan (1651)"]]></title><description><![CDATA[
<p>GP here, I agree with you, my characterizations were both pretty casual to the point of flippancy. I could write y’all a deeper essay on this stuff, but hey, I have LLMs to herd, the 17th century wasn’t my period anyway, and there is already a massive amount of insightful writing about these two thinkers to dive into.<p>I would say Hobbes in particular is a complex and difficult and frankly eccentric thinker; don’t make the mistake of believing you understand him; he is <i>weird</i>. If you really want to grok the guy in the context of his culture and historical moment, you should just read Quentin Skinner. That’s hardcore intellectual history though; for the basics I’d just go for the clear and brief and informative Oxford <i>Hobbes: A Very Short Introduction</i>.</p>
]]></description><pubDate>Wed, 18 Mar 2026 12:06:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47424657</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47424657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47424657</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Leviathan (1651)"]]></title><description><![CDATA[
<p>Just because I prefer Locke to Hobbes if you forced me to choose doesn't mean I'm some sort of anti-regulation libertarian. Far from it. But if you actually read Hobbes you will see that:<p>* He thinks everyone should be compelled to worship in the state-sanctioned religion.<p>* Censorship of publications, teaching, etc. is necessary because ideas can be dangerous.<p>* Separation of powers (e.g. between executive, legislature, judiciary) is bad; he wants a single unitary sovereign with unlimited power.<p>* The sovereign is above the law.<p>* Resisting a tyrannical sovereign is bad.<p>...and that's why I'd pick Locke over Hobbes. And I think most of us would too.</p>
]]></description><pubDate>Wed, 18 Mar 2026 07:05:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47422450</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47422450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47422450</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Leviathan (1651)"]]></title><description><![CDATA[
<p>Oh totally. I actually don’t like Locke’s position much either, he’s too libertarian for my taste (I would like the state to provide healthcare &c &c). But if I had to choose I’d choose Locke over Hobbes. Hobbes is… real dark.</p>
]]></description><pubDate>Wed, 18 Mar 2026 05:04:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47421773</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47421773</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47421773</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Leviathan (1651)"]]></title><description><![CDATA[
<p>As an ex historian I love how this famous 350yo work of political philosophy is just sitting at #7 on HN with absolutely no context on why it was submitted.<p>The great debate of political philosophy coming out of the 17th century was between Hobbes (anarchy is horrible, humans aren’t nice to each other, best to give up your freedoms to a strong sovereign/state for protection) and Locke (liberty is best, people are reasonable, limit government). I will say that like most of us I probably side more with Locke but as a pessimist about human nature I find Hobbes’s argument fascinating too.</p>
]]></description><pubDate>Wed, 18 Mar 2026 04:55:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47421727</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47421727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47421727</guid></item><item><title><![CDATA[New comment by libraryofbabel in "A Plain Anabaptist Story: The Hutterites"]]></title><description><![CDATA[
<p>The Munster siege is also a centerpiece of the novel “Q”, which is well worth a read, especially if you enjoy all things Anabaptist.</p>
]]></description><pubDate>Mon, 16 Mar 2026 08:11:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47396285</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47396285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47396285</guid></item><item><title><![CDATA[New comment by libraryofbabel in "LLM Architecture Gallery"]]></title><description><![CDATA[
<p>Thanks for the note about Qwen3.5. I should keep up with this more. If only it were more relevant to my day-to-day work with LLMs!<p>I did consider MoEs but decided (pretty arbitrarily) that I wasn’t going to count them as a truly fundamental change. But I agree, they’re pretty important. There’s also RoPE, perhaps slightly less of a big deal but still a big difference from the earlier models. And of course lots of brilliant inference tricks, like speculative decoding, that have helped make big models more usable.</p>
]]></description><pubDate>Mon, 16 Mar 2026 05:58:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47395592</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47395592</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47395592</guid></item><item><title><![CDATA[New comment by libraryofbabel in "LLM Architecture Gallery"]]></title><description><![CDATA[
<p>This is great - always worth reading anything from Sebastian. I would also <i>highly</i> recommend his Build an LLM From Scratch book. I feel like I didn’t really understand the transformer mechanism until I worked through that book.<p>On the LLM Architecture Gallery, it’s interesting to see the variations between models, but I think the 30,000ft view of this is that in the seven years since GPT-2 there have been a lot of improvements to LLM architecture but no <i>fundamental</i> innovations in that area. The best open-weight models today still look a lot like GPT-2 if you zoom out: it’s a bunch of attention layers and feed-forward layers stacked up.<p>Another way of putting this is that the astonishing improvements in the capabilities of LLMs we’ve seen over the last seven years have come mostly from scaling up and, critically, from new <i>training</i> methods like RLVR, which is responsible for coding agents going from barely working to amazing in the last year.<p>That’s not to say that architectures aren’t interesting or important, or that the improvements aren’t useful, but it is a little bit of a surprise - even though it shouldn’t be at this point, because it’s probably just a version of the Bitter Lesson.</p>
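<p>To make that "zoomed out" skeleton concrete, here is a minimal numpy sketch (my own illustration, not code from the book or the gallery): single-head attention alternating with a feed-forward layer, each wrapped in a residual connection. Layer norms, causal masking, multi-head splitting, and embeddings are all omitted for brevity.</p>

```python
import numpy as np

def attention(x, Wq, Wk, Wv):
    # Single-head self-attention: every position attends to every other.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    return (w / w.sum(axis=-1, keepdims=True)) @ v

def ffn(x, W1, W2):
    # Position-wise feed-forward (ReLU here; modern models use SwiGLU etc.).
    return np.maximum(0.0, x @ W1) @ W2

def transformer(x, layers):
    # The GPT-2-shaped skeleton: stacked attention + feed-forward blocks,
    # each with a residual connection (layer norms omitted for brevity).
    for Wq, Wk, Wv, W1, W2 in layers:
        x = x + attention(x, Wq, Wk, Wv)
        x = x + ffn(x, W1, W2)
    return x
```

<p>Most of the per-model variation the gallery catalogs lives inside those two functions; the outer stack-of-blocks loop has barely changed since GPT-2.</p>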
]]></description><pubDate>Mon, 16 Mar 2026 00:08:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47393509</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47393509</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47393509</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Yann LeCun raises $1B to build AI that understands the physical world"]]></title><description><![CDATA[
<p>Thanks for saying this. It never ceases to amaze me how many people still talk about LLMs like it’s 2023, completely ignoring the RLVR revolution that gave us models like Opus that can one-shot huge chunks of works-first-time code for novel use cases. Modern LLMs aren’t <i>just</i> trained to guess the next token, they are trained to <i>solve tasks</i>.</p>
]]></description><pubDate>Tue, 10 Mar 2026 20:11:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47328212</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47328212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47328212</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Self-Portrait by Ernst Mach (1886)"]]></title><description><![CDATA[
<p>The article mentions Mach numbers, but it leaves out what is most interesting about Mach’s place in the history of science, which is as a bridge to Einstein and General Relativity. Essentially Einstein read Mach and took a bunch of mind-bendingly profound but vague philosophical ideas like Mach’s Principle[0] and put together General Relativity out of it. And this self portrait gives that side of Mach too - the philosopher obsessed with phenomenology and how local perception relates to the large scale universe out there.<p>[0] <a href="https://en.wikipedia.org/wiki/Mach%27s_principle" rel="nofollow">https://en.wikipedia.org/wiki/Mach%27s_principle</a></p>
]]></description><pubDate>Sat, 07 Mar 2026 16:53:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47289297</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47289297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47289297</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Tell HN: I'm 60 years old. Claude Code has ignited a passion again"]]></title><description><![CDATA[
<p>I think your comment really captures some of the reasons behind the differences in people’s reactions to Claude pretty well.<p>I will add though, on 2 and 3: during most of the <i>coding</i> I do in my day job as a staff engineer, it’s pretty rare for me to encounter deeply interesting puzzles and really interesting things to learn. It’s not like I’m writing a compiler or an OS kernel or something; this is web dev and infra at a mid-sized company. For 95% of the coding tasks I do, I’ve already seen some variation before, and they are boring. It’s nice to have Claude power through them.<p>On system design and architecture, the problems still tend to be a bit more novel. I still learn things there. Claude is helpful, but not as helpful as it is for the code.<p>I do get the sense that some folks enjoy solving variations of familiar programming puzzles over and over again, and Claude kills that for them. That’s not me at all. I like novelty and I hate solving the same thing twice. Different tastes, I guess.</p>
]]></description><pubDate>Sat, 07 Mar 2026 01:46:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47283544</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47283544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47283544</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Tech employment now significantly worse than the 2008 or 2020 recessions"]]></title><description><![CDATA[
<p>> In my experience, tech employment is incredibly bimodal right now. Top candidates are commanding higher salaries than ever, but an "average" developer is going to have an extremely hard time finding a position.<p>That sounds good for many of us (and don’t we all like to think we’re top candidates here on HN…), but is there any data to back this up? Or is it just anecdata? (Not to dismiss anecdata - it’s still useful info.)</p>
]]></description><pubDate>Fri, 06 Mar 2026 20:35:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47280746</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47280746</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47280746</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Speculative Speculative Decoding (SSD)"]]></title><description><![CDATA[
<p>This is interesting stuff. I wonder if these sorts of tricks are already in use at the big labs.<p>Incidentally, I would recommend trying to implement speculative decoding yourself if you <i>really</i> want to understand LLM inference internals (that, and KV caching of course). I tried it over the Christmas holidays and it was a wonderful learning experience. (And hard work, especially because I forced myself to do it by hand, without coding agent assistance.)</p>
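<p>For anyone tempted to try the same exercise, the core accept/reject loop is small enough to sketch in toy form. This is my own illustrative Python, not tied to any framework; two hardcoded distributions stand in for the draft and target models:</p>

```python
import random

VOCAB = 4  # toy vocabulary of 4 token ids

def draft_probs(context):
    # Stand-in for the small, fast draft model (hardcoded for illustration).
    return [0.4, 0.3, 0.2, 0.1]

def target_probs(context):
    # Stand-in for the big model we ultimately want to sample from.
    return [0.25, 0.25, 0.25, 0.25]

def speculative_step(context, k=3, rng=random):
    # 1. Draft k tokens cheaply with the small model.
    drafted = []
    for _ in range(k):
        q = draft_probs(context + drafted)
        drafted.append(rng.choices(range(VOCAB), weights=q)[0])
    # 2. Verify: accept token t with prob min(1, p[t]/q[t]); on the first
    #    rejection, resample from the residual max(0, p - q) and stop.
    #    (A real implementation scores all k drafts in one batched forward
    #    pass of the target model - that is where the speedup comes from -
    #    and samples one bonus token when every draft is accepted.)
    accepted = []
    for tok in drafted:
        ctx = context + accepted
        p, q = target_probs(ctx), draft_probs(ctx)
        if rng.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)
        else:
            residual = [max(0.0, pi - qi) for pi, qi in zip(p, q)]
            accepted.append(rng.choices(range(VOCAB), weights=residual)[0])
            break
    return accepted
```

<p>The accept/resample rule is what makes the output an exact sample from the target distribution rather than an approximation - which is the non-obvious part, and the reason implementing it yourself is so instructive.</p>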
]]></description><pubDate>Wed, 04 Mar 2026 07:14:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47244209</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47244209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47244209</guid></item><item><title><![CDATA[New comment by libraryofbabel in "Dan Simmons, author of Hyperion, has died"]]></title><description><![CDATA[
<p>Oh, boy. The Shrike. That thing still haunts me in a way that no other monster or alien across all of Sci-fi or fantasy really does. It's something about the <i>inscrutability</i> of it, especially in the first novel (still my favorite) where its purpose and backstory haven't been revealed. Sure, it's scary, but I think the mystery of its motives - and its ability to unpredictably act apparently benevolently sometimes - is where the real terror lies.</p>
]]></description><pubDate>Fri, 27 Feb 2026 22:28:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47186647</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47186647</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47186647</guid></item><item><title><![CDATA[New comment by libraryofbabel in "This time is different"]]></title><description><![CDATA[
<p>I’d rather talk about the history of steam engines than AI today, so: let’s just say it sounds like at some time in the past you saw a clunky inefficient Newcomen steam engine pumping water out of a coal mine, and you hated it, and now you think that’s all steam engines are or can be or can do: they’re loud and annoying and they’re just for pumping coal mines. Then one day someone tells you they’re powering mechanized looms in cotton mills and you flat out deny it and you don’t even want to go into the mill to take a look, because you hated that first steam engine so much.<p>It’s right there. You can go and see it any time, doing the things you don’t think it’s capable of doing. Just a little curiosity is all you need.</p>
]]></description><pubDate>Fri, 27 Feb 2026 08:39:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47178136</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47178136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47178136</guid></item><item><title><![CDATA[New comment by libraryofbabel in "This time is different"]]></title><description><![CDATA[
<p>Ex historian here, now engineer. I would gently suggest you’re underestimating the magnitude of some of the transformations wrought by the technologies that OP mentioned for the people who lived through them. Particularly for the steam engine and the broader Industrial Revolution around 1800: not for nothing have historians called that the greatest transformation in human life recorded in written documents.<p>If you think, hey, but people had a “job” in 1700, and they had a “job” in 1900, think again. Being a peasant (the majority of people in Europe in 1700) and being an urban factory worker in 1900 were fundamentally different ways of life. They only look superficially similar because we did not live the changes ourselves. But read the historical sources enough and you will see.<p>I would go as far as to say that the peasant in 1700 did not have a “job” at all in the sense that we now understand it; they did not work for wages and their relationship to the wider economy was fundamentally different. In some sense industrialization <i>created</i> the era of the “job” as the way for most working-age people to participate in economic life. It’s not an eternal and unchanging condition of things, and it could one day come to an end.<p>It’s too early to say if AI will be a technology like this, I think. But it may be. Sometimes technologies <i>do</i> transform the texture of human life. And it is not possible to be sure which ones those will be in the early stages: the first steam engines were extremely inefficient and had very few uses. It took decades for it to become clear that they had, in fact, changed everything. That may be true of AI, or it may not. It is best to be open-minded about this.</p>
]]></description><pubDate>Fri, 27 Feb 2026 04:17:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47176432</link><dc:creator>libraryofbabel</dc:creator><comments>https://news.ycombinator.com/item?id=47176432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47176432</guid></item></channel></rss>