<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: lsy</title><link>https://news.ycombinator.com/user?id=lsy</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 10:33:29 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=lsy" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by lsy in "AI made coding more enjoyable"]]></title><description><![CDATA[
<p>At what point do LLMs enable bad engineering practices, if instead of working to abstract or encapsulate toilsome programming tasks we point an expensive slot machine at them and generate a bunch of verbose code and carry on? I'm not sure where the tradeoff leads if there's no longer a pain signal for things that need to be re-thought or re-architected. And when anyone does create a new framework or abstraction, it doesn't have enough prior art for an LLM to adeptly generate, and fails to gain traction.</p>
]]></description><pubDate>Thu, 19 Feb 2026 16:56:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47075930</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=47075930</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47075930</guid></item><item><title><![CDATA[New comment by lsy in "Ask HN: AI Depression"]]></title><description><![CDATA[
<p>It's easy to get this way with enough scrolling; try to focus on the things around you in real life. If you aren't reading LinkedIn or HN, how much do you actually hear about AI in day-to-day life? If someone at work directly asks you to do something using AI, you might make some effort to do it. But otherwise let the news and hype cycle play out. You don't need to anticipate or keep abreast of where people think things will be in ten years... they are almost certainly wrong. Think of LinkedIn and HN as entertainment at best. Work on personal coding projects without AI, build relationships with non-tech people, go outside.</p>
]]></description><pubDate>Fri, 13 Feb 2026 16:44:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47004722</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=47004722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47004722</guid></item><item><title><![CDATA[New comment by lsy in "Humanity's Last Programming Language"]]></title><description><![CDATA[
<p>It’s notable that <i>just</i> the English “implementation” of FizzBuzz here is longer and more ambiguous than the naive Python implementation, never mind the boilerplate (which itself is <i>also</i> longer than the Python).<p>The explosion of frameworks and YAML tools the author describes can be attributed to the fact that English is an extremely poor language for program specification, and requires all kinds of guardrails and annotation to accomplish the same specificity as a typical computer program.</p>
]]></description><pubDate>Wed, 11 Feb 2026 15:40:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46976309</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=46976309</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46976309</guid></item><item><title><![CDATA[New comment by lsy in "The Abstraction Rises"]]></title><description><![CDATA[
<p>LLM coding isn't a new level of abstraction. Abstractions are (semi-)reliable ways to manage complexity by creating building blocks that represent complex behavior and are useful for reasoning about outcomes.<p>Because model output can vary widely from invocation to invocation, let alone model to model, prompts aren't reliable abstractions. You can't send someone all of the prompts for a vibecoded program and know they will get a binary with generally the same behavior. An effective programmer in the LLM age won't be saving mental energy by reasoning about the prompts; they will be fiddling with the prompts, crossing their fingers that the model produces workable code, then going back to reasoning about the code to ensure it meets their specification.<p>What I think the discipline is going to find after the dust settles is that traditional computer code <i>is</i> the "easiest" way to reason about computer behavior. It requires some learning curve, yes, but it remains the highest level of real "abstraction", with LLMs being more of a slot machine for saving the typing of some boilerplate.</p>
]]></description><pubDate>Tue, 10 Feb 2026 16:27:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46962218</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=46962218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46962218</guid></item><item><title><![CDATA[New comment by lsy in "I miss thinking hard"]]></title><description><![CDATA[
<p>I think the analogy to high level programming languages misunderstands the value of abstraction and notation. You can’t reason about the behavior of an English prompt because English is underspecified. The value of code is that it has a fairly strong semantic correlation to machine operations, and reasoning about high level code is equivalent to reasoning about machine code. That’s why even with all this advancement we continue to check in code to our repositories and leave the sloppy English in our chat history.</p>
]]></description><pubDate>Wed, 04 Feb 2026 08:13:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46882969</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=46882969</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46882969</guid></item><item><title><![CDATA[New comment by lsy in "AGI fantasy is a blocker to actual engineering"]]></title><description><![CDATA[
<p>It's disheartening that a potentially worthwhile discussion — should we invest engineering resources in LLMs as a normal technology rather than as a millenarian fantasy? — has been hijacked by a (at this writing) 177-comment discussion on a small component of the author's argument. The author's argument is an important one that hardly hinges at all on water usage specifically, given the vast human and financial capital invested in LLM buildout so far.</p>
]]></description><pubDate>Fri, 14 Nov 2025 18:19:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45929899</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=45929899</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45929899</guid></item><item><title><![CDATA[New comment by lsy in "Delivery is killing restaurant culture"]]></title><description><![CDATA[
<p>Going to a popular restaurant that accepts app delivery orders (or a grocery store in a neighborhood where people prefer to pay for delivery) is an objectively bad experience. The kitchen or checkout line is backed up with delivery orders, there are a bunch of delivery drivers double-parked or loitering near the front, and due not to any moral failing but rather what must be a crushing grind, the drivers are for the most part rushed and inconsiderate of the staff or other customers.<p>The class of people who order delivery regularly are generally trading the short-term reward of convenient food for way more money than makes sense, too little of that money benefits the class of people who do the delivering, and as the article points out, it is essentially harming the business it's being ordered from.<p>I would love to see more restaurants and stores declining to support this kind of system. While there may be some marginal profit now, in the long run the race to the bottom is going to mean fewer sustainable businesses.</p>
]]></description><pubDate>Tue, 28 Oct 2025 18:30:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=45736836</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=45736836</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45736836</guid></item><item><title><![CDATA[New comment by lsy in "What is intelligence? (2024)"]]></title><description><![CDATA[
<p>I feel like this needs an editor to have a chance of reaching almost anyone… there are ~100 section/chapter headings that seem to have been generated through some kind of psychedelic free association, and each section itself feels like an artistic effort to mystify the reader with references, jargon, and complex diagrams that are only loosely related to the text. And all wrapped here in a scroll-hijack that makes it even harder to read.<p>The effect is that it's unclear at first glance what the argument even <i>might</i> be, or which sections might be interesting to a reader who is not planning to read it front-to-back. And since it's apparently six hundred pages in printed form, I don't know that many will read it front-to-back either.</p>
]]></description><pubDate>Sat, 25 Oct 2025 09:24:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45702471</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=45702471</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45702471</guid></item><item><title><![CDATA[New comment by lsy in "The AI-collapse pre-mortem"]]></title><description><![CDATA[
<p>It's interesting to call this a pre-mortem as it seems mainly organized around thinking positively past the imperfections of the technology. It's like a pre-mortem for the housing crisis that focuses on the benefits of subprime mortgage lending.<p>What I'd expect to see is an analysis of how to address or prevent the same situation as previous bubbles: that society has allocated resources to a specific investment that are <i>far</i> in excess of what that investment can fundamentally be expected to return. How can we avoid thinking sloppily about this technology, or getting taken in by hucksters' just-so stories of its future impact? How can we successfully identify use-cases where revenues exceed investment? When the next exciting tech comes around, how can we harness it well as a society without succumbing to irrational exuberance?</p>
]]></description><pubDate>Fri, 24 Oct 2025 20:26:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=45698721</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=45698721</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45698721</guid></item><item><title><![CDATA[New comment by lsy in "Cyborgs vs. rooms, two visions for the future of computing"]]></title><description><![CDATA[
<p>I think this leaves out what is probably the most likely future for this technology, having a similar destiny to most technologies as a <i>tool</i>. Both of these visions assume (I think incorrectly) a trend towards ubiquity, where either every interaction <i>you</i> as a person have is mediated by computers, or where within a certain "room" every interaction <i>anyone</i> has is mediated by computers.<p>But it seems more likely that like other technologies developed by humanity, we will see that computers are not efficient for, or extensible to, every task, and people will naturally tend to reach for computers where they are helpful and be disinclined to do so when they aren't helpful. Some computers will be in rooms, some will get carried around or worn, some will be integrated into infrastructure.<p>Similar to the automobile, steam powered motors, and electricity, we may predict a future where the technology totally pervades our lives, but in reality we eventually develop a sort of infrastructure that delimits the tool's use to a certain extent, whether it is narrow or wide. If that's the case then the work for the field is less about shoving the tech into every interaction, and more about developing better abstractions to allow people to use compute in an empowering rather than a disempowering way.</p>
]]></description><pubDate>Wed, 22 Oct 2025 20:33:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45674792</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=45674792</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45674792</guid></item><item><title><![CDATA[New comment by lsy in "Palma 2 Pro"]]></title><description><![CDATA[
<p>No doubt it's a profit margin game, but I wish the big e-reader companies (Kindle, Kobo) would take a foray into this form factor. The friction of navigating through an Android interface into an app is just enough to negate the convenience benefit of a pocketable device. But the mainstream e-readers are unfortunately just big enough to require a jacket or a bag to carry them in.</p>
]]></description><pubDate>Tue, 21 Oct 2025 19:41:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45660668</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=45660668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45660668</guid></item><item><title><![CDATA[New comment by lsy in "Packing the world for longest lines of sight"]]></title><description><![CDATA[
<p>I'm sure it's nearly an academic distinction, but:<p>> Basically, for any given region, we find its highest point and assume that there is a perfectly placed sibling peak of the same height that is mutually visible.<p>Shouldn't you always add 335km to the horizon distance to account for the possibility of Everest (i.e. a <i>taller</i> sibling peak) being on the other side of the horizon?</p>
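For reference, the 335 km margin can be sketched with the standard horizon approximation d ≈ √(2Rh), ignoring atmospheric refraction and assuming a mean Earth radius of 6371 km (both assumptions mine, not the post's):

```python
import math

EARTH_RADIUS_KM = 6371.0  # assumed mean Earth radius

def horizon_distance_km(peak_height_m: float) -> float:
    """Distance to the horizon from a peak of the given height,
    ignoring atmospheric refraction: d = sqrt(2 * R * h)."""
    return math.sqrt(2 * EARTH_RADIUS_KM * (peak_height_m / 1000.0))

# Everest at ~8848 m pushes the horizon roughly 335-336 km out,
# which is the extra margin suggested above.
print(f"{horizon_distance_km(8848):.0f} km")
```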
]]></description><pubDate>Wed, 08 Oct 2025 04:46:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=45512179</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=45512179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45512179</guid></item><item><title><![CDATA[New comment by lsy in "Macintosh System 7 Ported To x86 With LLM Help in 3 days"]]></title><description><![CDATA[
<p>Impressive that this was done in 3 days at all, but to anyone familiar with System 7's appearance, the screenshot is almost comically "off" and gives away that this is not a straight port so much as some kind of clean-room reimplementation. The attached paper is more reserved, calling this a "bootable prototype".</p>
]]></description><pubDate>Tue, 30 Sep 2025 03:56:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45421750</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=45421750</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45421750</guid></item><item><title><![CDATA[New comment by lsy in "Defeating Nondeterminism in LLM Inference"]]></title><description><![CDATA[
<p>Fixing "theoretical" nondeterminism for a totally closed individual input-output pair doesn't solve the two "practical" nondeterminism problems, where the exact same input gives different results given different preceding context, and where a slightly transformed input doesn't give a correctly transformed result.<p>Until those are addressed, closed-system nondeterminism doesn't really help except in cases where a lookup table would do just as well. You can't use "correct" unit tests or evaluation sets to prove anything about inputs you haven't tested.</p>
]]></description><pubDate>Wed, 10 Sep 2025 18:34:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=45201848</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=45201848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45201848</guid></item><item><title><![CDATA[New comment by lsy in "'World Models,' an old idea in AI, mount a comeback"]]></title><description><![CDATA[
<p>A world model itself, in its particulars, isn't as important as the tacit understanding that the "world model" is necessarily incomplete and subordinate to the world itself, that there are sensory inputs from the world that would indicate you should adjust your world model, and the capacity and commitment to adjust that model in a way that maintains a level of coherence. With those things you don't need a complex model, you could start with a very simple but flexible model that would be adjusted over time by the system.<p>But I don't think we have a hint of a proposal for how to incorporate even the first part of that into our current systems.</p>
]]></description><pubDate>Tue, 02 Sep 2025 22:53:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45110163</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=45110163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45110163</guid></item><item><title><![CDATA[New comment by lsy in "Why deterministic output from LLMs is nearly impossible"]]></title><description><![CDATA[
<p>There are two additional aspects that are even more critical than the implementation details here:<p>- Typical LLM usage involves the accretion of context tokens from previous conversation turns. The likelihood that you will type prompt A twice but all of your previous context will be the same is low. You could reset the context, but accretion of context is often considered a <i>feature</i> of LLM interaction.<p>- Maybe more importantly, because the LLM abstraction is statistical, getting the correct output for e.g. "3 + 5 = ?" does not guarantee you will get the correct output for <i>any other pair of numbers</i>, even if all of the outputs are invariant and deterministic. So even if the individual prompt + output relationship is deterministic, the usefulness of the model output may "feel" nondeterministic between inputs, or have many of the same bad effects as nondeterminism. For the article's list of characteristics of deterministic systems, per-input determinism only solves "caching", and leaves "testing", "compliance", and "debuggability" largely unsolved.</p>
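The second point can be illustrated with a toy stand-in for a model (a hypothetical lookup table, not any real inference API): the function below is perfectly deterministic per input, yet passing a unit test on one prompt proves nothing about neighboring prompts.

```python
# Deterministic per-input behavior: the same prompt always yields
# the same answer, so caching works fine.
answers = {"3 + 5 = ?": "8"}  # behavior on the inputs we happened to test

def model(prompt: str) -> str:
    # Fully deterministic, but unconstrained on untested inputs.
    return answers.get(prompt, "42")

assert model("3 + 5 = ?") == "8"   # the unit test passes, repeatably
assert model("4 + 5 = ?") != "9"   # yet a nearby input is still wrong
```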
]]></description><pubDate>Mon, 11 Aug 2025 19:42:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44868578</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=44868578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44868578</guid></item><item><title><![CDATA[New comment by lsy in "GPTs and Feeling Left Behind"]]></title><description><![CDATA[
<p>I think the wide variance in responses here is explainable by tool preference and the circumstance of what you want to work on. You might also have felt "behind" not knowing or wanting to use Dreamweaver, or React, or Ruby on Rails, or Visual Studio + .NET, all tools that allowed developers at the time to accelerate their tasks greatly. But you'll note that most successful programmers today never learned those tools, so the fact that they accelerated certain tasks didn't result in a massive gap between users and non-users.<p>People shouldn't worry about getting "left behind" because influencers and bloggers are overindexing on specific tech rather than more generalist skills. At the end of the day the learning curve on these things is not that steep - that's why so many people online can post about it. When the need arises and it makes sense, the IDE/framework/tooling du jour will be there and you can learn it then in a few weeks. And if past is prologue in this industry, the people who have spent all their time fiddling with version N will need to reskill for version N+1 anyway.</p>
]]></description><pubDate>Sun, 10 Aug 2025 04:35:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44852784</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=44852784</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44852784</guid></item><item><title><![CDATA[New comment by lsy in "How I keep up with AI progress"]]></title><description><![CDATA[
<p>If you have a decent understanding of how LLMs work (you put in basically every piece of text you can find, get a statistical machine that models text really well, then use contractors to train it to model text in conversational form), then you probably don't need to consume a big diet of ongoing output from PR people, bloggers, thought leaders, and internet rationalists. That seems likely to get you going down some millenarian path that's not helpful.<p>Despite the feeling that it's a fast-moving field, most of the differences in actual models over the last few years are in degree and not kind, and the majority of ongoing work is in tooling and integrations, which you can probably keep up with as it seems useful for your work. Remembering that it's a model of text and is ungrounded goes a long way to discerning what kinds of work it's useful for (where verification of output is either straightforward or unnecessary), and what kinds of work it's not useful for.</p>
]]></description><pubDate>Fri, 18 Jul 2025 19:42:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44608975</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=44608975</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44608975</guid></item><item><title><![CDATA[New comment by lsy in "All AI models might be the same"]]></title><description><![CDATA[
<p>The example given for inverting an embedding back to text doesn't help the idea that this effect is reflecting some "shared statistical model of reality": What would be the plausible whalesong mapping of "Mage (foaled April 18, 2020) is an American Thoroughbred racehorse who won the 2023 Kentucky Derby"?<p>There isn't anything core to reality about Kentucky, its Derby, the Gregorian calendar, America, horse breeds, etc. These are all cultural inventions that happen to have particular importance in global human culture because of accidents of history, and are well-attested in training sets. At best we are seeing some statistical convergence on training sets because everyone is training on the same pile and scraping the barrel for any differences.</p>
]]></description><pubDate>Thu, 17 Jul 2025 23:10:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44599338</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=44599338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44599338</guid></item><item><title><![CDATA[New comment by lsy in "LLM Inevitabilism"]]></title><description><![CDATA[
<p>I think two things can be true simultaneously:<p>1. LLMs are a new technology and it's hard to put the genie back in the bottle with that. It's difficult to imagine a future where they don't continue to exist in some form, with all the timesaving benefits and social issues that come with them.<p>2. Almost three years in, companies investing in LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them, the majority of consumer usage is at the free tier, the industry is seeing the first signs of pulling back investments, and model capabilities are plateauing at a level where most people agree that the output is trite and unpleasant to consume.<p>There are many technologies that have seemed inevitable and seen retreats under the lack of commensurate business return (the supersonic jetliner), and several that seemed poised to displace both old tech and labor but have settled into specific use cases (the microwave oven). Given the lack of a sufficiently profitable business model, it feels as likely as not that LLMs settle somewhere a little less remarkable, and hopefully less annoying, than today's almost universally disliked attempts to cram them everywhere.</p>
]]></description><pubDate>Tue, 15 Jul 2025 05:33:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44568114</link><dc:creator>lsy</dc:creator><comments>https://news.ycombinator.com/item?id=44568114</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44568114</guid></item></channel></rss>