<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: adamzwasserman</title><link>https://news.ycombinator.com/user?id=adamzwasserman</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 21:44:58 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=adamzwasserman" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by adamzwasserman in "Show HN: HNswered – watches for replies to your Hacker News posts and comments"]]></title><description><![CDATA[
<p>I know I did:<p><a href="https://news.ycombinator.com/item?id=46240221">https://news.ycombinator.com/item?id=46240221</a><p>I built a Chrome extension to solve a problem I kept having: losing track of conversations on HN.
The threads page is a firehose. Someone replies to your comment, you miss it, the conversation dies. Or you revisit a thread and can't remember which comments you've already read.<p>HN Reader does three things:<p>1. Hides stories you've seen – Checkbox next to each story. Check it to dim. Helps filter the front page to stuff you haven't looked at yet.<p>2. Collapsible comments that remember – Click the arrow to collapse a comment thread. Come back later and it stays collapsed, unless someone added a new reply, then it auto-expands with a "NEW" badge (sketch below).<p>3. Highlights your conversations – On your threads page, badges show "you", "replied to you", and "you replied" so you can instantly spot active conversations.<p>That last one is what made me build this. I was missing replies buried in long threads. Now I just glance at my threads page and the blue "replied to you" badges jump out.<p>Everything stays in local storage. No server, no account, no tracking. Auto-cleans old data when storage gets full.<p>GitHub: <a href="https://github.com/adamzwasserman/hnreader" rel="nofollow">https://github.com/adamzwasserman/hnreader</a><p>Works in Chrome, Arc, and any Chromium browser. Load it unpacked from the repo.<p>Feedback welcome – especially on what other HN reading problems you'd want solved.</p>
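<p>A minimal sketch of how the collapse persistence in point 2 can work. This is illustrative TypeScript assuming chrome.storage.local, and the key shapes are made up; the repo is the source of truth:<p><pre><code>// Persist per-comment collapse state; auto-expand when the reply count grows.
type CollapsedEntry = { collapsed: boolean; replyCount: number };

async function toggleCollapse(commentId: string, replyCount: number) {
  const key = "collapsed:" + commentId;
  const stored = await chrome.storage.local.get(key);
  const entry = stored[key] as CollapsedEntry | undefined;
  const collapsed = !(entry?.collapsed ?? false);
  await chrome.storage.local.set({ [key]: { collapsed, replyCount } });
}

// On render: stay collapsed unless a reply arrived after we collapsed.
async function shouldAutoExpand(commentId: string, currentReplies: number) {
  const key = "collapsed:" + commentId;
  const stored = await chrome.storage.local.get(key);
  const entry = stored[key] as CollapsedEntry | undefined;
  if (!entry || !entry.collapsed) return false; // not collapsed; nothing to expand
  return currentReplies > entry.replyCount;     // new reply: expand and show NEW badge
}</code></pre><p>Recording the reply count at collapse time is what lets the NEW badge work with no server at all.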
]]></description><pubDate>Fri, 24 Apr 2026 22:21:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47896533</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47896533</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47896533</guid></item><item><title><![CDATA[New comment by adamzwasserman in "Douglas Lenat's Automated Mathematician Source Code"]]></title><description><![CDATA[
<p>I for one think it is high time that we restore the study of quantum bogodynamics to its rightful place in the pantheon of human achievements.<p>The bogon flux must NOT be interrupted.</p>
]]></description><pubDate>Mon, 30 Mar 2026 14:50:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47575096</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47575096</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575096</guid></item><item><title><![CDATA[New comment by adamzwasserman in "George R. R. Martin Is "Not in the Mood" to Finish the Winds of Winter"]]></title><description><![CDATA[
<p>I love it!<p>You don't own him.</p>
]]></description><pubDate>Sat, 21 Mar 2026 23:05:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47472464</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47472464</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47472464</guid></item><item><title><![CDATA[New comment by adamzwasserman in "Python: The Optimization Ladder"]]></title><description><![CDATA[
<p>Correction. I copied some incorrect values from my test harness. So Honest Python does NOT beat Dishonest Swift.<p>But it does beat the pants off of JS/TS on V8, which is quite the surprise.<p>Also in the surprise category is that Honest Java is more than 2x faster than Dishonest C++.</p>
]]></description><pubDate>Sun, 15 Mar 2026 15:12:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47388186</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47388186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47388186</guid></item><item><title><![CDATA[New comment by adamzwasserman in "Python: The Optimization Ladder"]]></title><description><![CDATA[
<p>The dynamism exists to support the object model. That's the actual dependency. Monkey-patching, runtime class mutation, vtable dispatch. These aren't language features people asked for. They're consequences of building everything on mutable objects with identity.<p>Strip the object model. Keep Python.<p>You get most of the speed back without touching a compiler, and your code gets easier to read as a side effect.<p>I built a demo: Dishonest code mutates state behind your back; Honest code takes data in and returns data out. Classes vs pure functions in 11 languages, same calculation. Honest Python beats compiled C++ and Swift on the same problem. Not because Python is fast, but because the object model's pointer-chasing costs more than the Python VM overhead.<p>Don't take my word for it. It's dockerized and on GitHub. Run it yourself: honestcode.software, hit the Surprise! button.</p>
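<p>For anyone who wants the flavor without running it, here is the distinction in miniature. Illustrative TypeScript (the demo covers JS/TS among its 11 languages), not the demo's actual code:<p><pre><code>// Dishonest: an object with identity that mutates state behind the caller's back.
class Account {
  private balance = 0;
  deposit(amount: number) { this.balance += amount; } // hidden mutation
  getBalance() { return this.balance; }
}

// Honest: data in, data out. No identity, no hidden state to chase through pointers.
interface AccountState { readonly balance: number; }

function deposit(state: AccountState, amount: number): AccountState {
  return { balance: state.balance + amount };
}</code></pre><p>Same calculation either way; the honest version just hands the runtime flat data instead of an object graph.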
]]></description><pubDate>Sat, 14 Mar 2026 21:01:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47381127</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47381127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47381127</guid></item><item><title><![CDATA[New comment by adamzwasserman in "Lego's 0.002mm specification and its implications for manufacturing (2025)"]]></title><description><![CDATA[
<p>The reason this is impressive has less to do with the tolerances themselves and more to do with backward compatibility across decades at scale. That's the genuinely hard part.<p>The history here is deeper than most people realize. The United States spent fifty years (roughly 1800 to 1853) at the Springfield and Harper's Ferry armories trying to achieve what LEGO now does routinely: parts manufactured to tight enough tolerances that they are truly interchangeable without fitting. In 1853, a visiting British inspector randomly selected ten muskets made in ten different years, disassembled them, mixed the parts, and reassembled ten functional muskets using only a screwdriver. Tolerances of a thousandth of an inch. It was considered impossible by most of the engineering establishment of the time.<p>The way they got there was by building machines, then using the parts those machines made to build better machines, then using those improved parts to build even better machines. A virtuous circle of transferring skill from human hands to tooling. This is the actual origin story of what historians call the American System of Manufacture, and it's the foundation the entire modern automotive supply chain sits on.<p>So yes, any competent injection molder holds tight tolerances today. But that's precisely the point: the reason it seems unremarkable now is that two centuries of compounding precision made it so.</p>
]]></description><pubDate>Wed, 11 Mar 2026 16:27:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47337700</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47337700</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47337700</guid></item><item><title><![CDATA[New comment by adamzwasserman in "DOGE employee stole Social Security data and put it on a thumb drive"]]></title><description><![CDATA[
<p>According to the WaPo, this guy was such a L33T hacker that 'he needed help transferring data from a thumb drive "to his personal computer"'.<p>Ooookay</p>
]]></description><pubDate>Wed, 11 Mar 2026 01:01:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47330699</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47330699</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47330699</guid></item><item><title><![CDATA[New comment by adamzwasserman in "After outages, Amazon to make senior engineers sign off on AI-assisted changes"]]></title><description><![CDATA[
<p>I'm sure they are going to have a ball reading through thousands of lines of AI slop.</p>
]]></description><pubDate>Wed, 11 Mar 2026 00:33:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47330530</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47330530</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47330530</guid></item><item><title><![CDATA[New comment by adamzwasserman in "Billion-Parameter Theories"]]></title><description><![CDATA[
<p>This is the strongest point in the thread. The article treats poverty, climate, and markets as though the obstacle is insufficient model capacity. But these systems contain agents with values and motivations who actively resist interventions. A billion-parameter model of a system whose components are trying to game the model will never be a theory of that system. The agents will simply route around it.<p>More broadly, the article assumes that scaling model capacity will eventually bridge the gap between prediction and understanding. I have pre-registered experiments on OSF.io that falsify the strong scaling hypothesis for LLMs: past a certain point, additional parameters buy you better interpolation within the training distribution without improving generalization to novel structure. This shouldn't surprise anyone. If the entire body of science has taught us anything at all, it is that regularity is only ever achieved at the price of generality. A model that fits everything predicts nothing.<p>The author gestures at mechanistic interpretability as the path from oracle to science. But interpretability research keeps finding that what these models learn are statistical regularities in training data, not causal structure. Exactly what you'd expect from a compression algorithm. The conflation of compression with explanation is doing a lot of quiet work in this essay.</p>
]]></description><pubDate>Wed, 11 Mar 2026 00:26:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47330490</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47330490</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47330490</guid></item><item><title><![CDATA[New comment by adamzwasserman in "AI Will Never Be Conscious"]]></title><description><![CDATA[
<p>That's true in the same sense that agriculture and nuclear weapons are extensions of evolutionary pressure. Everything humans produce is, by definition. But it empties the term of any useful meaning.<p>The distinction that matters: evolutionary pressure operates through differential survival across generations, where the organism has skin in the game. AI models are optimized via gradient descent on loss functions that humans define. That's artificial selection toward human objectives, not evolutionary pressure in any meaningful sense. The model has no stake in the outcome. Nothing is at risk for it.<p>You actually make this point yourself in your second sentence: they "lack both agency and consciousness, and do not experience this pressure." I agree completely. But that's precisely why the first sentence doesn't do any work. If they don't experience it, then calling it evolutionary pressure is metaphorical at best. And the metaphor obscures the exact gap we should be paying attention to: the absence of anything at stake.</p>
]]></description><pubDate>Sun, 01 Mar 2026 18:02:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47209049</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47209049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47209049</guid></item><item><title><![CDATA[New comment by adamzwasserman in "AI Will Never Be Conscious"]]></title><description><![CDATA[
<p>To clarify: I'm not talking about morals specifically. I mean value in the broader sense of spontaneously assigning relative importance to things, producing a hierarchy that drives action.<p>You're thirsty. There's pond water in the forest and clean well water in the town square, but you're an escaped prisoner. Suddenly the value hierarchy flips: safety trumps water quality. You do this instantly, with incomplete information, integrating survival, context, and preference in a way that no one programmed into you.<p>Morality is one expression of this capacity, but so is aesthetic judgment, risk assessment, curiosity, and the decision to walk down a dark alley or not. The trolley problem is just a dramatic example. The mundane examples are actually more telling, because we do them thousands of times a day without noticing.<p>No current AI has any form of this. It has no mechanism for deciding that anything matters more than anything else except through weightings that were derived from human-generated training data. It borrows our value hierarchies statistically. It doesn't have its own.</p>
]]></description><pubDate>Sun, 01 Mar 2026 17:58:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47209022</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47209022</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47209022</guid></item><item><title><![CDATA[New comment by adamzwasserman in "AI Will Never Be Conscious"]]></title><description><![CDATA[
<p>The interesting version of the argument isn't about substrate: it's about motivation.<p>Present the trolley problem to GPT-4 and it gives you a philosophy survey answer.<p>Present it to a human and their palms sweat. The gap isn't computation, it's that humans are value-making machines shaped by millions of years of selection pressure.<p>Pollan lands on the wrong argument (biology vs. silicon) when the real one is: where do the values come from, and can they emerge without a reproductive lineage that stakes survival on getting them right?</p>
]]></description><pubDate>Tue, 24 Feb 2026 17:23:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47139803</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47139803</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47139803</guid></item><item><title><![CDATA[New comment by adamzwasserman in "AI Will Never Be Conscious"]]></title><description><![CDATA[
<p>The substrate argument is the wrong hill for Pollan to die on. The stronger version isn't "meat vs. silicon" — it's that brains are value-making machines operating under evolutionary pressure, and no current AI architecture has anything analogous to that. You can simulate the outputs of valuation without having the mechanism. The question isn't whether consciousness can exist in another substrate, it's whether you can get there without the thing that actually drives human cognition: spontaneous assignment of moral and survival value with no prior programming.</p>
]]></description><pubDate>Tue, 24 Feb 2026 17:22:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47139782</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47139782</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47139782</guid></item><item><title><![CDATA[New comment by adamzwasserman in "AI Will Never Be Conscious"]]></title><description><![CDATA[
<p>The commonality breaks down at value assignment. You hear an unexpected sound and have a threat/delight assessment in 170ms. Faster than Google serves a first byte. You do this with virtually no data.<p>An LLM doesn't assign value to anything; it predicts tokens. The interesting question isn't whether we share a process with LLMs, it's whether the thing that makes your decisions matter to you, moral weight, spontaneous motivation, can emerge from a system that has no survival stake in its own outputs. I wrote about this a few years ago as "the consciousness gap": <a href="https://hackernoon.com/ai-and-the-consciousness-gap-lr4k3yg8" rel="nofollow">https://hackernoon.com/ai-and-the-consciousness-gap-lr4k3yg8</a></p>
]]></description><pubDate>Tue, 24 Feb 2026 17:21:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47139766</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=47139766</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47139766</guid></item><item><title><![CDATA[New comment by adamzwasserman in "Show HN: Envware – E2EE CLI to manage environment variables across devices"]]></title><description><![CDATA[
<p>Thanks, I will check it out.</p>
]]></description><pubDate>Sat, 31 Jan 2026 23:32:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46842022</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=46842022</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46842022</guid></item><item><title><![CDATA[New comment by adamzwasserman in "Ask HN: Is understanding code becoming "optional"?"]]></title><description><![CDATA[
<p>I have written a research paper on another interesting prompting technique that I call axiomatic prompting. On objectively measurable tasks, when an AI scores below 70%, including clear axioms in the prompt systematically increases success.<p>In coding this translates to: when trying to impose a pattern or architecture different enough from the "mid" approach the AI defaults to, including axioms about the approach (in an IF-this-THEN-that style, as opposed to few-shot examples) will improve success.<p>The key is the 70% threshold: if the model already has enough training data, axioms hurt. If the model is underperforming because the training set did *not* have enough examples (for example, hyperscript), axioms help.</p>
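<p>To make the contrast concrete, an axiom block might look like this (illustrative, not taken from the paper):<p><pre><code>AXIOM 1: IF an element needs behavior THEN attach it with a _="..." attribute on that element.
AXIOM 2: IF state must survive between events THEN keep it in element attributes, never in JS variables.
AXIOM 3: IF an axiom conflicts with your usual approach THEN the axiom wins.</code></pre><p>Contrast with few-shot, where you paste working examples and hope the model abstracts the rule; the axioms state the rule directly.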
]]></description><pubDate>Sat, 31 Jan 2026 19:45:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46840061</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=46840061</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46840061</guid></item><item><title><![CDATA[New comment by adamzwasserman in "Ask HN: Exhaused by AI looking for some advice on how to keep moving forward"]]></title><description><![CDATA[
<p>I admire your honesty. I suspect that attitude will take you far.<p>Exercism.io is excellent for exactly the experience you're seeking. The mentored tracks force you to see multiple solutions to the same problem, which builds pattern recognition faster than production work alone. You start noticing when code "feels" fragile before you can articulate why.</p>
]]></description><pubDate>Sat, 31 Jan 2026 19:28:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46839916</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=46839916</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46839916</guid></item><item><title><![CDATA[New comment by adamzwasserman in "Ask HN: Is understanding code becoming "optional"?"]]></title><description><![CDATA[
<p>Job security for those of us who think like this.<p>Two layers vibe coding can't touch: architecture decisions (where the constraints live) and cleanup when the junior-dev-quality code accumulates enough debt. Someone has to hold the mental model.</p>
]]></description><pubDate>Sat, 31 Jan 2026 19:24:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46839864</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=46839864</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46839864</guid></item><item><title><![CDATA[New comment by adamzwasserman in "Deep dive into Turso, the “SQLite rewrite in Rust”"]]></title><description><![CDATA[
<p>SQLite is battle-tested in production at massive scale. Discord handles millions of concurrent users with SQLite clusters. WhatsApp was serving 900 million users before Facebook acquired them, storing messages in SQLite on every device. The "lightweight" perception is outdated.<p>Who knows, maybe 5 years from now you will say to yourself: that crazy idea wasn't so crazy after all!</p>
]]></description><pubDate>Sat, 31 Jan 2026 19:19:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46839825</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=46839825</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46839825</guid></item><item><title><![CDATA[New comment by adamzwasserman in "Ask HN: Is understanding code becoming "optional"?"]]></title><description><![CDATA[
<p>Although I write very little code myself anymore, I don't trust AI code at all. My default assumption: every line is the most mid possible implementation, every important architectural constraint violated wantonly. Your typical junior programmer.<p>So I run specialized compliance agents regularly. I watch the AI code and interrupt frequently to put it back on track. I occasionally write snippets as few-shot examples. Verification without reading every line, but not "vibe checking" either.</p>
]]></description><pubDate>Sat, 31 Jan 2026 19:16:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46839791</link><dc:creator>adamzwasserman</dc:creator><comments>https://news.ycombinator.com/item?id=46839791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46839791</guid></item></channel></rss>