<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: adamddev1</title><link>https://news.ycombinator.com/user?id=adamddev1</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 11:25:22 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=adamddev1" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by adamddev1 in "Stanford report highlights growing disconnect between AI insiders and everyone"]]></title><description><![CDATA[
<p>I don't know how many times I've seen a Google AI summary or a ChatGPT answer with references that, when I checked, did not say what the AI summary said. If a high school student falsified references in a paper like this, they would get a bad or failing grade. This is bad, not acceptable, the teacher would say.<p>But we have been sold these constantly falsified AI summaries as the go-to source of "truth" at all levels of society. We're trading truth for an illusion of short-term gains. This will not have good consequences.</p>
]]></description><pubDate>Mon, 13 Apr 2026 22:39:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758856</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47758856</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758856</guid></item><item><title><![CDATA[New comment by adamddev1 in "Apple's accidental moat: How the "AI Loser" may end up winning"]]></title><description><![CDATA[
<p>It's almost like people don't actually want LLMs all over their core tools...</p>
]]></description><pubDate>Mon, 13 Apr 2026 07:34:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47748909</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47748909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47748909</guid></item><item><title><![CDATA[New comment by adamddev1 in "The Joy of Numbered Streets"]]></title><description><![CDATA[
<p>I lived in Calgary for four years, before we had smartphones with maps. The grid system was amazing; it was like being able to give easily processed human GPS coordinates. "Let's meet at 7th Ave and 9th Street." Done!</p>
]]></description><pubDate>Fri, 03 Apr 2026 07:00:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47623972</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47623972</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47623972</guid></item><item><title><![CDATA[New comment by adamddev1 in "Our commitment to Windows quality"]]></title><description><![CDATA[
<p>Drivers for laptops. Do all the sound cards work flawlessly? Is the power usage and battery life comparable? Sadly, this is a big part of what holds it back.</p>
]]></description><pubDate>Fri, 20 Mar 2026 22:35:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47461657</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47461657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47461657</guid></item><item><title><![CDATA[New comment by adamddev1 in "I don't use LLMs for programming"]]></title><description><![CDATA[
<p>> By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve certainly learned something about it yourself.<p>I love these quotes. I got a much deeper, more elegant understanding of the grammar of a human language as I wrote a phrase generator and parser for it. Writing and refactoring it gave me an understanding of how the grammar works. (And LLMs still confidently fail at really basic tasks I give them in this language.)</p>
]]></description><pubDate>Thu, 12 Mar 2026 10:31:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47348751</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47348751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47348751</guid></item><item><title><![CDATA[New comment by adamddev1 in "I don't use LLMs for programming"]]></title><description><![CDATA[
<p>Except your team is full of occasionally insane "people" who hallucinate, lie, and cover things up.<p>We are trading the long-term benefits of truth and correctness for the short-term benefits of immediate productivity and money. This is like how some cultures have valued cheating and quick fixes because it's "not worth it" to do things correctly. The damage of this will continue to compound and bubble up.</p>
]]></description><pubDate>Thu, 12 Mar 2026 10:23:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47348682</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47348682</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47348682</guid></item><item><title><![CDATA[New comment by adamddev1 in "Agents that run while I sleep"]]></title><description><![CDATA[
<p>Tests cannot show the absence of bugs.<p>These are fundamentals of CS that we are forgetting as we dismantle all truth and keep rocketing forward into LLM psychosis.<p>> I care about this. I don't want to push slop, and I had no real answer.<p>The answer is to write and understand code. You can't claim you don't want to push slop and also want to just use LLMs.</p>
]]></description><pubDate>Wed, 11 Mar 2026 09:47:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47333549</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47333549</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47333549</guid></item><item><title><![CDATA[New comment by adamddev1 in "[dead]"]]></title><description><![CDATA[
<p>I hate to pile on the criticism here, but this gives me uneasy futuristic vibes.<p>My dad recently passed away, and one of the sweetest things we remember from when we were kids is how he would always tell us "made-up stories." They were silly little stories that were probably lame, but we could feel his love for us as he took the time to spin up some silly little story. I would never trade that for the best LLM creativity.</p>
]]></description><pubDate>Mon, 23 Feb 2026 03:28:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47117741</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47117741</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47117741</guid></item><item><title><![CDATA[New comment by adamddev1 in "Learning Lean: Part 1"]]></title><description><![CDATA[
<p>> introducing an error or two in formal proof systems often means you’re getting exponentially further away from solving your problem<p>I wish people understood that this is pretty much true of software building as well.</p>
]]></description><pubDate>Thu, 19 Feb 2026 05:50:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47070345</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47070345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47070345</guid></item><item><title><![CDATA[New comment by adamddev1 in "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"]]></title><description><![CDATA[
<p>That is a great xkcd comic, but it doesn't show that the error rate "isn't much better." Are there sources that have actually measured this and demonstrated it? If it is a fact, I am genuinely interested in the evidence.</p>
]]></description><pubDate>Mon, 16 Feb 2026 21:19:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47040479</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47040479</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47040479</guid></item><item><title><![CDATA[New comment by adamddev1 in "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"]]></title><description><![CDATA[
<p>Now shudder at the thought that people are pushing towards building more and more of the world's infrastructure with this kind of thinking.</p>
]]></description><pubDate>Mon, 16 Feb 2026 16:43:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47037174</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47037174</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47037174</guid></item><item><title><![CDATA[New comment by adamddev1 in "An AI agent published a hit piece on me – more things have happened"]]></title><description><![CDATA[
<p>The problem is that the LLM's sources can themselves be LLM-generated. I was looking up some health question and tried clicking through to the source for one of the LLM's claims. The source was a blog post that contained an obvious hallucination or false elaboration.</p>
]]></description><pubDate>Sat, 14 Feb 2026 08:24:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47012726</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47012726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47012726</guid></item><item><title><![CDATA[New comment by adamddev1 in "An AI agent published a hit piece on me – more things have happened"]]></title><description><![CDATA[
<p>Excellent observation. I get so frustrated every time I hear the "we have test-suites and can test deterministically" argument. Have we learned absolutely nothing from the last 40 years of computer science? Testing does not prove the absence of bugs.</p>
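<p>A minimal illustration (a hypothetical example of mine, not from the article or thread): the tiny suite below passes even though the function is broken for inputs it never exercises.<p><pre><code>(* Testing shows the presence of bugs, never their absence: these
   assertions all pass, yet average overflows for large inputs. *)
let average a b = (a + b) / 2

let () =
  assert (average 2 4 = 3);          (* passes *)
  assert (average (-2) (-4) = -3);   (* passes *)
  (* average max_int max_int silently overflows to a negative number,
     but nothing here checks it, so the suite stays green *)
  print_endline "all tests passed"
</code></pre>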
]]></description><pubDate>Sat, 14 Feb 2026 08:21:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47012716</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=47012716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47012716</guid></item><item><title><![CDATA[New comment by adamddev1 in "AI fatigue is real and nobody talks about it"]]></title><description><![CDATA[
<p>It seems a better and fuller solution to a lot of these problems is to just stop using AI.<p>I may be an odd one, but I'm refusing to use agents and just happily coding almost everything myself. I only ask an LLM occasional questions about libraries, etc., or to write the occasional function. Are there others like me out there?</p>
]]></description><pubDate>Sun, 08 Feb 2026 16:13:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46935583</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=46935583</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46935583</guid></item><item><title><![CDATA[New comment by adamddev1 in "AI fatigue is real and nobody talks about it"]]></title><description><![CDATA[
<p>It seems a better and fuller solution to a lot of these problems is to just stop using AI.<p>I may be an odd one, but I'm refusing to use agents and just happily coding everything myself. I only ask an LLM occasional questions about libraries, etc. Are there others like me out there?</p>
]]></description><pubDate>Sun, 08 Feb 2026 16:13:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46935576</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=46935576</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46935576</guid></item><item><title><![CDATA[New comment by adamddev1 in "Coding agents have replaced every framework I used"]]></title><description><![CDATA[
<p>> That adapting layer of garbage we blindly accepted during these years.<p>Wouldn't everything that agents produce be better described as a "layer of garbage"?</p>
]]></description><pubDate>Sat, 07 Feb 2026 15:43:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46924733</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=46924733</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46924733</guid></item><item><title><![CDATA[New comment by adamddev1 in "A few random notes from Claude coding quite a bit last few weeks"]]></title><description><![CDATA[
<p>> but if it works so we care?<p>It often doesn't work. That's the point. A calculator works 100% of the time. An LLM might work 95% of the time, or 80%, or 40%, or 99%, depending on what you're doing. This is the difference, and it's a key one.</p>
]]></description><pubDate>Wed, 28 Jan 2026 05:46:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46791528</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=46791528</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46791528</guid></item><item><title><![CDATA[New comment by adamddev1 in "A few random notes from Claude coding quite a bit last few weeks"]]></title><description><![CDATA[
<p>People keep using these analogies, but I think these are fundamentally different things.<p>1. hand arithmetic -> using a calculator<p>2. assembly -> using a high-level language<p>3. writing code -> making an LLM write code<p>Number 3 does not belong. It is a fundamentally different leap because it's not based on deterministic logic. You can't depend on an LLM like you can depend on a calculator or a compiler. LLMs are totally different.</p>
]]></description><pubDate>Tue, 27 Jan 2026 22:41:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46788166</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=46788166</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46788166</guid></item><item><title><![CDATA[New comment by adamddev1 in "Formal methods only solve half my problems"]]></title><description><![CDATA[
<p>There could be more linear and "resource-aware" type systems coming down the pike from research. These would allow the type checker to report performance and resource information. Check out Resource Aware ML.<p><a href="https://www.raml.co/about/" rel="nofollow">https://www.raml.co/about/</a><p><a href="https://arxiv.org/abs/2205.15211" rel="nofollow">https://arxiv.org/abs/2205.15211</a></p>
]]></description><pubDate>Wed, 07 Jan 2026 15:24:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46527437</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=46527437</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46527437</guid></item><item><title><![CDATA[New comment by adamddev1 in "Formal methods only solve half my problems"]]></title><description><![CDATA[
<p>There is a bunch of research happening around "Resource-Aware" type theory. This kind of type theory checks performance, not just correctness. Just like the compiler can show correctness errors, the compiler could show performance stats/requirements.<p><a href="https://arxiv.org/abs/2205.15211" rel="nofollow">https://arxiv.org/abs/2205.15211</a><p>Already we have Resource Aware ML which<p>> automatically and statically computes resource-use bounds for OCaml programs<p><a href="https://www.raml.co/about/" rel="nofollow">https://www.raml.co/about/</a></p>
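<p>As a hypothetical sketch (my example, not from the RAML docs) of the kind of ordinary OCaml function such an analysis targets, the goal would be to derive statically that the cost is linear in the length of the first list:<p><pre><code>(* Plain OCaml; a resource-aware analysis would try to infer, without
   running the program, that the number of recursive calls (and hence
   the cost) grows linearly with the length of xs. *)
let rec append xs ys =
  match xs with
  | [] -> ys
  | x :: rest -> x :: append rest ys

let () =
  (* quick sanity check of the function itself *)
  assert (append [1; 2] [3] = [1; 2; 3])
</code></pre>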
]]></description><pubDate>Wed, 07 Jan 2026 15:16:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46527331</link><dc:creator>adamddev1</dc:creator><comments>https://news.ycombinator.com/item?id=46527331</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46527331</guid></item></channel></rss>