<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: netdevphoenix</title><link>https://news.ycombinator.com/user?id=netdevphoenix</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 07:42:50 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=netdevphoenix" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by netdevphoenix in "Why do we tell ourselves scary stories about AI?"]]></title><description><![CDATA[
<p>There is a very interesting book that explores why the West generally portrays artificial intelligence negatively in media (Skynet) while Japanese media tends to portray it positively (Astro Boy).</p>
]]></description><pubDate>Fri, 10 Apr 2026 15:14:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47719397</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47719397</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47719397</guid></item><item><title><![CDATA[New comment by netdevphoenix in "An AI robot in my home"]]></title><description><![CDATA[
<p>People really are pushing the word "AI" everywhere. "Robot" already implies AI, unless we are now assuming that AI = LLM.</p>
]]></description><pubDate>Fri, 10 Apr 2026 11:11:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47716319</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47716319</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47716319</guid></item><item><title><![CDATA[New comment by netdevphoenix in "How the AI Bubble Bursts"]]></title><description><![CDATA[
<p>Local LLMs don't sound profitable at all for those building them. If you really wanted a SOTA model, you would be paying eye-watering amounts to own it unless you got an open-sourced one.</p>
]]></description><pubDate>Mon, 30 Mar 2026 13:09:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47573832</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47573832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47573832</guid></item><item><title><![CDATA[New comment by netdevphoenix in "Ask HN: Where have you found the coding limits of current models?"]]></title><description><![CDATA[
<p>I think consistency when operating autonomously is still a challenge, which you could refer to as their limit. You need to do so much around them to keep the consistency AND a decent level of performance. It's like a savant with a short attention span.</p>
]]></description><pubDate>Mon, 30 Mar 2026 12:26:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47573398</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47573398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47573398</guid></item><item><title><![CDATA[New comment by netdevphoenix in "ChatGPT won't let you type until Cloudflare reads your React state"]]></title><description><![CDATA[
<p>Performing an automated action on a website that has not consented is the problem. OpenAI showing you how to opt out is backwards. Consent comes first.<p>It's a bit concerning that some professional engineers don't understand this, given the sensitive systems they interact with.</p>
]]></description><pubDate>Mon, 30 Mar 2026 09:56:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47572383</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47572383</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47572383</guid></item><item><title><![CDATA[New comment by netdevphoenix in "Building a Procedural Hex Map with Wave Function Collapse"]]></title><description><![CDATA[
<p>Is there anything special about using the wave function collapse algorithm in this particular context? This feels like when experimental musicians make music with plant electrical signals, wind flow, sea wave movements, etc. The idea sounds great, but the execution not so much.</p>
]]></description><pubDate>Tue, 10 Mar 2026 10:25:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47321349</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47321349</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47321349</guid></item><item><title><![CDATA[New comment by netdevphoenix in "Grammarly is offering ‘expert’ AI reviews from famous dead and living writers"]]></title><description><![CDATA[
<p>> while my calculator can perfectly add any numbers up to its memory limit, it has no understanding of addition.<p>"My calculator can perfectly add any numbers up to its memory limit": this kind of anthropomorphic language is misleading in these conversations. Your calculator isn't an agent, so it should not be expected to be capable of any cognition.</p>
]]></description><pubDate>Mon, 09 Mar 2026 17:00:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47311762</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47311762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47311762</guid></item><item><title><![CDATA[New comment by netdevphoenix in "Grammarly is offering ‘expert’ AI reviews from famous dead and living writers"]]></title><description><![CDATA[
<p>> If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.<p>Isn't this obvious? There is not enough latent knowledge of math there to enable current LLMs to approximate anything resembling an integral.</p>
]]></description><pubDate>Mon, 09 Mar 2026 16:56:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47311701</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47311701</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47311701</guid></item><item><title><![CDATA[New comment by netdevphoenix in "The Missing Semester of Your CS Education – Revised for 2026"]]></title><description><![CDATA[
<p>> If most people are not using a tool properly, it is not their fault; it is the tool's fault.<p>Replace "tool" with one of piano|guitar|etc. and see your logic fall apart. Software tools, like any others, have a manual and require effort and time to learn.</p>
]]></description><pubDate>Tue, 24 Feb 2026 15:35:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47138465</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47138465</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47138465</guid></item><item><title><![CDATA[New comment by netdevphoenix in "Magical Mushroom – Europe's first industrial-scale mycelium packaging producer"]]></title><description><![CDATA[
<p>EU <> Europe</p>
]]></description><pubDate>Mon, 23 Feb 2026 13:11:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47121878</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47121878</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47121878</guid></item><item><title><![CDATA[New comment by netdevphoenix in "Is Show HN dead? No, but it's drowning"]]></title><description><![CDATA[
<p>> For me, posting a Show HN was a huge deal - usually done after years of development<p>This is still possible. Vibe coders are just not interested in working on a piece of software for years until it's polished. It's a self-selection pattern, like the vast number of terrible VB6 apps when it came out, or the state of JS until very recently.</p>
]]></description><pubDate>Tue, 17 Feb 2026 17:33:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47050273</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47050273</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47050273</guid></item><item><title><![CDATA[New comment by netdevphoenix in "I asked Claude Code to remove jQuery. It failed miserably"]]></title><description><![CDATA[
<p>Why would they? GitHub has 28 million public repos; Codeberg only hit 300k last year. Anyway, Codeberg was just a placeholder for 'repo source _less_ likely to be in their training data'. Codeberg was a quick candidate for a place to find a big, old codebase with non-sensitive data.<p>It is indeed hard, but the guys at Codeberg are certainly an order of magnitude better than GitHub: they opted out of the main AI crawlers, regularly block IPs known to belong to AI startups, and allow you to make your repos accessible only to logged-in users.<p>You seem to be going on a tangent here. The main point was about performing a well-documented test anyway.</p>
]]></description><pubDate>Tue, 17 Feb 2026 10:19:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47045732</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47045732</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47045732</guid></item><item><title><![CDATA[New comment by netdevphoenix in "Modern CSS Code Snippets: Stop writing CSS like it's 2015"]]></title><description><![CDATA[
<p>The majority of its userbase is no longer made of humans though.</p>
]]></description><pubDate>Mon, 16 Feb 2026 14:45:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47035630</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47035630</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47035630</guid></item><item><title><![CDATA[New comment by netdevphoenix in "I asked Claude Code to remove jQuery. It failed miserably"]]></title><description><![CDATA[
<p>I thought it would be obvious: OpenAI has used repos on GitHub as training data. It would be like testing someone with a publicly available past paper.<p>Are you planning on carrying out the experiment? Regardless of the outcome, it would be of value to developers.</p>
]]></description><pubDate>Mon, 16 Feb 2026 13:21:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47034633</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47034633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47034633</guid></item><item><title><![CDATA[New comment by netdevphoenix in "JavaScript-heavy approaches are not compatible with long-term performance goals"]]></title><description><![CDATA[
<p>> Google would have to take the lead and implement this in Chrome, then enough developers would have to build sites using it and force Safari and Firefox to comply. It just isn't feasible.<p>This is not something you really want to happen for the health of the web tech ecosystem. I am surprised to see actual developers nonchalantly suggesting this. A type system for the web is not worth an IE 2.0.</p>
]]></description><pubDate>Mon, 16 Feb 2026 11:37:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47033837</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47033837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47033837</guid></item><item><title><![CDATA[New comment by netdevphoenix in "Expensively Quadratic: The LLM Agent Cost Curve"]]></title><description><![CDATA[
<p>> Now I first discuss with an AI Agent or ChatGPT to write a thorough spec before handing it off to an agent to code it. I don’t read every line. Instead, I thoroughly test the outcome.<p>This is likely the future.<p>That being said: "I used to spend most of my time writing code, fixing syntax, thinking through how to structure the code, looking up documentation on how to use a library."<p>If you are spending a lot of time fixing syntax, have you looked into linters? If you are spending too much time thinking about how to structure the code, how about spending a few days coming up with some general conventions, or simply using existing ones?<p>If you are getting so much productivity from LLMs, it is worth checking whether you were simply unproductive relative to the average dev in the first place. If that's the case, you might want to think about what is going to happen to your productivity gains when everyone else jumps on the LLM train. LLMs might be covering for your lack of productivity at the code level, but you might still be dropping the ball in non-code areas. That's the higher-level pattern I would be thinking about.</p>
]]></description><pubDate>Mon, 16 Feb 2026 11:29:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47033779</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47033779</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47033779</guid></item><item><title><![CDATA[New comment by netdevphoenix in "Expensively Quadratic: The LLM Agent Cost Curve"]]></title><description><![CDATA[
<p>> AI generates code fast but then you're stuck reading every line because it might've missed some edge case or broken something three layers deep<p>I imagine that in the future this will be tackled with a heavily test-driven approach and tight regulation of what the agent can and cannot touch. So frequent small PRs over big ones. Limit folder access to only those folders that need changing. Let it build the project. If it doesn't build, no PR submissions allowed. If a single test fails, no PR submissions allowed. And the tests will likely be the first, if not the main, focus in LLM PRs.<p>I use the term "LLM" and not "AI" because I notice that people have started attributing LLM-related issues (like ripping off copyrighted material, excessive usage of natural resources, etc.) to AI in general, which is damaging for the future of AI.</p>
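<p>The build-and-test gate described above could be sketched as a small shell function. This is a hypothetical illustration, not an existing tool: `gate` and its two arguments are placeholders for a project's real build and test commands.</p>

```shell
#!/bin/sh
# Sketch of a gate for LLM-authored PRs: submission is only allowed
# when the project builds AND every test passes. The two arguments
# stand in for the project's actual build and test commands.
gate() {
    if ! sh -c "$1" >/dev/null 2>&1; then
        echo "blocked: build failed"
        return 1
    fi
    if ! sh -c "$2" >/dev/null 2>&1; then
        echo "blocked: a test failed"
        return 1
    fi
    echo "allowed: build and tests passed"
}

gate "true" "true"           # both stages pass
gate "true" "false" || true  # a single failing test blocks the PR
```

<p>In a real setup, the same logic would live in CI as a required status check, so the agent's branch simply cannot be merged until both stages are green.</p>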
]]></description><pubDate>Mon, 16 Feb 2026 11:14:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47033670</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47033670</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47033670</guid></item><item><title><![CDATA[New comment by netdevphoenix in "I asked Claude Code to remove jQuery. It failed miserably"]]></title><description><![CDATA[
<p>Extraordinary claims require extraordinary evidence. "Works on my machine" ain't it.</p>
]]></description><pubDate>Fri, 13 Feb 2026 14:12:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47002932</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47002932</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47002932</guid></item><item><title><![CDATA[New comment by netdevphoenix in "I asked Claude Code to remove jQuery. It failed miserably"]]></title><description><![CDATA[
<p>> I often ask it "I have this bug. Why?" And it almost always figures it out and fixes it. Huge code base.<p>Is your AI PR publicly available on GitHub?</p>
]]></description><pubDate>Fri, 13 Feb 2026 14:09:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47002904</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47002904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47002904</guid></item><item><title><![CDATA[New comment by netdevphoenix in "I asked Claude Code to remove jQuery. It failed miserably"]]></title><description><![CDATA[
<p>> Not my experience. It excels in existing codebases too.<p>Why don't you prove it?<p>1. Find an old, large codebase on Codeberg (avoiding the octopus for obvious reasons)<p>2. Video-stream the session and make the LLM conversation public<p>3. Ask your LLM to remove jQuery from the codebase and submit regular commits to a public remote branch<p>Then we will be able to judge whether the evidence stands up.</p>
]]></description><pubDate>Fri, 13 Feb 2026 14:06:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47002861</link><dc:creator>netdevphoenix</dc:creator><comments>https://news.ycombinator.com/item?id=47002861</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47002861</guid></item></channel></rss>