<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: 13years</title><link>https://news.ycombinator.com/user?id=13years</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 04:26:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=13years" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[AI Vibe Coding, Is It Working? No]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.mindprison.cc/p/ai-vibe-coding-is-it-working-no">https://www.mindprison.cc/p/ai-vibe-coding-is-it-working-no</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46322413">https://news.ycombinator.com/item?id=46322413</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 19 Dec 2025 05:00:46 +0000</pubDate><link>https://www.mindprison.cc/p/ai-vibe-coding-is-it-working-no</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=46322413</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46322413</guid></item><item><title><![CDATA[New comment by 13years in "Grokipedia or Slopipedia? Is It Truthful and Accurate?"]]></title><description><![CDATA[
<p>Thanks, interesting reference. However, their analysis doesn't tell us much about the quality of Grokipedia. I'd be more interested in something like hallucination density, but I know of no way that could be measured.</p>
]]></description><pubDate>Wed, 05 Nov 2025 14:14:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45823060</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=45823060</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45823060</guid></item><item><title><![CDATA[Grokipedia or Slopipedia? Is It Truthful and Accurate?]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.mindprison.cc/p/grokipedia-or-slopipedia-is-it-truthful-accurate">https://www.mindprison.cc/p/grokipedia-or-slopipedia-is-it-truthful-accurate</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45818875">https://news.ycombinator.com/item?id=45818875</a></p>
<p>Points: 4</p>
<p># Comments: 2</p>
]]></description><pubDate>Wed, 05 Nov 2025 03:59:30 +0000</pubDate><link>https://www.mindprison.cc/p/grokipedia-or-slopipedia-is-it-truthful-accurate</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=45818875</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45818875</guid></item><item><title><![CDATA[The Work of AI, Ourselves]]></title><description><![CDATA[
<p>Article URL: <a href="https://oliverbatemandoesthework.substack.com/p/the-work-of-ai-ourselves">https://oliverbatemandoesthework.substack.com/p/the-work-of-ai-ourselves</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45807559">https://news.ycombinator.com/item?id=45807559</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 04 Nov 2025 04:58:25 +0000</pubDate><link>https://oliverbatemandoesthework.substack.com/p/the-work-of-ai-ourselves</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=45807559</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45807559</guid></item><item><title><![CDATA[AI Discovers Novel Cancer Drug, or Did It?]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.mindprison.cc/p/ai-discovers-novel-cancer-drug-or-did-it-gemma-27b">https://www.mindprison.cc/p/ai-discovers-novel-cancer-drug-or-did-it-gemma-27b</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45717142">https://news.ycombinator.com/item?id=45717142</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 27 Oct 2025 03:30:15 +0000</pubDate><link>https://www.mindprison.cc/p/ai-discovers-novel-cancer-drug-or-did-it-gemma-27b</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=45717142</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45717142</guid></item><item><title><![CDATA[Google Search Console Performance Report Stuck at Sunday October 19]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.seroundtable.com/google-search-console-performance-report-stuck-40311.html">https://www.seroundtable.com/google-search-console-performance-report-stuck-40311.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45683683">https://news.ycombinator.com/item?id=45683683</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 23 Oct 2025 16:19:51 +0000</pubDate><link>https://www.seroundtable.com/google-search-console-performance-report-stuck-40311.html</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=45683683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45683683</guid></item><item><title><![CDATA[New comment by 13years in "I Could Have Lived Without AI"]]></title><description><![CDATA[
<p>> We suck at measuring ourselves.<p>That is a certainty. I was once asked to calculate how much time we would save through our company's code-reuse program. I read all the material on estimating savings, but then concluded it was all ridiculous.<p>I came across a study that attempted to estimate how long it took to build libraries that had already been built. In this case there were no unknowns: you had the entire code. Estimates were still off by orders of magnitude. If we can't estimate the work when the work is already done, how could we ever estimate it when we know less?</p>
]]></description><pubDate>Tue, 21 Oct 2025 12:41:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=45655056</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=45655056</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45655056</guid></item><item><title><![CDATA[I Could Have Lived Without AI]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.mindprison.cc/p/i-could-have-lived-without-ai">https://www.mindprison.cc/p/i-could-have-lived-without-ai</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45652006">https://news.ycombinator.com/item?id=45652006</a></p>
<p>Points: 5</p>
<p># Comments: 3</p>
]]></description><pubDate>Tue, 21 Oct 2025 03:03:09 +0000</pubDate><link>https://www.mindprison.cc/p/i-could-have-lived-without-ai</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=45652006</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45652006</guid></item><item><title><![CDATA[The Seven Trillion Dollar Scam]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.alilybit.com/p/the-seven-trillion-dollar-scam">https://www.alilybit.com/p/the-seven-trillion-dollar-scam</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45564379">https://news.ycombinator.com/item?id=45564379</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 13 Oct 2025 03:17:11 +0000</pubDate><link>https://www.alilybit.com/p/the-seven-trillion-dollar-scam</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=45564379</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45564379</guid></item><item><title><![CDATA[Four Fallacies of Modern AI]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.apiad.net/p/the-four-fallacies-of-modern-ai">https://blog.apiad.net/p/the-four-fallacies-of-modern-ai</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45207008">https://news.ycombinator.com/item?id=45207008</a></p>
<p>Points: 79</p>
<p># Comments: 93</p>
]]></description><pubDate>Thu, 11 Sep 2025 02:26:16 +0000</pubDate><link>https://blog.apiad.net/p/the-four-fallacies-of-modern-ai</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=45207008</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45207008</guid></item><item><title><![CDATA[OpenAI Researchers Have Discovered Why Language Models Hallucinate]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.thealgorithmicbridge.com/p/openai-researchers-have-discovered">https://www.thealgorithmicbridge.com/p/openai-researchers-have-discovered</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45177322">https://news.ycombinator.com/item?id=45177322</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 09 Sep 2025 04:14:15 +0000</pubDate><link>https://www.thealgorithmicbridge.com/p/openai-researchers-have-discovered</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=45177322</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45177322</guid></item><item><title><![CDATA[Invisible Hands on the Scale]]></title><description><![CDATA[
<p>Article URL: <a href="https://culturalcourage.substack.com/p/invisible-hands-on-the-scale">https://culturalcourage.substack.com/p/invisible-hands-on-the-scale</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44979200">https://news.ycombinator.com/item?id=44979200</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 21 Aug 2025 23:02:52 +0000</pubDate><link>https://culturalcourage.substack.com/p/invisible-hands-on-the-scale</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=44979200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44979200</guid></item><item><title><![CDATA[New comment by 13years in "OpenAI claims gold-medal performance at IMO 2025"]]></title><description><![CDATA[
<p>Not sure how you get around the contamination problems. I use these every day, and they are extremely prone to making errors that are hard to perceive.<p>They are not reliable tools for any task that requires accurate data.</p>
]]></description><pubDate>Sat, 19 Jul 2025 23:07:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44620310</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=44620310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44620310</guid></item><item><title><![CDATA[New comment by 13years in "AI, Heidegger, and Evangelion"]]></title><description><![CDATA[
<p>A philosophical lens can sometimes help us perceive the root drivers of a set of problems. I sometimes call AI humanity's great hubris experiment.<p>AI's disproportionate capability to influence and capture attention, relative to its productive output, is a significant part of so many negative outcomes.</p>
]]></description><pubDate>Sat, 24 May 2025 20:12:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=44083517</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=44083517</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44083517</guid></item><item><title><![CDATA[New comment by 13years in "AI, Heidegger, and Evangelion"]]></title><description><![CDATA[
<p>Yes, that's an excellent description.</p>
]]></description><pubDate>Sat, 24 May 2025 20:03:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44083457</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=44083457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44083457</guid></item><item><title><![CDATA[New comment by 13years in "AI, Heidegger, and Evangelion"]]></title><description><![CDATA[
<p>I think it is creating a growing interest in authenticity among some, although that still feels like a minority opinion. Every content platform is being flooded with AI content. Social media floods it into all of my feeds.<p>I wish I could push a button and filter it all out, but that's the problem we have created: it is nearly impossible to do. If you want to consume truly authentic human content, it is nearly impossible to know whether you are. Everyone I interact with now might just be a bot.</p>
]]></description><pubDate>Sat, 24 May 2025 19:47:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44083369</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=44083369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44083369</guid></item><item><title><![CDATA[New comment by 13years in "AI, Heidegger, and Evangelion"]]></title><description><![CDATA[
<p>> Myself I believe technology and eventually AI were our fate once we became intelligence optimizers.<p>Yes, everyone talks about the Singularity, but I see the instrumental point of concern as something prior, which I've called the Event Horizon. We are optimizing, but no longer with any understanding of the outcomes.<p>"The point where we are now blind as to where we are going. The outcomes become increasingly unpredictable, and it becomes less likely that we can find our way back as it becomes a technology trap. Our existence becomes dependent on the very technology that is broken, fragile, unpredictable, and no longer understandable. There is just as much uncertainty in attempting to retrace our steps as there is in going forward."</p>
]]></description><pubDate>Sat, 24 May 2025 19:17:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44083186</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=44083186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44083186</guid></item><item><title><![CDATA[New comment by 13years in "AI, Heidegger, and Evangelion"]]></title><description><![CDATA[
<p>> AI is not inevitable fate. It is an invitation to wake up. The work is to keep dragging what is singular, poetic, and profoundly alive back into focus, despite all pressures to automate it away.<p>This is the struggle: the race to automate everything, to turn all of our social interactions into algorithmic digital bits. However, I don't think people are going to wake up just from calls to wake up, unfortunately.<p>We typically only wake up to anything once it is broken. Society has to break from the over-optimization of attention and engagement. Not sure how that is going to play out, but we certainly aren't slowing down yet.<p>For example, take a look at the short clip I have posted here. It is an example of just how far everyone is scaling bot and content farms. It is an absolute flood of noise into all of our knowledge repositories.
<a href="https://www.mindprison.cc/p/dead-internet-at-scale" rel="nofollow">https://www.mindprison.cc/p/dead-internet-at-scale</a></p>
]]></description><pubDate>Sat, 24 May 2025 17:41:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44082615</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=44082615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44082615</guid></item><item><title><![CDATA[New comment by 13years in "We Have Made No Progress Toward AGI"]]></title><description><![CDATA[
<p>>  that the author also tripped over<p>The evidence for unfaithful reasoning comes from Anthropic. It is in their system card and this Anthropic paper.<p><a href="https://assets.anthropic.com/m/71876fabef0f0ed4/original/reasoning_models_paper.pdf" rel="nofollow">https://assets.anthropic.com/m/71876fabef0f0ed4/original/rea...</a></p>
]]></description><pubDate>Thu, 24 Apr 2025 04:16:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=43779205</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=43779205</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43779205</guid></item><item><title><![CDATA[New comment by 13years in "We Have Made No Progress Toward AGI"]]></title><description><![CDATA[
<p>But it is not an illusion, and the answers make no sense. In some cases the models pick exactly the opposite answer; no human would do this.<p>Yes, being outside the training patterns is the point. I have no doubt that if you trained LLMs on this type of pattern with millions of examples, they could get the answers reliably.<p>The whole point is that humans do not need such training data. They understand such concepts from one example.</p>
]]></description><pubDate>Thu, 24 Apr 2025 04:06:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43779162</link><dc:creator>13years</dc:creator><comments>https://news.ycombinator.com/item?id=43779162</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43779162</guid></item></channel></rss>