<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: babblingfish</title><link>https://news.ycombinator.com/user?id=babblingfish</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 10:42:36 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=babblingfish" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by babblingfish in "Show HN: Destiny – Claude Code's fortune Teller skill"]]></title><description><![CDATA[
<p>Feature request: a marriage palace reading built from two user profiles</p>
]]></description><pubDate>Fri, 01 May 2026 21:20:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47980493</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47980493</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47980493</guid></item><item><title><![CDATA[New comment by babblingfish in "DRAM Crunch: Lessons for System Design"]]></title><description><![CDATA[
<p>This is what keeps Amodei and Altman up at night. Their whole moat is data centers.  But what if we didn't need the data centers?</p>
]]></description><pubDate>Thu, 30 Apr 2026 00:29:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47956490</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47956490</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47956490</guid></item><item><title><![CDATA[New comment by babblingfish in "New Modern Greek"]]></title><description><![CDATA[
<p>I came here to say the same thing, and this is a wonderful summary!</p>
]]></description><pubDate>Wed, 15 Apr 2026 17:47:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47782618</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47782618</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47782618</guid></item><item><title><![CDATA[New comment by babblingfish in "Claude Mythos: The System Card"]]></title><description><![CDATA[
<p>The "hiding from researchers" framing is particularly bad. The parsimonious explanation for why a model produces different outputs when it detects eval contexts: eval contexts appear differently in the training distribution and the model learned different output patterns for them. No theory of mind required. Occam's razor.<p>The agentic behaviors emerge from optimization pressure plus tool access plus a long context window. Interesting engineering. Not intent.<p>People are falling for yet another Anthropic PR stunt.</p>
]]></description><pubDate>Mon, 13 Apr 2026 17:02:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47754967</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47754967</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47754967</guid></item><item><title><![CDATA[New comment by babblingfish in "They're made out of meat (1991)"]]></title><description><![CDATA[
<p>It's amazing that consciousness remains a mystery given all the scientific progress of the last 100 years.</p>
]]></description><pubDate>Wed, 08 Apr 2026 16:46:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47692727</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47692727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47692727</guid></item><item><title><![CDATA[New comment by babblingfish in "[dead]"]]></title><description><![CDATA[
<p>Why reading cozy sci-fi in the age of AI feels like an act of resistance</p>
]]></description><pubDate>Tue, 31 Mar 2026 21:03:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47593444</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47593444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47593444</guid></item><item><title><![CDATA[New comment by babblingfish in "Ollama is now powered by MLX on Apple Silicon in preview"]]></title><description><![CDATA[
<p>I see a lot of people are confused by the electricity claim, so I'll elaborate. The assumption I'm making is that on device, people will run smaller models that fit on the machines they already own, without needing to buy new computers. If everyone ran inference locally, there would be no need for these massive data centers and the huge quantities of electricity they consume; we'd be using the machines and the electricity we already have.<p>People are comparing the cost per inference (or per token, or whatever) and pointing out that data centers are more efficient, which makes obvious sense. What I'm saying is that if we eliminated the need to build out dozens of gigawatt data centers entirely, we would use less electricity overall. I feel like this makes intuitive sense; people are getting lost in the details of per-inference cost and per-model performance.</p>
]]></description><pubDate>Tue, 31 Mar 2026 17:11:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47590460</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47590460</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47590460</guid></item><item><title><![CDATA[New comment by babblingfish in "Ollama is now powered by MLX on Apple Silicon in preview"]]></title><description><![CDATA[
<p>LLMs on device are the future. It's more secure, it solves the mismatch between inference demand and data center supply, and it would use less electricity. It's just a matter of getting the performance good enough; most users don't need frontier-model performance.</p>
]]></description><pubDate>Tue, 31 Mar 2026 04:40:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47582826</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47582826</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47582826</guid></item><item><title><![CDATA[New comment by babblingfish in "R3 Bio pitched “brainless clones” to serve the role of backup human bodies"]]></title><description><![CDATA[
<p>I don't think this idea could work. There's a common misconception that our brains control our bodies the way software controls hardware. In fact, the brain is intrinsically connected to the rest of the body via the central nervous system and the sensory and motor neurons. You can't just swap out a brain; it's integrated with the rest of the body in a fundamental way. If you cloned someone, the neuronal connections between the CNS and the organs would not be the same, because those interconnections develop over a lifetime and are not predetermined at birth.<p>It also feels deeply unethical to me. Reminds me of "Never Let Me Go" by Kazuo Ishiguro.</p>
]]></description><pubDate>Mon, 30 Mar 2026 22:31:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47580532</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47580532</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47580532</guid></item><item><title><![CDATA[New comment by babblingfish in "Google is reportedly testing a Gemini app for Mac"]]></title><description><![CDATA[
<p>While this does not seem to include inference done on your local computer, it feels to me like a precursor to doing so.<p>To my mind, inference at the edge is what will kill inference in the data center. Inference at the edge is more secure, faster, and uses less electricity. People share vulnerable and personal info in their chats; why share it with OpenAI, who will use it to sell ads?<p>In a world where most inference is done at the edge, what do we need all of these data centers for? You may say we need them to keep pre-training ever bigger models, and yet pre-training has hit a performance plateau.<p>Inference in a data center never made sense. It's a massive investment of resources when we're all carrying around computers in our pockets. As someone who values my privacy, I will start doing inference on device exclusively as soon as possible.</p>
]]></description><pubDate>Fri, 20 Mar 2026 16:20:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47456813</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47456813</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47456813</guid></item><item><title><![CDATA[Google is reportedly testing a Gemini app for Mac]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.engadget.com/ai/google-is-reportedly-testing-a-gemini-app-for-mac-203703372.html">https://www.engadget.com/ai/google-is-reportedly-testing-a-gemini-app-for-mac-203703372.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47456753">https://news.ycombinator.com/item?id=47456753</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 20 Mar 2026 16:16:06 +0000</pubDate><link>https://www.engadget.com/ai/google-is-reportedly-testing-a-gemini-app-for-mac-203703372.html</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47456753</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47456753</guid></item><item><title><![CDATA[New comment by babblingfish in "The Los Angeles Aqueduct Is Wild"]]></title><description><![CDATA[
<p>I really dig the editorial viewpoint of this article. New journalism style meets fun facts about engineering.</p>
]]></description><pubDate>Fri, 20 Mar 2026 15:52:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47456363</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47456363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47456363</guid></item><item><title><![CDATA[New comment by babblingfish in "Language model teams as distributed systems"]]></title><description><![CDATA[
<p>[flagged]</p>
]]></description><pubDate>Tue, 17 Mar 2026 01:26:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47407469</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47407469</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47407469</guid></item><item><title><![CDATA[New comment by babblingfish in "Writers and Their Day Jobs"]]></title><description><![CDATA[
<p>Brandon Sanderson often says in interviews that "laying bricks" is the best job a writer can have. He also says being a software engineer is a particularly bad job for writers because you cannot do it on autopilot. I can confirm.<p>Back then, all jobs moved at a much slower pace. There was a lot more off time during work hours.</p>
]]></description><pubDate>Thu, 26 Feb 2026 17:32:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47169207</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47169207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47169207</guid></item><item><title><![CDATA[New comment by babblingfish in "Looks like it is happening"]]></title><description><![CDATA[
<p>The number of submissions to the high-energy physics category on arXiv is double this year compared to the historical average. The author hypothesizes the increase is due to papers being written by LLMs.</p>
]]></description><pubDate>Tue, 24 Feb 2026 21:52:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47143739</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=47143739</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47143739</guid></item><item><title><![CDATA[New comment by babblingfish in "Show HN: I built "AI Wattpad" to eval LLMs on fiction"]]></title><description><![CDATA[
<p>Do watch the video as it makes a compelling argument against this exact kind of thing. From a product design perspective, you're asking people to read a bunch of slop and organize it into slop piles. What's the point of that? Honestly it seems like a huge waste of everyone's time.</p>
]]></description><pubDate>Tue, 03 Feb 2026 19:24:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46875927</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=46875927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46875927</guid></item><item><title><![CDATA[New comment by babblingfish in "Show HN: I built "AI Wattpad" to eval LLMs on fiction"]]></title><description><![CDATA[
<p>> The surge of AI, large language models, and generated art begs fascinating questions. The industry’s progress so far is enough to force us to explore what art is and why we make it. Brandon Sanderson explores the rise of AI art, the importance of the artistic process, and why he rebels against this new technological and artistic frontier.<p>What It Means To Be Human | Art in the AI Era<p><a href="https://www.youtube.com/watch?v=mb3uK-_QkOo" rel="nofollow">https://www.youtube.com/watch?v=mb3uK-_QkOo</a></p>
]]></description><pubDate>Tue, 03 Feb 2026 19:11:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46875721</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=46875721</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46875721</guid></item><item><title><![CDATA[New comment by babblingfish in "Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out"]]></title><description><![CDATA[
<p>Someone is using it to write a memoir, which I find incredibly ironic: the goal of a memoir is self-reflection, and they're outsourcing their introspection to an LLM. It says their inspirations are Dostoyevsky and Proust.</p>
]]></description><pubDate>Fri, 30 Jan 2026 05:31:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46820877</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=46820877</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46820877</guid></item><item><title><![CDATA[New comment by babblingfish in "Why is there no recent fiction with software engineer protagonists?"]]></title><description><![CDATA[
<p>Please Report Your Bug Here (Josh Riedel, 2023)
<a href="https://www.kirkusreviews.com/book-reviews/josh-riedel/please-report-your-bug-here/" rel="nofollow">https://www.kirkusreviews.com/book-reviews/josh-riedel/pleas...</a></p>
]]></description><pubDate>Tue, 27 Jan 2026 20:00:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46785606</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=46785606</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46785606</guid></item><item><title><![CDATA[New comment by babblingfish in "AI Lazyslop and Personal Responsibility"]]></title><description><![CDATA[
<p>This is consistent with my own observations of LLM-generated code increasing the burden on reviewers. You either review the code carefully, putting more effort into it than the original author did, or you approve it without careful review. I feel like the latter is becoming more common. This is effectively creating tech debt that will only be realized later by future maintainers.</p>
]]></description><pubDate>Mon, 26 Jan 2026 20:30:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46771064</link><dc:creator>babblingfish</dc:creator><comments>https://news.ycombinator.com/item?id=46771064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46771064</guid></item></channel></rss>