<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: borsch_not_soup</title><link>https://news.ycombinator.com/user?id=borsch_not_soup</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 13:55:45 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=borsch_not_soup" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by borsch_not_soup in "Soul Player C64 – A real transformer running on a 1 MHz Commodore 64"]]></title><description><![CDATA[
<p>Interesting; I’ve always assumed neural network progress was bottlenecked primarily by compute.</p><p>If LLM-like models can produce genuinely useful outputs on something as constrained as a Commodore 64 (or, even more convincingly, if someone manages to train a capable model within the limits of hardware from that era), it would suggest we left a lot of progress on the table: not just in efficiency, but in how we framed the problem space for decades.</p>
]]></description><pubDate>Tue, 21 Apr 2026 01:36:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47843525</link><dc:creator>borsch_not_soup</dc:creator><comments>https://news.ycombinator.com/item?id=47843525</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47843525</guid></item></channel></rss>