<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: JimSanchez</title><link>https://news.ycombinator.com/user?id=JimSanchez</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 16:26:49 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=JimSanchez" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by JimSanchez in "Show HN: How I topped the HuggingFace open LLM leaderboard on two gaming GPUs"]]></title><description><![CDATA[
<p>It’s a fascinating idea that LLM performance might improve simply by changing the inference path through existing layers rather than retraining weights. It’s interesting to think of transformer stacks developing something like functional “circuits,” similar to brain regions.</p>
]]></description><pubDate>Wed, 11 Mar 2026 18:19:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47339184</link><dc:creator>JimSanchez</dc:creator><comments>https://news.ycombinator.com/item?id=47339184</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47339184</guid></item><item><title><![CDATA[New comment by JimSanchez in "Yann LeCun raises $1B to build AI that understands the physical world"]]></title><description><![CDATA[
<p>Interesting perspective from LeCun. The debate between scaling LLMs versus building systems that understand the physical world seems like one of the big open questions in AI right now. It will be fascinating to see whether “world models” end up complementing LLMs or eventually replacing parts of them.</p>
]]></description><pubDate>Wed, 11 Mar 2026 17:57:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47338912</link><dc:creator>JimSanchez</dc:creator><comments>https://news.ycombinator.com/item?id=47338912</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47338912</guid></item></channel></rss>