<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mcfry</title><link>https://news.ycombinator.com/user?id=mcfry</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 26 Apr 2026 10:17:26 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mcfry" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mcfry in "ChatGPT Images 2.0"]]></title><description><![CDATA[
<p>How hard is it to have a video player with a fucking volume toggle?</p>
]]></description><pubDate>Wed, 22 Apr 2026 02:50:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47858250</link><dc:creator>mcfry</dc:creator><comments>https://news.ycombinator.com/item?id=47858250</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47858250</guid></item><item><title><![CDATA[New comment by mcfry in "'Attention is all you need' coauthor says he's 'sick' of transformers"]]></title><description><![CDATA[
<p>But it can't actually deduce, can it? If 136891438 * 1294538 isn't in the training data, it won't be able to give you a valid answer using the model itself. There's no process. It has to offload that task to a tool, which will then calculate and return.<p>Further, any offloading needs to be manually defined at some point. You could maybe give it a way to define its own tools, but even then they would still be defined by what has come before.</p>
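The offloading described here can be sketched as a minimal dispatch loop: the model emits a structured request, and a tool registered ahead of time does the exact arithmetic. All names here (TOOLS, handle_tool_call, the request dict shape) are hypothetical illustrations, not any particular vendor's API.

```python
def multiply(a: int, b: int) -> int:
    # The "tool": exact arithmetic the model itself cannot guarantee.
    return a * b

# The registry has to be defined ahead of time, by hand.
TOOLS = {"multiply": multiply}

def handle_tool_call(request: dict):
    # Dispatch the model's request to the predefined tool; the result
    # would then be fed back into the model's context as text.
    return TOOLS[request["name"]](*request["args"])

# A model "deciding" to offload the multiplication from the comment above:
request = {"name": "multiply", "args": [136891438, 1294538]}
print(handle_tool_call(request))  # exact product, computed outside the model
```

The point stands either way: the dispatch table is fixed in advance, so anything the model "does" here was already defined by a human.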
]]></description><pubDate>Mon, 27 Oct 2025 18:00:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45724273</link><dc:creator>mcfry</dc:creator><comments>https://news.ycombinator.com/item?id=45724273</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45724273</guid></item><item><title><![CDATA[New comment by mcfry in "'Attention is all you need' coauthor says he's 'sick' of transformers"]]></title><description><![CDATA[
<p>A proof assistant is a verifier and a tool, and therefore a patch, so I really fail to see how that could be understood as the LLM itself doing deduction.</p>
]]></description><pubDate>Fri, 24 Oct 2025 20:51:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=45698943</link><dc:creator>mcfry</dc:creator><comments>https://news.ycombinator.com/item?id=45698943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45698943</guid></item><item><title><![CDATA[New comment by mcfry in "'Attention is all you need' coauthor says he's 'sick' of transformers"]]></title><description><![CDATA[
<p>Something I haven't been able to fully parse, which perhaps someone has better insight into: aren't transformers inherently capable of only inductive reasoning? To actually progress to AGI, which is being promised at least as an eventuality, don't models have to be capable of deduction? Wouldn't that mean fundamentally changing the pipeline in some way? And no, tools are not deduction; they are useful patches for the lack of it.<p>Models need to move beyond parsing existing information into existing ideas.</p>
]]></description><pubDate>Fri, 24 Oct 2025 17:42:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45697134</link><dc:creator>mcfry</dc:creator><comments>https://news.ycombinator.com/item?id=45697134</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45697134</guid></item><item><title><![CDATA[New comment by mcfry in "Claude Skills"]]></title><description><![CDATA[
<p>This is just... rebranding for instructions and files? lol. Love how the instructions for creating a skill are buried. Marketing go brr.</p>
]]></description><pubDate>Fri, 17 Oct 2025 00:48:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45612317</link><dc:creator>mcfry</dc:creator><comments>https://news.ycombinator.com/item?id=45612317</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45612317</guid></item><item><title><![CDATA[New comment by mcfry in "Making Minecraft Spherical"]]></title><description><![CDATA[
<p>You still have the 'pointy' problem, even with many layers, no? The bottom-most block has to be a triangle.</p>
]]></description><pubDate>Tue, 02 Sep 2025 15:46:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45104641</link><dc:creator>mcfry</dc:creator><comments>https://news.ycombinator.com/item?id=45104641</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45104641</guid></item><item><title><![CDATA[New comment by mcfry in "Learning Is Slower Than You Think"]]></title><description><![CDATA[
<p>Was thinking the same thing. It's impossible to take this article's criticisms of AI seriously when it's so obviously over-edited with AI itself.</p>
]]></description><pubDate>Tue, 29 Jul 2025 15:47:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44724834</link><dc:creator>mcfry</dc:creator><comments>https://news.ycombinator.com/item?id=44724834</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44724834</guid></item></channel></rss>