<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tavern1991</title><link>https://news.ycombinator.com/user?id=tavern1991</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 14:59:05 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tavern1991" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tavern1991 in "What can LLMs never do?"]]></title><description><![CDATA[
<p>To me it seems you can get the LLM to predict some tokens containing words that point to the right algorithm. But the LLM doesn't know what it chose; it just sees tokens. Do you think it could somehow tell that it had chosen a CNN in its response, and then use that knowledge to actually run a CNN?</p>
]]></description><pubDate>Tue, 30 Apr 2024 14:06:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=40211078</link><dc:creator>tavern1991</dc:creator><comments>https://news.ycombinator.com/item?id=40211078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40211078</guid></item><item><title><![CDATA[New comment by tavern1991 in "What can LLMs never do?"]]></title><description><![CDATA[
<p>What do you mean by "choose the right one to solve a problem"? That phrase seems to be doing a lot of work in your take. My understanding is that an LLM has no capability to choose anything. It is predicting tokens based on its training data and your prompt.</p>
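<p>The mechanics I mean can be sketched in a few lines. This is only a toy: the hypothetical bigram table stands in for a trained network, and greedy argmax stands in for real sampling. The point is that at every step the model just scores candidate next tokens given the context so far; there is no separate "choosing" step anywhere in the loop.</p>
<pre><code>
```python
# Toy sketch of next-token prediction (hypothetical bigram "model";
# a real LLM replaces the lookup table with a neural net, but the
# generation loop has the same shape: score, pick, append, repeat).

TOY_MODEL = {  # hypothetical scores standing in for training data
    "<s>": {"the": 0.9, "a": 0.1},
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"</s>": 1.0},
}

def predict_next(tokens):
    """Score next-token candidates given the context; take the argmax."""
    scores = TOY_MODEL[tokens[-1]]
    return max(scores, key=scores.get)

def generate(prompt=("<s>",), max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = predict_next(tokens)   # the model only ever emits a token
        tokens.append(nxt)
        if nxt == "</s>":
            break
    return tokens

print(generate())  # ['<s>', 'the', 'cat', 'sat', '</s>']
```
</code></pre>
<p>Nothing in that loop inspects or acts on what was generated; the output is just more context for the next prediction.</p>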
]]></description><pubDate>Tue, 30 Apr 2024 02:00:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=40206483</link><dc:creator>tavern1991</dc:creator><comments>https://news.ycombinator.com/item?id=40206483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40206483</guid></item><item><title><![CDATA[New comment by tavern1991 in "What can LLMs never do?"]]></title><description><![CDATA[
<p>I couldn't agree more. It is shocking to me how many of my peers think something magical is happening inside an LLM. It is just a token predictor. It doesn't know anything, and it can't solve novel problems.</p>
]]></description><pubDate>Tue, 30 Apr 2024 01:54:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=40206452</link><dc:creator>tavern1991</dc:creator><comments>https://news.ycombinator.com/item?id=40206452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40206452</guid></item></channel></rss>