<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dhampi</title><link>https://news.ycombinator.com/user?id=dhampi</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 15:56:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dhampi" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dhampi in "Ontario auditors find doctors' AI note takers routinely blow basic facts"]]></title><description><![CDATA[
<p>Well, with thinking models, it’s not that simple. The probability distribution is over the next token. But if a model thinks before producing an answer, you can get a high-confidence next token even though sampling the model’s thinking chains (MCMC-style) would reveal that the real answer distribution had low confidence.</p>
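A toy numerical sketch of the point (the chains and numbers are invented for illustration, not from any real model): conditioned on any single thinking chain the final token looks confident, but marginalizing over chains recovers the real uncertainty.

```python
# Hypothetical model: it samples one of three hidden "thinking chains",
# and each chain makes the final answer token nearly deterministic.
chains = {
    "chain_A": {"prob": 0.40, "answer_dist": {"yes": 0.99, "no": 0.01}},
    "chain_B": {"prob": 0.35, "answer_dist": {"yes": 0.02, "no": 0.98}},
    "chain_C": {"prob": 0.25, "answer_dist": {"yes": 0.97, "no": 0.03}},
}

# Conditioned on any single chain, the next-token distribution looks confident.
per_chain_confidence = {k: max(v["answer_dist"].values()) for k, v in chains.items()}

# Marginalizing over chains (what sampling many thinking chains estimates)
# recovers the model's actual uncertainty about the answer.
marginal = {}
for c in chains.values():
    for tok, p in c["answer_dist"].items():
        marginal[tok] = marginal.get(tok, 0.0) + c["prob"] * p
```

Here every chain is at least 97% sure of its answer, yet the marginal answer distribution is only about 65/35.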
]]></description><pubDate>Fri, 15 May 2026 02:45:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=48143953</link><dc:creator>dhampi</dc:creator><comments>https://news.ycombinator.com/item?id=48143953</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48143953</guid></item><item><title><![CDATA[New comment by dhampi in "Canada's deal with China signals it is serious about shift from US"]]></title><description><![CDATA[
<p>Canada has no domestic automaker, and US automakers, under pressure from Trump, are closing some factories in Canada and relocating production to the US.<p>Yes, the Canadian auto industry will take a hit, but it has already taken one from the US (and might take more).</p>
]]></description><pubDate>Sat, 17 Jan 2026 21:25:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46662260</link><dc:creator>dhampi</dc:creator><comments>https://news.ycombinator.com/item?id=46662260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46662260</guid></item><item><title><![CDATA[New comment by dhampi in "Advanced Rail Energy Storage of North America"]]></title><description><![CDATA[
<p>Presumably a lot less expensive than pumped hydro to build.</p>
]]></description><pubDate>Sun, 04 Jan 2026 04:02:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46484758</link><dc:creator>dhampi</dc:creator><comments>https://news.ycombinator.com/item?id=46484758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46484758</guid></item><item><title><![CDATA[New comment by dhampi in "Detailed balance in large language model-driven agents"]]></title><description><![CDATA[
<p>The actual title is pretty buzzy given how limited the described task is. In one specific, very constrained and artificial task, you can find something like detailed balance. And even then, their data are quite far from a perfect fit to detailed balance.<p>I’d love to be able to use my least-action-principle knowledge for LLM interpretability, but this paper doesn’t convince me at all :)</p>
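For reference, "detailed balance" has a crisp numerical meaning that's easy to check exactly in the classical Markov-chain setting (this is a generic textbook check, not the paper's method): the probability flow pi_i * P(i→j) must equal the reverse flow pi_j * P(j→i) for every pair of states.

```python
import numpy as np

# A 3-state Metropolis chain targeting pi satisfies detailed balance by construction.
pi = np.array([0.5, 0.3, 0.2])

def metropolis_matrix(pi):
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # uniform proposal over the other states, Metropolis acceptance
                P[i, j] = (1.0 / (n - 1)) * min(1.0, pi[j] / pi[i])
        P[i, i] = 1.0 - P[i].sum()  # remaining mass stays put
    return P

def detailed_balance_residual(P, pi):
    F = pi[:, None] * P           # flow matrix: pi_i * P_ij
    return np.abs(F - F.T).max()  # zero iff detailed balance holds exactly

P = metropolis_matrix(pi)
residual = detailed_balance_residual(P, pi)
```

Detailed balance immediately implies pi is stationary (sum each row of the flow identity), which is why a good empirical fit to it is a strong structural claim.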
]]></description><pubDate>Sat, 20 Dec 2025 22:06:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46340075</link><dc:creator>dhampi</dc:creator><comments>https://news.ycombinator.com/item?id=46340075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46340075</guid></item><item><title><![CDATA[New comment by dhampi in "What I don’t like about chains of thoughts (2023)"]]></title><description><![CDATA[
<p>No, I still don’t understand the analogy.<p>All of this burn-in stuff is designed to get your Markov chain to forget where it started.<p>But I don’t want to get from “how many apples does Bob have?” to a state where Bob and the apples are forgotten.  I want to remember that state, and I probably want to stay close to it — not far away in the “typical set” of all language.<p>Are you implicitly conditioning the probability distribution or otherwise somehow cutting the manifold down? Then the analogy would be plausible to me, but I don’t understand what conditioning we’re doing and how the LLM respects that.<p>Or are you claiming that we want to travel to the “closest” high probability region somehow? So we’re not really doing burn-in but something a little more delicate?</p>
]]></description><pubDate>Thu, 04 Dec 2025 05:15:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46144038</link><dc:creator>dhampi</dc:creator><comments>https://news.ycombinator.com/item?id=46144038</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46144038</guid></item><item><title><![CDATA[New comment by dhampi in "What I don’t like about chains of thoughts (2023)"]]></title><description><![CDATA[
<p>I don't understand the analogy.<p>If I'm using an MCMC algorithm to sample a probability distribution, I need to wait for my Markov chain to converge to a stationary distribution before sampling, sure.<p>But in no way is 'a good answer' a stationary state in the LLM Markov chain. If I continue running next-token prediction, I'm not going to start looping.</p>
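For contrast, here is what burn-in actually buys you in classical MCMC (a minimal sketch, nothing LLM-specific): a Metropolis chain targeting a standard normal, started far from the mode, forgets its starting point and settles into the stationary distribution.

```python
import random, math

# Metropolis sampler targeting a standard normal, deliberately started at x = 50.
# After burn-in the chain hovers near 0 regardless of where it began --
# the "forget where you started" property the burn-in analogy relies on.
random.seed(0)

def metropolis_normal(x0, n_steps, step=1.0):
    x, out = x0, []
    for _ in range(n_steps):
        prop = x + random.uniform(-step, step)
        # accept with prob min(1, p(prop)/p(x)) for p(x) ~ exp(-x^2/2)
        if math.log(random.random()) < 0.5 * (x * x - prop * prop):
            x = prop
        out.append(x)
    return out

samples = metropolis_normal(x0=50.0, n_steps=20000)
post_burn = samples[5000:]          # discard burn-in
mean = sum(post_burn) / len(post_burn)
```

The post-burn-in samples have mean near 0 and variance near 1 even though the chain started at 50. The comment's objection is that continuing next-token prediction past a good answer has no analogous stationary target to converge to.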
]]></description><pubDate>Thu, 04 Dec 2025 04:32:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46143836</link><dc:creator>dhampi</dc:creator><comments>https://news.ycombinator.com/item?id=46143836</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46143836</guid></item><item><title><![CDATA[New comment by dhampi in "Fannie Mae officials ousted after sounding alarm on sharing confidential data"]]></title><description><![CDATA[
<p>I used to be befuddled by this too. Then I lived in the U.S. for a few years.<p>I think the answer is that the Democrats are shockingly bad too, in many parts of the US. People expect grift and corruption from both parties.<p>Perhaps they didn’t expect the scale of this admin’s grift.</p>
]]></description><pubDate>Fri, 14 Nov 2025 04:20:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45923819</link><dc:creator>dhampi</dc:creator><comments>https://news.ycombinator.com/item?id=45923819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45923819</guid></item><item><title><![CDATA[New comment by dhampi in "The Principles of Diffusion Models"]]></title><description><![CDATA[
<p>Guessing you’re a physicist based on the name. You don’t think automatically doing RG flow in reverse has beauty to it?<p>There’s a lot of “force” in statistics, but that force relies on pretty deep structures and choices.</p>
]]></description><pubDate>Sun, 09 Nov 2025 22:25:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=45869813</link><dc:creator>dhampi</dc:creator><comments>https://news.ycombinator.com/item?id=45869813</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45869813</guid></item><item><title><![CDATA[New comment by dhampi in "Correctness and composability bugs in the Julia ecosystem (2022)"]]></title><description><![CDATA[
<p>I quit Julia after running into serious bugs in basic CSV package functionality a few years back.<p>The language is elegant and intuitive and delivers what it promises 99% of the time, but 99% isn’t enough when other languages are more dependable.</p>
]]></description><pubDate>Tue, 30 Sep 2025 16:21:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45427490</link><dc:creator>dhampi</dc:creator><comments>https://news.ycombinator.com/item?id=45427490</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45427490</guid></item><item><title><![CDATA[New comment by dhampi in "The Rising Sea: Foundations of Algebraic Geometry Notes"]]></title><description><![CDATA[
<p>FOAG is probably the shortest readable introduction to serious algebraic geometry anyone has written. That's the nature of algebraic geometry.</p>
]]></description><pubDate>Tue, 16 Sep 2025 05:34:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45258388</link><dc:creator>dhampi</dc:creator><comments>https://news.ycombinator.com/item?id=45258388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45258388</guid></item><item><title><![CDATA[New comment by dhampi in "Why haven't quantum computers factored 21 yet?"]]></title><description><![CDATA[
<p>Yes, Schrödinger’s equation is entirely deterministic. There is no randomness inherent in quantum mechanics. The perceived randomness only arises when we have incomplete information. (It is another matter that quantum mechanics to some extent forbids us from having perfect information.)<p>I mean no disrespect, but I don’t think it’s a particularly useful activity to speculate on physics if you don’t know the basic equations.</p>
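A minimal numerical illustration of the determinism claim (a toy single-qubit example, not tied to the factoring discussion): evolving the same state twice under the same Hamiltonian gives bit-for-bit identical amplitudes; probabilities only enter via the Born rule at measurement.

```python
import numpy as np

# Unitary evolution U = exp(-i*theta*sigma_x) for a single qubit:
# U = cos(theta)*I - i*sin(theta)*sigma_x. Schrodinger evolution is
# deterministic -- same input state, same output state, every time.
def evolve(state, theta):
    c, s = np.cos(theta), np.sin(theta)
    U = np.array([[c, -1j * s], [-1j * s, c]])
    return U @ state

psi0 = np.array([1.0 + 0j, 0.0 + 0j])  # |0>
run1 = evolve(psi0, 0.3)
run2 = evolve(psi0, 0.3)

# Born-rule probabilities -- this is where the apparent randomness lives,
# not in the evolution itself.
probs = np.abs(run1) ** 2
```

The two runs agree exactly, the norm is preserved (unitarity), and the outcome probabilities are sin^2 and cos^2 of the rotation angle.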
]]></description><pubDate>Sun, 31 Aug 2025 20:24:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45086746</link><dc:creator>dhampi</dc:creator><comments>https://news.ycombinator.com/item?id=45086746</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45086746</guid></item></channel></rss>