<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jaidhyani</title><link>https://news.ycombinator.com/user?id=jaidhyani</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 06:38:53 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jaidhyani" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jaidhyani in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>Said company is literally in court against said government at the moment, after said government attempted to designate it too dangerous to do business with.</p>
]]></description><pubDate>Tue, 07 Apr 2026 21:48:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47681770</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=47681770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47681770</guid></item><item><title><![CDATA[New comment by jaidhyani in "Baby is healed with first personalized gene-editing treatment"]]></title><description><![CDATA[
<p>Approximately no one in the community thinks this. If you can go two days in a rationalist space without hearing about "Chesterton's Fence", I'll be impressed. No one thinks they're 100% rational, nor that this is a reasonable aspiration. Traditions are generally regarded as important enough that no small amount of effort has gone into trying to build new ones. Not only is it the case that no one thinks that anyone, including themselves, is 100% correct, but the community norm is to express credence in probabilities and convert those probabilities into bets when possible. People in the rationalist community constantly, loudly, and proudly disagree with each other, to the point that this can make it difficult to coordinate on anything. And everyone is obsessed with studying and learning, and with constantly trying to come up with ways to do this more effectively.<p>Like, I'm sure there are people who approximately match the description you're giving here. But I've spent a lot of time around flesh-and-blood rationalists and EAs, and they violently diverge from the account you give here.</p>
]]></description><pubDate>Fri, 16 May 2025 08:46:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44003129</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=44003129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44003129</guid></item><item><title><![CDATA[New comment by jaidhyani in "Food price hikes are no longer outpacing overall inflation"]]></title><description><![CDATA[
<p>Compare the trajectory of the US to other industrialized countries.<p>The best charts I could find on this are from an admittedly-biased think tank, but the sources it's pulling from are well-regarded and neutral:<p><a href="https://www.americanprogress.org/article/7-reasons-the-u-s-economy-is-among-the-strongest-in-the-g7/" rel="nofollow noreferrer">https://www.americanprogress.org/article/7-reasons-the-u-s-e...</a><p>The US also recently passed a massive investment in building out renewable energy.</p>
]]></description><pubDate>Fri, 22 Dec 2023 02:22:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=38730164</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=38730164</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38730164</guid></item><item><title><![CDATA[New comment by jaidhyani in "Solar and wind to top coal power in US for first time in 2024"]]></title><description><![CDATA[
<p><a href="https://ourworldindata.org/renewable-energy">https://ourworldindata.org/renewable-energy</a><p>Quick stats for the US:<p>In 2022, 11.3% of energy was generated by renewables (hydropower, solar, wind, geothermal, bioenergy, wave, and tidal). It's been growing at just under 0.5pp/year since 2007, when it was at 4.4%.<p>This is primarily driven by wind and solar. Wind power took off around 2000, and in the years since has grown from 5.6TWh to 434.3TWh in 2022. Solar power took off around 2011 and has since grown from 1.82TWh to 205.1TWh. Hydropower remains the #2 renewable in the US, with a noisy-but-nondirectional generation between 200TWh and 350TWh going back to the 60's, but solar appears poised to overtake it by 2024. All other renewables combined are holding steady or slightly dropping at ~75TWh (though anecdotally there may be some large geothermal capacity coming online in the medium-term future that would change this).<p>Narrowing the focus from all-energy-generation (e.g. including fuel) to specifically electricity, the US is currently generating 22.3% of its electricity from renewables, a number that has been steadily increasing at about 1pp/year since it was 8.4% in 2007.<p>Naïve extrapolation suggests we're about 75 years out from 100% renewables for electricity, but of course there are reasons to doubt that. For one, we've recently passed the tipping point where renewables are just straightforwardly cheaper than other sources of energy in many circumstances, and improvements in technology and infrastructure will just continue to make this true in more and more cases.</p>
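<p>The naive extrapolation above can be sketched in a couple of lines (a toy linear model using the quoted figures; the ~1pp/year growth rate is the commenter's estimate, not an official projection):</p>

```python
# Naive linear extrapolation: years until the renewable share of US
# electricity reaches 100%, assuming constant growth in percentage points.
def years_to_full_renewables(current_share_pct, growth_pp_per_year):
    return (100.0 - current_share_pct) / growth_pp_per_year

# 22.3% of electricity today, growing at roughly 1pp/year since 2007.
print(round(years_to_full_renewables(22.3, 1.0)))  # prints 78
```

<p>The gap between this and the "about 75 years" figure in the comment is just rounding of the growth rate; either way, the point is that the linear model ignores the cost-driven acceleration mentioned next.</p>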
]]></description><pubDate>Sat, 16 Dec 2023 21:48:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=38668039</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=38668039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38668039</guid></item><item><title><![CDATA[New comment by jaidhyani in "Sam Bankman-Fried is a feature, not a bug"]]></title><description><![CDATA[
<p>> His actions made perfect sense from his utilitarian Effective Altruist worldview.<p>They don't. Everyone in EA (AFAICT) has been pretty clear about this. Lying and undermining trust and institutions does tremendous lasting harm.<p>I am also tired of "people are very concerned about X and think that it's important, so they're basically a cult".</p>
]]></description><pubDate>Sun, 05 Nov 2023 18:09:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=38153760</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=38153760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38153760</guid></item><item><title><![CDATA[New comment by jaidhyani in "Sam Bankman-Fried is a feature, not a bug"]]></title><description><![CDATA[
<p>I will never cease to wonder at how so many people can blame so much on people trying to take a rigorous approach to world improvement, up to and including "a narcissistic con-man claimed to be trying to do X, and I can imagine a scenario where someone could justify doing the shitty things he did in the name of X, so therefore everyone trying to do X must also suck and be complicit in fraud and assorted sins".</p>
]]></description><pubDate>Sun, 05 Nov 2023 18:06:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=38153725</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=38153725</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38153725</guid></item><item><title><![CDATA[New comment by jaidhyani in "LLMs can't self-correct in reasoning tasks, DeepMind study finds"]]></title><description><![CDATA[
<p>I am begging people to stop confusing "I was unable to get LLM X to do Y using strategy Z" with "All LLMs are categorically unable to do Y".</p>
]]></description><pubDate>Mon, 09 Oct 2023 20:22:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=37825021</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=37825021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37825021</guid></item><item><title><![CDATA[New comment by jaidhyani in "LLMs can't self-correct in reasoning tasks, DeepMind study finds"]]></title><description><![CDATA[
<p>As the other commenter said, this is incorrect. The input was a sequence of legal moves (not even "real" moves - most of the training data was synthetically generated with "generate legal moves" as the only constraint).<p>Deducing board state from this is extremely non-trivial.</p>
]]></description><pubDate>Mon, 09 Oct 2023 20:21:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=37825011</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=37825011</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37825011</guid></item><item><title><![CDATA[New comment by jaidhyani in "LLMs can't self-correct in reasoning tasks, DeepMind study finds"]]></title><description><![CDATA[
<p>Alternatively, the prior on "this is not possible" is very low because RLHF & Friends have targeted metrics that, inadvertently or not, discourage that outcome.</p>
]]></description><pubDate>Mon, 09 Oct 2023 20:19:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=37824988</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=37824988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37824988</guid></item><item><title><![CDATA[New comment by jaidhyani in "Successful room temperature ambient-pressure magnetic levitation of LK-99"]]></title><description><![CDATA[
<p>Smallpox eradication</p>
]]></description><pubDate>Fri, 04 Aug 2023 05:03:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=36995675</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=36995675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36995675</guid></item><item><title><![CDATA[New comment by jaidhyani in "The ‘Enshittification’ of TikTok: Or how, platforms die"]]></title><description><![CDATA[
<p>Going to be extremely hard to quantify; ransomware peddlers aren't famous for their meticulous public record-keeping. You could sift through all the transactions on the public blockchain and try to classify the ransomware ones, but that's going to be challenging at best, I imagine.<p>Overall I'm skeptical of the claim. It could be true, or partially true, but demonstrating that would probably require some work. Alternatively, someone could demonstrate that the liquidity came from speculators or some other source, which seems potentially less doable, but still not easy.</p>
]]></description><pubDate>Mon, 08 May 2023 17:30:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=35864444</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=35864444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35864444</guid></item><item><title><![CDATA[New comment by jaidhyani in "Are emergent abilities of large language models a mirage?"]]></title><description><![CDATA[
<p>Could have gone with "More Comprehensive Metrics Are All You Need"</p>
]]></description><pubDate>Mon, 01 May 2023 04:32:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=35769112</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=35769112</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35769112</guid></item><item><title><![CDATA[New comment by jaidhyani in "OpenAI’s CEO says the age of giant AI models is already over"]]></title><description><![CDATA[
<p>This is a weird future.</p>
]]></description><pubDate>Thu, 20 Apr 2023 16:07:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=35642179</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=35642179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35642179</guid></item><item><title><![CDATA[New comment by jaidhyani in "OpenAI’s CEO says the age of giant AI models is already over"]]></title><description><![CDATA[
<p>GPT-LikeSubscribeAndRingThatBell</p>
]]></description><pubDate>Tue, 18 Apr 2023 04:17:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=35609902</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=35609902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35609902</guid></item><item><title><![CDATA[New comment by jaidhyani in "What are transformer models and how do they work?"]]></title><description><![CDATA[
<p>This is true in general but not in the use case they presented. If they had explained why a normalized distribution is useful it would have made sense - but they just describe this as pick-the-top-answer next-word predictor, which makes the softmax superfluous.</p>
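<p>To make the point concrete: softmax is strictly monotonic, so if all you do afterward is pick the top answer, it can't change which token wins. A quick sketch:</p>

```python
import math

# Softmax normalizes logits into a probability distribution, but it is
# monotonic, so the argmax of the probabilities equals the argmax of the
# raw logits - which is why it's superfluous in a pure
# pick-the-top-answer presentation.
def softmax(logits):
    m = max(logits)                           # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 5.0, 1.0, 3.5]
probs = softmax(logits)
assert probs.index(max(probs)) == logits.index(max(logits))  # same winner
```
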
]]></description><pubDate>Sun, 16 Apr 2023 17:05:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=35591867</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=35591867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35591867</guid></item><item><title><![CDATA[New comment by jaidhyani in "What are transformer models and how do they work?"]]></title><description><![CDATA[
<p>Prediction happens at the very end (sometimes functionally earlier, but not always) - most of what happens in the model can be thought of as collecting information in vectors-derived-from-token-embeddings, performing operations on those vectors, and then repeating this process a bunch of times until <i>at some point</i> it results in a meaningful token prediction.<p>It's pedagogically unfortunate that the residual stream is in the same space as the token embeddings, because it obscures how the residual stream is used as a kind of general compressed-information conduit through the model, one that attention heads read different information from and write different information to, enabling the eventual prediction task.</p>
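<p>A minimal numpy sketch of that read/write picture (toy shapes and random weights, not any real architecture - the transform is a stand-in for attention/MLP updates):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, seq_len, vocab = 8, 4, 50            # toy dimensions

x = rng.standard_normal((seq_len, d_model))   # the residual stream

# Each "layer" reads the stream, computes some update, and adds it back.
for _ in range(3):
    W = rng.standard_normal((d_model, d_model)) * 0.01  # stand-in for attn/MLP
    x = x + x @ W                             # read, transform, write back

# Only at the very end is the stream turned into a token prediction.
logits = x @ rng.standard_normal((d_model, vocab))
next_token = int(logits[-1].argmax())         # predict from the last position
```
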
]]></description><pubDate>Sun, 16 Apr 2023 16:59:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=35591776</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=35591776</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35591776</guid></item><item><title><![CDATA[New comment by jaidhyani in "What are transformer models and how do they work?"]]></title><description><![CDATA[
<p>It depends on the values of the vectors. (4, 4) + (3, 3) results in a new vector (7, 7) which is further away from both contributing vectors than either one was to each other originally. Additionally, negative coefficients are a thing.</p>
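<p>A quick numeric check of the example above:</p>

```python
import math

# (4, 4) + (3, 3) = (7, 7): the sum lands farther from each addend than
# the addends were from each other.
def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

u, v = (4, 4), (3, 3)
s = (u[0] + v[0], u[1] + v[1])               # (7, 7)

print(round(dist(u, v), 2))   # prints 1.41  (original separation)
print(round(dist(s, u), 2))   # prints 4.24  (sum vs. first addend)
print(round(dist(s, v), 2))   # prints 5.66  (sum vs. second addend)
```
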
]]></description><pubDate>Sun, 16 Apr 2023 16:48:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=35591663</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=35591663</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35591663</guid></item><item><title><![CDATA[New comment by jaidhyani in "What are transformer models and how do they work?"]]></title><description><![CDATA[
<p>The original paper is very good, but I would argue it's not well optimized for pedagogy. Among other things, it targets a very specific application (translation) and in doing so adopts a more complicated architecture than most cutting-edge models actually use (encoder-decoder instead of just one or the other). The writers of the paper probably didn't realize they were writing a foundational document at the time. It's good for understanding how certain conventions developed and it's important historically - but as someone who did read it as an intro to transformers, in retrospect I would have gone with other resources (e.g. "The Illustrated Transformer").</p>
]]></description><pubDate>Sun, 16 Apr 2023 16:44:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=35591625</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=35591625</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35591625</guid></item><item><title><![CDATA[New comment by jaidhyani in "What are transformer models and how do they work?"]]></title><description><![CDATA[
<p>I endorse all of this and will further endorse (probably as a follow-up once one has a basic grasp) "A Mathematical Framework for Transformer Circuits" which builds a lot of really useful ideas for understanding how and why transformers work and how to start getting a grasp on treating them as something other than magical black boxes.<p><a href="https://transformer-circuits.pub/2021/framework/index.html" rel="nofollow">https://transformer-circuits.pub/2021/framework/index.html</a></p>
]]></description><pubDate>Sun, 16 Apr 2023 16:40:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=35591598</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=35591598</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35591598</guid></item><item><title><![CDATA[New comment by jaidhyani in "What are transformer models and how do they work?"]]></title><description><![CDATA[
<p>That's true, but they didn't go into any other applications in this explainer and were presenting it strictly as a next-word-predictor. If they are going to include the final softmax, they should explain why it's useful. The explainer would be improved by being either simpler (skip the softmax) or more comprehensive (present a use case for the softmax); complexity without reason is bad pedagogy.</p>
]]></description><pubDate>Sun, 16 Apr 2023 16:37:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=35591571</link><dc:creator>jaidhyani</dc:creator><comments>https://news.ycombinator.com/item?id=35591571</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35591571</guid></item></channel></rss>