<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: svara</title><link>https://news.ycombinator.com/user?id=svara</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 06:00:21 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=svara" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by svara in "Renewables reached nearly 50% of global electricity capacity last year"]]></title><description><![CDATA[
<p>This is correct in the sense that, if you were to build a zero-emissions energy system from scratch with today's technology, you'd conclude that you'd eventually have to do this.<p>But in much of the world, setting up PV is economically sound simply because it <i>displaces</i> a certain number of kWh generated over the course of a year from other sources that are more polluting and more expensive.<p>In this regime, the dynamics of production over time don't matter yet.<p>At some point, when renewable generation reaches very high penetration, building more becomes uneconomical, and displacing the remaining power sources then requires overpaying (ignoring externalities).<p>However, that assumes no technological change along the way, which is a whole separate topic.</p>
]]></description><pubDate>Thu, 02 Apr 2026 19:59:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47619436</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47619436</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47619436</guid></item><item><title><![CDATA[New comment by svara in "AI overly affirms users asking for personal advice"]]></title><description><![CDATA[
<p>The issue is it will <i>follow your instructions</i>. It's sycophancy one step removed.</p>
]]></description><pubDate>Sun, 29 Mar 2026 11:51:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47562346</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47562346</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47562346</guid></item><item><title><![CDATA[New comment by svara in "AI overly affirms users asking for personal advice"]]></title><description><![CDATA[
<p>Yeah, and if you ask it to be critical specifically to get a different perspective, or just to avoid this bias, it'll go over the top in the opposite direction.<p>This is imo currently the top chatbot failure mode. The insidious thing is that it often feels good to read these things. Factual accuracy, by contrast, has gotten very good.<p>I think there's a deeper philosophical dimension to this though, in that it relates to alignment.<p>There are situations where, in the grand scheme of things, the right thing for the chatbot to do would be to push back hard, to be harsh and dismissive. But is it then really aligned with the human? Which human?</p>
]]></description><pubDate>Sat, 28 Mar 2026 15:19:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47555379</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47555379</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47555379</guid></item><item><title><![CDATA[New comment by svara in "Epoch confirms GPT5.4 Pro solved a frontier math open problem"]]></title><description><![CDATA[
<p>I think you're misreading me. My point isn't that you can't in principle state the optimization problem, but that it's much easier in some domains than in others, that this tracks with how AI has been progressing, and that progress in one area doesn't automatically mean progress in another, because current AI cost functions are less general than the cost functions that humans are working with in the world.</p>
]]></description><pubDate>Tue, 24 Mar 2026 10:27:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47500666</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47500666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47500666</guid></item><item><title><![CDATA[New comment by svara in "Epoch confirms GPT5.4 Pro solved a frontier math open problem"]]></title><description><![CDATA[
<p>The capabilities of AI are determined by the cost function it's trained on.<p>That's a self-evident thing to say, but it's worth repeating, because there's this odd implicit notion sometimes that you train on some cost function, and then, poof, "intelligence", as if that were a mysterious <i>other</i> thing. Really, intelligence <i>is minimizing a complex cost function</i>. The leadership of the big AI companies sometimes implies something else when they talk of "generalization". But there is no mechanism to generate a model with capabilities beyond what is useful to minimize a specific cost function.<p>You can view the progress of AI as progress in coming up with smarter cost functions: cleaner, larger datasets, pretraining, RLHF, RLVR.<p>Notably, exciting early progress in AI came in places where simple cost functions generate rich behavior (chess, Go).<p>The recent impressive advances in AI are similar. Mathematics and coding are extremely structured, and properties of a coding or maths result can be verified using automatic techniques. You can set up an RLVR "game" for maths and coding. It thus seems very likely to me that this is where the big advances will come from in the short term.<p>However, it does not follow that maths ability on par with expert mathematicians will lead to superiority over human cognitive ability broadly. A lot of what humans do has <i>social rewards</i> which are not verifiable, or involves genuine Knightian uncertainty, where a reward function cannot be built without actually operating independently in the world.<p>To be clear, none of the above is meant to talk down past or future progress in AI; I'm just trying to be more nuanced about where I believe progress can be fast and where it's bound to be slower.</p>
]]></description><pubDate>Tue, 24 Mar 2026 09:05:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47500130</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47500130</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47500130</guid></item><item><title><![CDATA[New comment by svara in "A sufficiently detailed spec is code"]]></title><description><![CDATA[
<p>The vibe coding maximalist position can be stated in information-theoretic terms: that there exists a decoder that can decode the space of useful programs from a much smaller prompt space.<p>The compression ratio is the vibe coding gain.<p>I think that way of phrasing it makes it easier to think about the boundaries of vibe coding.<p>"A class that represents concept (A), using data structure (B) and algorithms (C) for methods (D), in programming language (E)."<p>That's decodable, at least to a narrow enough distribution.<p>"A commercially successful team communication app built around the concept of channels, like in IRC."<p>Without already knowing Slack, that's not decodable.<p>Thinking about what is <i>missing</i> is very helpful. Obviously: the business strategic positioning, non-technical stakeholder inputs, UX design.<p>But I think it goes beyond that: in sufficiently complex apps, even purely technical "software engineering" decisions are to some degree learnt from experiment.<p>This also makes it clearer how to use AI coding effectively:<p>* Prompt in increments of components that can be encoded in a short prompt.<p>* Where possible, add pre-existing information to the prompt (documentation, prior attempts at an implementation).</p>
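A toy sketch of this framing, using Python's zlib preset dictionary (zdict) as a stand-in for the decoder's prior knowledge: shared context lets the same "program" be reconstructed from far fewer bits, which is the sense in which "already knowing Slack" makes a short prompt decodable.

```python
import zlib

def deflate(data: bytes, prior: bytes = b"") -> bytes:
    """Compress `data`, optionally assuming the decoder already shares `prior`."""
    c = zlib.compressobj(zdict=prior) if prior else zlib.compressobj()
    return c.compress(data) + c.flush()

def inflate(blob: bytes, prior: bytes = b"") -> bytes:
    """Decompress; the decoder must supply the same shared `prior`."""
    d = zlib.decompressobj(zdict=prior) if prior else zlib.decompressobj()
    return d.decompress(blob)

# The "useful program" we want the decoder to reproduce.
program = (
    b"class Channel:\n"
    b"    def __init__(self, name):\n"
    b"        self.name, self.messages = name, []\n"
    b"    def post(self, user, text):\n"
    b"        self.messages.append((user, text))\n"
)

cold = deflate(program)                 # decoder shares no prior context
warm = deflate(program, prior=program)  # decoder already "knows" the program

assert inflate(cold) == program
assert inflate(warm, prior=program) == program
assert len(warm) < len(cold)  # shared context -> a much shorter "prompt"
```

The ratio `len(cold) / len(warm)` is the compression gain from shared context; the analogy is loose, but it makes concrete why a prompt can only be short when the decoder's prior already contains most of the answer.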
]]></description><pubDate>Thu, 19 Mar 2026 08:07:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47436293</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47436293</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47436293</guid></item><item><title><![CDATA[New comment by svara in "Kagi Translate now supports LinkedIn Speak as an output language"]]></title><description><![CDATA[
<p>The funny thing about this is that even if the output is bad, it's actually good.</p>
]]></description><pubDate>Tue, 17 Mar 2026 06:11:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47409224</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47409224</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47409224</guid></item><item><title><![CDATA[New comment by svara in "Ask HN: How is AI-assisted coding going for you professionally?"]]></title><description><![CDATA[
<p>Could you say more on how the tasks where it works vs. doesn't work differ? Just the fact that it's both small and greenfield in the one case and presumably neither in the other?</p>
]]></description><pubDate>Sun, 15 Mar 2026 18:48:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47390517</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47390517</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47390517</guid></item><item><title><![CDATA[Ask HN: How is AI-assisted coding going for you professionally?]]></title><description><![CDATA[
<p>Comment sections on AI threads tend to split into "we're all cooked" and "AI is useless." I'd like to cut through the noise and learn what's actually working and what isn't, from concrete experience.<p>If you've recently used AI tools for professional coding work, tell us about it.<p>What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?<p>Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.<p>The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47388646">https://news.ycombinator.com/item?id=47388646</a></p>
<p>Points: 434</p>
<p># Comments: 616</p>
]]></description><pubDate>Sun, 15 Mar 2026 15:58:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47388646</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47388646</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47388646</guid></item><item><title><![CDATA[New comment by svara in "Elon Musk pushes out more xAI founders as AI coding effort falters"]]></title><description><![CDATA[
<p>Hey, thanks, that was quite interesting!<p>I'd be curious to hear your thoughts on how the "fixer", who sounds rather ineffective as an executive, came into this position in what sounds like an otherwise rather effective organization.<p>I've personally been thinking quite a bit lately about what makes organizations work or not, and your story gives me a glimpse into a kind of organization I've never seen from the inside myself.</p>
]]></description><pubDate>Sat, 14 Mar 2026 07:56:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47374348</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47374348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47374348</guid></item><item><title><![CDATA[New comment by svara in "The engine of Germany's wealth is blocking its future"]]></title><description><![CDATA[
<p>How large a demand for cars does the Chinese government have, do you think?</p>
]]></description><pubDate>Mon, 09 Mar 2026 18:15:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47313059</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47313059</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47313059</guid></item><item><title><![CDATA[New comment by svara in "The engine of Germany's wealth is blocking its future"]]></title><description><![CDATA[
<p>> I really don't see a solid economic future for Germany when enough other countries implement more progressive economic policies.<p>People do change their minds when the pain becomes too intense to ignore, but that is what it takes.</p>
]]></description><pubDate>Mon, 09 Mar 2026 17:59:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47312780</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47312780</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47312780</guid></item><item><title><![CDATA[New comment by svara in "Labor market impacts of AI: A new measure and early evidence"]]></title><description><![CDATA[
<p>Yes, throughput is determined by the bottleneck and above a certain organization size, the bottleneck often is coordination costs.</p>
]]></description><pubDate>Fri, 06 Mar 2026 12:52:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47274352</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47274352</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47274352</guid></item><item><title><![CDATA[New comment by svara in "Labor market impacts of AI: A new measure and early evidence"]]></title><description><![CDATA[
<p>Yes, it's the lump of labor fallacy.<p>That doesn't exclude the possibility of short-term distributional effects, though.</p>
]]></description><pubDate>Fri, 06 Mar 2026 09:43:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47272922</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47272922</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47272922</guid></item><item><title><![CDATA[New comment by svara in "Lean 4: How the theorem prover works and why it's the new competitive edge in AI"]]></title><description><![CDATA[
<p>Can you give some examples of this? Maybe have something online? I would love to learn more about how to do proof driven AI assisted development.</p>
]]></description><pubDate>Sat, 21 Feb 2026 14:01:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47100931</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47100931</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47100931</guid></item><item><title><![CDATA[New comment by svara in "Gemini 3.1 Pro"]]></title><description><![CDATA[
<p>My understanding is that all recent gains are from post training and no one (publicly) knows how much scaling pretraining will still help at this point.<p>Happy to learn more about this if anyone has more information.</p>
]]></description><pubDate>Thu, 19 Feb 2026 19:57:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47078354</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47078354</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47078354</guid></item><item><title><![CDATA[New comment by svara in "Russia's economy has entered the death zone"]]></title><description><![CDATA[
<p>Do you keep a collection of these? What for?</p>
]]></description><pubDate>Tue, 17 Feb 2026 19:41:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47052106</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47052106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47052106</guid></item><item><title><![CDATA[New comment by svara in "Semantic ablation: Why AI writing is generic and boring"]]></title><description><![CDATA[
<p>I think that mostly depends on how good a writer you are. A lot of people aren't, and the AI legitimately writes better. As in, the prose is easier to understand, free of obvious errors or ambiguities.<p>But then, the writing is also never great. I've tried a couple of times to get it to write in the style of a famous author, sometimes pasting in some example text to model the output on, but it never sounds right.</p>
]]></description><pubDate>Tue, 17 Feb 2026 17:35:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47050306</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47050306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47050306</guid></item><item><title><![CDATA[New comment by svara in "I guess I kinda get why people hate AI"]]></title><description><![CDATA[
<p>Yes, it's been odd to observe the parallels with the web3 craze.<p>You'd ask people what their project was for and get a response that made sense to no one outside that bubble, and if you pressed, people would get mad.<p>The bizarre thing is that this time around the tools do have a bunch of real utility, but it's become almost impossible to discuss online how to use the tech properly, because that would require acknowledging some limitations.</p>
]]></description><pubDate>Mon, 16 Feb 2026 19:00:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47038827</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47038827</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47038827</guid></item><item><title><![CDATA[New comment by svara in "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"]]></title><description><![CDATA[
<p>Opus 4.6:<p>Walk! At 50 meters, you'll get there in under a minute on foot. Driving such a short distance wastes fuel, and you'd spend more time starting the car and parking than actually traveling. Plus, you'll need to be at the car wash anyway to pick up your car once it's done.</p>
]]></description><pubDate>Mon, 16 Feb 2026 07:21:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47031928</link><dc:creator>svara</dc:creator><comments>https://news.ycombinator.com/item?id=47031928</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47031928</guid></item></channel></rss>