<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: saghm</title><link>https://news.ycombinator.com/user?id=saghm</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 01:09:21 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=saghm" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by saghm in "Bevy game development tutorials and in-depth resources"]]></title><description><![CDATA[
<p>I don't think anyone claimed that Ruby and Rust were the only two languages with those features, just that they're features the two languages have in common.</p>
]]></description><pubDate>Sat, 11 Apr 2026 18:23:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47732818</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47732818</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47732818</guid></item><item><title><![CDATA[New comment by saghm in "Bevy game development tutorials and in-depth resources"]]></title><description><![CDATA[
<p>It's a single data structure that contains your entire game though? The whole point of the ECS is that literally everything uses the same data; it's as if you modeled every object in the world with one struct that has an optional field for every piece of data that could exist. I'm not saying that necessarily makes the tradeoff worthwhile, but calling it just a "single data structure" is a bit reductive.</p>
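<p>A rough sketch of the contrast I mean, in plain Rust (my own illustration; HashMaps stand in for real ECS storage, and none of the names here are Bevy's actual API):</p><pre><code>use std::collections::HashMap;

// The "one struct with an optional field for every piece of
// data that could exist" model:
struct Position { x: f32, y: f32 }
struct Velocity { dx: f32, dy: f32 }
struct Health(u32);

struct GameObject {
    position: Option&lt;Position&gt;,
    velocity: Option&lt;Velocity&gt;,
    health: Option&lt;Health&gt;,
    // ...one field per possible kind of data
}

// The ECS version: an entity is just an ID, and each kind of
// component lives in its own storage, keyed by entity.
type Entity = u32;

struct World {
    positions: HashMap&lt;Entity, Position&gt;,
    velocities: HashMap&lt;Entity, Velocity&gt;,
}

// A "system" only touches the storages it actually needs:
fn movement(world: &amp;mut World) {
    for (entity, pos) in world.positions.iter_mut() {
        if let Some(vel) = world.velocities.get(entity) {
            pos.x += vel.dx;
            pos.y += vel.dy;
        }
    }
}

fn main() {
    let mut world = World {
        positions: HashMap::from([(1, Position { x: 0.0, y: 0.0 })]),
        velocities: HashMap::from([(1, Velocity { dx: 1.0, dy: 2.0 })]),
    };
    movement(&amp;mut world);
}
</code></pre><p>In that sense everything really does live in "one" World, but access is per-component rather than per-object, which is why the dismissal feels reductive to me.</p>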
]]></description><pubDate>Sat, 11 Apr 2026 18:21:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47732801</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47732801</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47732801</guid></item><item><title><![CDATA[New comment by saghm in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>> The title of the article is “The Future of Everything is Lies, I Guess” and the first part is literally complaining about LLMs being bullshit machines, while the author proceeds to tell confabulations (or lies) of his own. Is there not a bit of irony in that?<p>Maybe some, but not that much given the disclaimers I cited above. There's value in a qualitative confidence level for a statement, and I'd argue that this is something that LLMs do not seem to produce in practice without someone explicitly asking for it. The human author's ability to anticipate potential mistakes in their logic and communicate those ahead of time is not equivalent to the type of fabrications that LLMs routinely make.<p>> If you’re a non-expert in a field, I don’t think it’s a good sign if you’re writing a 10 part article about that field’s impact on society and getting basic facts wrong. How can I trust that the conclusions will be any more credible?<p>I don't know why an expert in LLM implementation would be inherently more qualified to analyze the second-order effects of their product than anyone else. There's precedent for people who are "too close" to something having biases that make them less effective at recognizing how tools will get used by non-experts, and society as a whole is largely composed of people who are not experts in LLM implementations. If you wanted to understand what the net effect of everyone having access to LLMs would be, having an understanding of people is probably more important than knowing exactly what an LLM does under the hood.</p>
]]></description><pubDate>Fri, 10 Apr 2026 19:53:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47722859</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47722859</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47722859</guid></item><item><title><![CDATA[New comment by saghm in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>I would argue that most humans would either give the correct answer or just say "I don't know". Some of them might confidently give the wrong answer, but humans will readily refuse to follow instructions in plenty of circumstances where they decide they aren't worthwhile. LLMs don't do this, and I'd argue that the ability to reject premises is fundamental to engaging with things in a truly logical way.</p>
]]></description><pubDate>Fri, 10 Apr 2026 19:37:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47722678</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47722678</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47722678</guid></item><item><title><![CDATA[New comment by saghm in "Open source security at Astral"]]></title><description><![CDATA[
<p>> I was just calling them by their new name, but yes clearly I am not the biggest fan of OpenAI and me invoking their name so soon betrays that.<p>My point is that at least from the standpoint of "Why does this process exist in the way it does?", OpenAI is not their "new name" in any logical sense. If you aren't happy with the process used a year from now, it would be reasonable in my opinion to criticize OpenAI for not making it different somehow. I'd argue that a parent company trying to make substantive process changes in such a short window would be strictly a bad thing though, because it would mean they didn't take the time to fully understand what the process is trying to solve and why it's currently the way it is.<p>I don't really disagree with anything else you're saying about OpenAI here, but I still think it's somewhat disingenuous to name-check them in this context.</p>
]]></description><pubDate>Fri, 10 Apr 2026 19:34:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47722651</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47722651</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47722651</guid></item><item><title><![CDATA[New comment by saghm in "OpenAI backs Illinois bill that would limit when AI labs can be held liable"]]></title><description><![CDATA[
<p>There are several points above:<p>> wasn't OpenAI the company that was formed as a nonprofit to limit the risks of LLMs?<p>>> the whole “rationalist” movement is full of those lying fks<p>>>> Is Sam even a rationalist, or describe his views as rationalist?<p>The relevance is that the question of how and why the company was formed isn't fully answered by talking only about his motivations.</p>
]]></description><pubDate>Fri, 10 Apr 2026 19:19:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47722458</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47722458</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47722458</guid></item><item><title><![CDATA[New comment by saghm in "OpenAI backs Illinois bill that would limit when AI labs can be held liable"]]></title><description><![CDATA[
<p>That depends on your definition of "we". As a society, we can regulate companies and punish the offenders (e.g. don't dump toxic waste into sources of drinking water or you'll get prosecuted). As individuals, there's not much we can do directly. How to translate individual actions into societal action is kind of the fundamental question of civilization, and if there's a uniform solution for how to achieve it, I don't think we've managed to come up with it yet.</p>
]]></description><pubDate>Fri, 10 Apr 2026 13:58:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47718264</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47718264</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47718264</guid></item><item><title><![CDATA[New comment by saghm in "OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths"]]></title><description><![CDATA[
<p>He was not the founder of OpenAI.</p>
]]></description><pubDate>Fri, 10 Apr 2026 13:53:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47718211</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47718211</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47718211</guid></item><item><title><![CDATA[New comment by saghm in "Open source security at Astral"]]></title><description><![CDATA[
<p>That fits what I had assumed (and would expect), but it definitely doesn't hurt to have that confirmed, so thank you!</p>
]]></description><pubDate>Thu, 09 Apr 2026 15:21:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47704921</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47704921</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47704921</guid></item><item><title><![CDATA[New comment by saghm in "Open source security at Astral"]]></title><description><![CDATA[
<p>> Why is it a bunch of mostly unpaid volunteer hackers are putting more effort into supply chain security than OpenAI.<p>Didn't the acquisition only happen a few weeks ago? Wouldn't it be more alarming if OpenAI had gone in and forced them to change their build process? Unless you're claiming that the article is lying about this being a description of what they've already been doing for a while (which seems a bit outlandish without more evidence), it's not clear to me why you're attributing this process to the parent company.<p>Don't get me wrong; there's <i>plenty</i> you can criticize OpenAI over, and I'm not taking a stance on your technical claims, but it seems somewhat disingenuous to phrase it like this.</p>
]]></description><pubDate>Thu, 09 Apr 2026 10:27:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47701759</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47701759</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47701759</guid></item><item><title><![CDATA[New comment by saghm in "Who is Satoshi Nakamoto? My quest to unmask Bitcoin's creator"]]></title><description><![CDATA[
<p>> it's a coincidence that two different cryptosystems might use asymmetric ciphers<p>Not only that, but the exceedingly niche C++ language and the MIT license!</p>
]]></description><pubDate>Thu, 09 Apr 2026 10:16:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47701635</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47701635</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47701635</guid></item><item><title><![CDATA[New comment by saghm in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>If you change it from asking a question to giving an instruction, how many humans do you know that have trouble saying no to things that aren't reasonable? I'd argue that pretty much every human will refuse to do most things you might instruct them to do, whereas an LLM will happily attempt most things you ask it to do, regardless of whether it's capable of succeeding, and it's up to you to figure out whether it actually did it right or not. There are tasks where this is extremely useful, but they're ones that are extremely low risk and can easily be audited upon completion. This isn't anywhere near the level of what a human is capable of.</p>
]]></description><pubDate>Thu, 09 Apr 2026 02:23:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698635</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47698635</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698635</guid></item><item><title><![CDATA[New comment by saghm in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>The key capability that humans have that I've yet to see in an LLM is the ability to recognize when they would not be capable of doing a task well and to refuse to do it poorly instead. The only times I've ever seen LLMs give up on a problem are when the prompt is explicitly crafted to elicit that kind of response when necessary, or after very long back-and-forth exchanges where they get repeated feedback about unsatisfactory results. I think this has pretty dire implications for the consequences of deploying them in any scenario where failure has significant risk or the output can't be immediately audited for correctness.</p>
]]></description><pubDate>Thu, 09 Apr 2026 02:16:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698604</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47698604</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698604</guid></item><item><title><![CDATA[New comment by saghm in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>> Just like we have machines that can do "math", and they do so artificially.<p>Nobody calls calculators "artificial mathematicians", though; we refer to them by a unique word that defines what they can and can't do in a far less fanciful and ambiguous way.</p>
]]></description><pubDate>Thu, 09 Apr 2026 02:02:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698513</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47698513</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698513</guid></item><item><title><![CDATA[New comment by saghm in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>> Now, suddenly, this name has been broadcast to every human in the world more or less. To them, it's a new term, and it obviously means something human mind-like. But to people who work on AI, that's not generally what it means. (Which isn't to say that some of them don't think we're near to achieving that; they just use other terms like "AGI" for that goal). So the name, which has a long history, is deceptive to people who aren't familiar with computer science.<p>I think it's even worse than that: people were familiar with the term already, but from science fiction, where it referred to actual human-level intelligence. It's similar to the "hoverboard" thing from a while back, except this time the stakes are profoundly higher and it requires far more technical knowledge to be able to see that it is in fact touching the ground.</p>
]]></description><pubDate>Thu, 09 Apr 2026 01:56:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698482</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47698482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698482</guid></item><item><title><![CDATA[New comment by saghm in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>I'm not sure if you misread the statement you quoted or I'm misreading yours, but it doesn't sound like you're really disagreeing with their point. Did you miss the "un" in "unable", or am I misunderstanding you as also saying that you don't consider them to be creative?</p>
]]></description><pubDate>Thu, 09 Apr 2026 01:52:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698450</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47698450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698450</guid></item><item><title><![CDATA[New comment by saghm in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>>  LLMs with harnesses are clearly capable of engaging with logical problems that only need text.<p>> LLMs are clearly unable to propose new, creative solutions for problems it has never seen before.<p>How do you reconcile that with the article the author linked? It's not a novel problem, and it's only text: <a href="https://medium.com/the-generator/one-word-answers-expose-ai-flaws-0ea96b271702" rel="nofollow">https://medium.com/the-generator/one-word-answers-expose-ai-...</a><p>I guess it's a form of engagement to give a wildly wrong answer, but I'm not convinced that the extra nuance you've introduced is really all that nuanced either.</p>
]]></description><pubDate>Thu, 09 Apr 2026 01:50:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698442</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47698442</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698442</guid></item><item><title><![CDATA[New comment by saghm in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>This sounds almost identical to the article that's literally linked at the end of the paragraph that the parent comment quoted: <a href="https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf" rel="nofollow">https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson...</a><p>I don't think anything you're saying here is in disagreement with the points they're making.</p>
]]></description><pubDate>Thu, 09 Apr 2026 01:47:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698418</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47698418</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698418</guid></item><item><title><![CDATA[New comment by saghm in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>Literally the paragraph right before the one you quote is this:<p>> I am generally outside the ML field, but I do talk with people in the field. One of the things they tell me is that we don’t really know why transformer models have been so successful, or how to make them better. This is my summary of discussions-over-drinks; take it with many grains of salt. I am certain that People in The Comments will drop a gazillion papers to tell you why this is wrong.<p>As I understand it, this article is basically a conglomeration of several attempts the author has made over the past decade or so at an article considering the impacts of AI on society. In their own words:<p>> Some of these ideas felt prescient in the 2010s and are now obvious. Others may be more novel, or not yet widely-heard. Some predictions will pan out, but others are wild speculation. I hope that regardless of your background or feelings on the current generation of ML systems, you find something interesting to think about.<p>As for the "Bitter Lesson" part, they pretty much directly said that it <i>wasn't</i> the Bitter Lesson exactly, just that it might be a variant of it. Honestly, it felt more like a way of throwing in a reference to something that might also provoke thought, which was done throughout the piece (which, again, is the entire point).<p>It's totally valid to say "this article didn't provoke much thought for me". I'm just a bit confused about why you think a lack of specific knowledge in a domain they literally state they're not an expert in would be disqualifying for that purpose.</p>
]]></description><pubDate>Thu, 09 Apr 2026 01:44:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698405</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47698405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698405</guid></item><item><title><![CDATA[New comment by saghm in "Move Detroit"]]></title><description><![CDATA[
<p>It's also moving through time at a rate of one second per second (but similarly not relative to the surrounding cities).</p>
]]></description><pubDate>Thu, 09 Apr 2026 00:56:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698091</link><dc:creator>saghm</dc:creator><comments>https://news.ycombinator.com/item?id=47698091</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698091</guid></item></channel></rss>