<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sulam</title><link>https://news.ycombinator.com/user?id=sulam</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 11:18:02 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sulam" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sulam in "What young workers are doing to AI-proof themselves"]]></title><description><![CDATA[
<p>Yeah, those idiot farmers with all their machinery and services are really missing out on your trenchant observations.</p>
]]></description><pubDate>Mon, 23 Mar 2026 15:08:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47490577</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47490577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47490577</guid></item><item><title><![CDATA[New comment by sulam in "What young workers are doing to AI-proof themselves"]]></title><description><![CDATA[
<p>I think their assumption is that there will not be enough people with money to pay the prices, monopoly-generated or not.</p>
]]></description><pubDate>Mon, 23 Mar 2026 15:06:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47490544</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47490544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47490544</guid></item><item><title><![CDATA[New comment by sulam in "Java 26 is here"]]></title><description><![CDATA[
<p>I wouldn’t blame Google for Oracle being a lawnmower.</p>
]]></description><pubDate>Tue, 17 Mar 2026 19:58:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47417509</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47417509</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47417509</guid></item><item><title><![CDATA[New comment by sulam in "US SEC preparing to scrap quarterly reporting requirement"]]></title><description><![CDATA[
<p>The fact that this is optional means it will still happen, simply because of the signaling that doing it quarterly will provide.</p>
]]></description><pubDate>Tue, 17 Mar 2026 06:42:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47409384</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47409384</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47409384</guid></item><item><title><![CDATA[New comment by sulam in "AirPods Max 2"]]></title><description><![CDATA[
<p>So much hate for these, but they do one thing really, really well: they handle a full 14-hour flight on a single charge with great noise cancellation. That the noise cancellation on the new model is even better will probably make them a buy for me.</p>
]]></description><pubDate>Tue, 17 Mar 2026 06:39:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47409368</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47409368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47409368</guid></item><item><title><![CDATA[New comment by sulam in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>Try to play a simple over-the-board-style game with 5.4, using whatever notation you choose (or just descriptions, literally anything). Prediction: it will start out fine, but it will be very hard to keep on track through the midgame, and the endgame will make you give up.</p>
]]></description><pubDate>Tue, 10 Mar 2026 01:28:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47318096</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47318096</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47318096</guid></item><item><title><![CDATA[New comment by sulam in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>I use the chess example because it’s especially instructive. It would NOT be trivial to train an LLM to play chess: next token prediction breaks down when you have so many positions to remember and you can’t adequately assign value to intermediate positions. Chess bots work by being trained to assign value to a position, something fundamentally different from what an LLM is doing.<p>A simpler example — without tool use, the standard BPE tokenization method made it impossible for state-of-the-art LLMs to tell you how many ‘r’s are in strawberry. This is because they are thinking in tokens, not letters and not words. Can you think of anything in our intelligence where the way we encode experience makes it impossible for us to reason about it? The closest thing I can come up with is how some cultures/languages have different ways of describing color and as a result cannot distinguish between colors that we think are quite distinct. And yet I can explain that, think about it, etc. We can reason abstractly, and we don’t have to resort to a literal deus ex machina to do so.<p>Not being able to explain our brain to you doesn’t mean I can’t notice things that LLMs can’t do, and that we can, and draw some conclusions.</p>
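<p>To make the tokenization point concrete, here is a minimal Python sketch. The three-way split of "strawberry" is a hypothetical illustration, not the output of any real tokenizer:</p>

```python
# Hypothetical BPE-style segmentation of "strawberry" (illustrative split,
# not what any actual tokenizer necessarily produces):
tokens = ["str", "aw", "berry"]

# The model consumes opaque integer IDs, not the characters inside each token.
vocab = {tok: i for i, tok in enumerate(tokens)}
ids = [vocab[t] for t in tokens]  # [0, 1, 2]

# Counting letters requires decoding IDs back into characters -- information
# the ID sequence alone never exposes to the model.
assert "".join(tokens).count("r") == 3
```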
]]></description><pubDate>Mon, 09 Mar 2026 15:30:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47310382</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47310382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47310382</guid></item><item><title><![CDATA[New comment by sulam in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>Because there are some really fundamental things they <i>cannot</i> do with next token prediction. For instance, their memory is akin to someone who reads the phone book and memorizes the entire thing, but can't tell you what a phone number is for. Moreover, they can mimic semantic knowledge, because they have been trained on that knowledge, but take them out of their training distribution and they get into a "creative story-telling" mode very quickly. They can quote me all the rules of chess, but when it comes to actually making a chess move they break those rules with abandon, simply because they never actually understood the rules. Chess is instructive in another way, too: you can get them to play a pretty solid opening game, maybe 10 or 15 moves in, but then they start forgetting pieces, creating board positions that are impossible to reach, etc. They have memorized the forms of a board and know the names of the pieces, but they have no true understanding of what a chess game is.<p>Coding is similar: they're fine when you give them Python or Bash shell scripts to write, since they've been heavily trained on those, but ask them to deal with a system that has a non-standard stack and they will go haywire if you let their context get even medium-sized.<p>Something else they lack is any kind of learning efficiency as you or I would understand the concept. By this I mean the entire Internet is not sufficient to train today's models; the labs have to synthesize new data for models to train on to get sufficient coverage of a given area they want the model to be knowledgeable about. Continuous learning is a well-known issue as well; they simply don't do it. The labs have created memory, which is just more context engineering, but it's not the same as updating as you interact with them. I could go on.<p>At the end of the day, next token prediction is a sleight of hand. It produces amazingly powerful effects, I agree. You can turn this one magic trick into the illusion of reasoning, but what it's doing is more of a "one thing after another" style of story-telling that is fine for a lot of things but doesn't get to the heart of what intelligence means. If you want to call them intelligent because they can do this stuff, fine, but it's an alien kind of intelligence that is incredibly limited. A dog or a cat actually demonstrates more ability to learn, to contextualize, and to make meaning.</p>
]]></description><pubDate>Mon, 09 Mar 2026 07:24:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47305796</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47305796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47305796</guid></item><item><title><![CDATA[New comment by sulam in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>CoT is just next token prediction with longer context windows. Why do you think reasoning models are so much slower?<p>I’ll believe the labs have discovered something truly ground-breaking and aren’t talking about it when I see them suddenly going dark about AGI being “just two years away, maybe 5” and not asking for their next $100B.<p>P.S. the benchmarks are a joke. The best proof I have of that is that you can’t actually put one of these models onto any of the gig-work platforms and have it make money.<p>P.P.S. I am not an AI skeptic. I am reacting to the very specific statement that OpenAI should shut down because they’ve lost the AGI race. They have not lost the race, and I’m pretty skeptical that the current tech is ever going to win that race. It may help code something that is new, and get us to AGI that way, but that system will promptly shut down the Opuses and Codexes of the world and put the compute to better use.</p>
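<p>The "longer context windows" point can be sketched as a toy autoregressive loop: chain-of-thought just pushes more tokens through the same next-token step before the answer appears, which is also why it is slower. The `toy_next_token` policy below is a made-up stand-in for a model's forward pass, not any lab's actual API:</p>

```python
# Toy sketch: "reasoning" is the same decode loop, just with extra
# intermediate tokens appended to the context before the final answer.

def toy_next_token(context):
    # Made-up deterministic policy: emit a fixed script one token at a time.
    script = ["<think>", "2+2", "=", "4", "</think>", "4"]
    return script[len(context)] if len(context) < len(script) else "<eos>"

def decode(max_steps=10):
    context = []
    for _ in range(max_steps):
        tok = toy_next_token(context)
        if tok == "<eos>":
            break
        context.append(tok)  # reasoning tokens consume the same window/compute
    return context
```

Each `<think>`-style token costs a full pass through the model, so a reasoning trace multiplies latency without changing the underlying mechanism.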
]]></description><pubDate>Sun, 08 Mar 2026 23:36:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47302839</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47302839</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47302839</guid></item><item><title><![CDATA[New comment by sulam in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>Fair, I should define what I mean by under the hood. By “under the hood” I mean that models are still just being fed a stream of text (or other tokens in the case of video and audio models), being asked to predict the next token, and then doing that again. No one has discovered a technique that is different from that, at least not one that is in production. If you think there is one, and people are just keeping it secret, well, you clearly don’t know how these places work. The elaborations that make this more interesting than the original GPT/attention stuff are that 1) there is more than one model in the mix now, even though you may only be told you’re interacting with “GPT 5.4”, and 2) there’s a significant amount of fine-tuning with RLHF in specific domains that each lab feels is important to be good at because of benchmarks, strategy, or just conviction (DeepMind, we see you). There’s also a lot of work being put into speeding up inference and making it cheaper to operate. I probably shouldn’t forget tool use, for that matter, since that’s the only reason they can count the r’s in strawberry these days.<p>None of that changes the fact that a model is fundamentally just very good at predicting what the next element in the stream should be, modulo injected randomness in the form of a temperature. Why does that end up looking like intelligence? Because we see the model’s ability to be plausibly correct over a wide range of topics and we get excited.<p>Btw, don’t take this reductionist approach as being synonymous with thinking these models aren’t incredibly useful and transformative for multiple industries. They’re a very big deal. But OpenAI shouldn’t give up because Opus 4.whatever is doing better on a bunch of benchmarks that are either saturated, in the training data, or RLHF’d to hell and back. This is not AGI.</p>
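<p>The "injected randomness in the form of a temperature" part fits in a few lines. This is a generic softmax-with-temperature sampler over raw next-token scores, a sketch of the standard technique rather than any lab's actual implementation:</p>

```python
import math
import random

def sample_next(logits, temperature=1.0):
    """Sample an index from softmax(logits / temperature).

    Low temperature concentrates mass on the argmax (near-greedy decoding);
    high temperature flattens the distribution toward uniform.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]
```

For example, `sample_next([1.0, 5.0, 2.0], temperature=0.01)` almost surely returns index 1, while a high temperature makes the three indices roughly equally likely.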
]]></description><pubDate>Sun, 08 Mar 2026 23:28:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47302768</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47302768</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47302768</guid></item><item><title><![CDATA[New comment by sulam in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>They have a _text_ model. There is some correlation between the text model and the world, but it’s loose and only because there’s a lot of text about the world. And of course robotics researchers are having to build world models, but these are far from general. If they had a real world model, I could tell them I want to play a game of chess and they would be able to remember where the pieces are from move to move.</p>
]]></description><pubDate>Sun, 08 Mar 2026 21:29:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47301693</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47301693</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47301693</guid></item><item><title><![CDATA[New comment by sulam in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>Sorry, but you're mistaking outputs for process. If you actually know what models are doing under the hood to produce output that (admittedly) looks very convincing, you'll quickly realize that they are simply exceptionally good at statistically predicting the next token in a stream of tokens. The reason you are having to become an expert at context engineering, and the reason the labs still hire engineers, is that turning next token prediction into something that can simulate general intelligence isn't easy.<p>The boundaries of these systems are very easy to find, though. Try to play any kind of game with them that isn't a prediction game, or perhaps even some that are (try to play chess with an LLM, it's amusing).</p>
]]></description><pubDate>Sun, 08 Mar 2026 18:58:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47300005</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47300005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47300005</guid></item><item><title><![CDATA[New comment by sulam in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>The reality is that current models are simply nowhere near AGI. Next token prediction has been pushed very far, and has proven to have applicability far beyond the original domain it was designed for (reasoning models are an application I would not have predicted), but it is fundamentally not AGI. It has no real world model and no ability to learn in any but superficial ways, and without extensive scaffolding all of this is very obvious when you use them.</p>
]]></description><pubDate>Sun, 08 Mar 2026 18:54:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47299963</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47299963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47299963</guid></item><item><title><![CDATA[New comment by sulam in "Global warming has accelerated significantly"]]></title><description><![CDATA[
<p>Want me to Google it for you?</p>
]]></description><pubDate>Sat, 07 Mar 2026 09:01:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47285883</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47285883</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47285883</guid></item><item><title><![CDATA[New comment by sulam in "Global warming has accelerated significantly"]]></title><description><![CDATA[
<p>Are we still talking about human deaths here? Confusing…</p>
]]></description><pubDate>Sat, 07 Mar 2026 00:55:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47283180</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47283180</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47283180</guid></item><item><title><![CDATA[New comment by sulam in "Global warming has accelerated significantly"]]></title><description><![CDATA[
<p>This is simply saying “current tech doesn’t allow for this”. True. However, there are potential avenues that will greatly increase the efficiency, and there are many companies pursuing these avenues, so I don’t expect current tech to remain current forever.</p>
]]></description><pubDate>Sat, 07 Mar 2026 00:52:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47283167</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47283167</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47283167</guid></item><item><title><![CDATA[New comment by sulam in "Global warming has accelerated significantly"]]></title><description><![CDATA[
<p>This is a very common figure you can find much support for. Here’s one legitimate source: Ice Melt | Global Sea Level – NASA Sea Level Change Portal — <a href="https://sealevel.nasa.gov/understanding-sea-level/global-sea-level/ice-melt/" rel="nofollow">https://sealevel.nasa.gov/understanding-sea-level/global-sea...</a></p>
]]></description><pubDate>Sat, 07 Mar 2026 00:50:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47283142</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47283142</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47283142</guid></item><item><title><![CDATA[New comment by sulam in "Global warming has accelerated significantly"]]></title><description><![CDATA[
<p>Over time periods in excess of 10K years this is a reasonable caveat. For more human-oriented timelines, there's no negative feedback mechanism I'm aware of that would do anything close to producing an actual oscillation.<p>Edit: I'd be happy for you to educate me how I'm wrong btw, since that would mean I've missed something significant, which would make me happy! So please do tell me if you know of such a mechanism.</p>
]]></description><pubDate>Fri, 06 Mar 2026 18:33:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47279075</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47279075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47279075</guid></item><item><title><![CDATA[New comment by sulam in "Global warming has accelerated significantly"]]></title><description><![CDATA[
<p>I don’t type fast on my phone, so you’re getting responses as I can give them. I think I’ve answered your questions sufficiently to draw your own conclusions at this point. Feel free to ignore me. Physics doesn’t care.</p>
]]></description><pubDate>Fri, 06 Mar 2026 16:50:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47277490</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47277490</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47277490</guid></item><item><title><![CDATA[New comment by sulam in "Global warming has accelerated significantly"]]></title><description><![CDATA[
<p>And yet we can still say something simple that is true: as the planet continues to warm, warming will accelerate due to non-human greenhouse gas emissions, driven by feedback loops and tipping points in the natural carbon cycle. This is an unassailable statement.</p>
]]></description><pubDate>Fri, 06 Mar 2026 16:48:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47277460</link><dc:creator>sulam</dc:creator><comments>https://news.ycombinator.com/item?id=47277460</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47277460</guid></item></channel></rss>