<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: MadxX79</title><link>https://news.ycombinator.com/user?id=MadxX79</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 16 May 2026 11:06:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=MadxX79" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by MadxX79 in "The sigmoids won't save you"]]></title><description><![CDATA[
<p>Yeah, I use them all the time. I just don't see any good argument that it's anything other than statistical pattern matching plus some sort of logic encoded in language.
My overfitted LLM obviously didn't arrive at Harry Potter the same way JK Rowling did, so the amount of time she spent writing it is completely irrelevant to any discussion about whether or not the LLM should be able to reproduce it. It doesn't matter for discussions of AGI whether it took her an hour or a decade to write it: the model has seen the result, so it can reproduce it.</p>
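<p>To be concrete about what "overfitted" means here, a toy sketch of my own (made-up corpus snippet, exaggerated context length, not how any real lab trains): push memorization far enough and the model degenerates into a verbatim lookup table.</p><pre><code># Toy illustration of memorization by overfitting: with a long enough
# context, a character n-gram "model" is just a verbatim lookup table.
from collections import defaultdict

text = ("Mr. and Mrs. Dursley, of number four, Privet Drive, "
        "were proud to say that they were perfectly normal.")
N = 16  # long context, so each context maps to exactly one next char

model = defaultdict(str)
for i in range(len(text) - N):
    model[text[i:i + N]] = text[i + N]  # memorize next char per context

# "Generate": seed with the opening and the model replays its training text.
out = text[:N]
while (nxt := model[out[-N:]]):
    out += nxt
print(out == text)  # True: perfect reproduction, zero understanding
</code></pre>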
]]></description><pubDate>Sat, 16 May 2026 09:38:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=48158551</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=48158551</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48158551</guid></item><item><title><![CDATA[New comment by MadxX79 in "The sigmoids won't save you"]]></title><description><![CDATA[
<p>Yeah, what about them? As far as I read it, the tasks are fixed. The AI companies should know the tasks by now and will have overfitted their models on the tests, in the same way I'm implying I overfitted my model to reproduce Harry Potter.</p>
]]></description><pubDate>Sat, 16 May 2026 06:05:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=48157266</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=48157266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48157266</guid></item><item><title><![CDATA[New comment by MadxX79 in "The sigmoids won't save you"]]></title><description><![CDATA[
<p>I don't know why people are so impressed by 8h.<p>I trained an LLM to write the whole Harry Potter series, and that took JK Rowling like 17 years.<p>For my next point on the graph, I'll train the LLM to write the Bible, something that took humans >1500 years.</p>
]]></description><pubDate>Fri, 15 May 2026 17:35:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=48151477</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=48151477</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48151477</guid></item><item><title><![CDATA[New comment by MadxX79 in "I moved my digital stack to Europe"]]></title><description><![CDATA[
<p>Karim Khan at the International Criminal Court would like a word about that.</p>
]]></description><pubDate>Wed, 13 May 2026 17:42:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=48125036</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=48125036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48125036</guid></item><item><title><![CDATA[New comment by MadxX79 in "Mythos Finds a Curl Vulnerability"]]></title><description><![CDATA[
<p>Google is the leader, and yet they really don't want AI to be a success; for them it only comes with a risk of disruption. They probably don't even really believe it's going to be that big of a deal. They are only in that game to hedge: sure, they will have wasted a trillion dollars if AI doesn't come through, but they will earn that back in 3-5 years. So why would they need to do deranged marketing stunts and sacrifice their credibility for that?<p>If OpenAI or Anthropic doesn't turn this into a trillion-dollar industry FAST, they are cooked.
The strategy of building up fear around your product is risky, but necessary. There is simply no way to grow the AI business fast enough if they can't talk directly to the CEOs and bypass input from the employees, and baba yaga stories are perfect for that. Every time the CEO hears an employee say that the AI isn't working great for him, he hears an employee who's scared for his job or for his life, dismisses it, and sends out a mandate that everyone needs to prompt an AI every time they so much as need to go to the toilet.</p>
]]></description><pubDate>Mon, 11 May 2026 14:25:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=48095432</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=48095432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48095432</guid></item><item><title><![CDATA[New comment by MadxX79 in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>How do you propose to do a Turing test on a human (in a sense that is different from a machine simply passing the Turing test)?<p>Like failing to pick out all the motorcycles in a captcha? Or a Turing test where a guy chats with two people without being told that one of them could be a computer, and the interrogator, unprompted, suggests that one of them might be a computer?</p>
]]></description><pubDate>Thu, 09 Apr 2026 12:24:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47702767</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47702767</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47702767</guid></item><item><title><![CDATA[New comment by MadxX79 in "The OpenAI Graveyard: All the Deals and Products That Haven't Happened"]]></title><description><![CDATA[
<p>They won't figure it out. It's the tragedy of the commons.</p>
]]></description><pubDate>Wed, 01 Apr 2026 19:50:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47605660</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47605660</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47605660</guid></item><item><title><![CDATA[New comment by MadxX79 in "ARC-AGI-3"]]></title><description><![CDATA[
<p>Yeah, so you are agreeing that the benchmarks are useless because they don't answer those questions.</p>
]]></description><pubDate>Thu, 26 Mar 2026 17:20:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47533148</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47533148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47533148</guid></item><item><title><![CDATA[New comment by MadxX79 in "ARC-AGI-3"]]></title><description><![CDATA[
<p>Same question I have for all these benchmarks:<p>What's going to stop e.g. OpenAI from hiring a bunch of teenagers to play these games non-stop for a month, annotating the games with their logic for deriving the rules, generating a data set from those playthroughs, and fine-tuning the next version of ChatGPT on all of it?</p>
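<p>And that pipeline is a weekend project, not a moonshot. A rough sketch with an entirely invented annotation format (to be clear: hypothetical, not something any lab has been shown to do):</p><pre><code># Hypothetical: turn human-annotated playthroughs into a fine-tuning
# dataset in the usual prompt/completion JSONL shape.
import json

# Imagined annotation format: one record per move, with the player's
# stated reasoning about the rule they inferred.
playthroughs = [
    {"observation": "grid shrinks when blue touches red",
     "reasoning": "blue seems to consume red; rule: matching pieces merge",
     "action": "move blue onto red"},
]

with open("finetune.jsonl", "w") as f:
    for step in playthroughs:
        f.write(json.dumps({
            "prompt": f"Game state: {step['observation']}\nWhat do you do and why?",
            "completion": f"{step['reasoning']} -> {step['action']}",
        }) + "\n")
</code></pre>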
]]></description><pubDate>Thu, 26 Mar 2026 13:41:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47530369</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47530369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47530369</guid></item><item><title><![CDATA[New comment by MadxX79 in "“Collaboration” is bullshit"]]></title><description><![CDATA[
<p>That pretty much describes Shape Up: <a href="https://basecamp.com/shapeup" rel="nofollow">https://basecamp.com/shapeup</a><p>I have a mixed relationship with it, but the scope-cutting part of it works extremely well.<p>The emphasis it puts on the problem being solved rather than on the concrete solution also feels healthy to me.</p>
]]></description><pubDate>Mon, 23 Mar 2026 20:28:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47494671</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47494671</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47494671</guid></item><item><title><![CDATA[New comment by MadxX79 in "Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning"]]></title><description><![CDATA[
<p>Yeah, enormously.
People will hedge depending on how sure they are about something. They might also have credentials in whatever you ask them about: if you get legal advice from a lawyer, it can be judged more reliable than advice from a layperson.<p>Relationships with real people are pretty cool, actually.
If you talk to people you have a longer relationship with, you can also judge their areas of expertise and how prone to bullshitting they are.</p>
]]></description><pubDate>Sun, 22 Mar 2026 15:09:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47478331</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47478331</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47478331</guid></item><item><title><![CDATA[New comment by MadxX79 in "Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning"]]></title><description><![CDATA[
<p>In my experience, the last answer it gives is usually the right one.</p>
]]></description><pubDate>Sun, 22 Mar 2026 11:40:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47476488</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47476488</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47476488</guid></item><item><title><![CDATA[New comment by MadxX79 in "Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning"]]></title><description><![CDATA[
<p>Great, now I have two answers and still no clue which one is the right one.</p>
]]></description><pubDate>Sun, 22 Mar 2026 09:51:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47475914</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47475914</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47475914</guid></item><item><title><![CDATA[New comment by MadxX79 in "Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning"]]></title><description><![CDATA[
<p>It's an interesting parallel to how people, especially right-wingers, want to project intelligence onto a single dimension so that all humans can be ordered from inferior to superior.
That logic was already strained with humans, but with the introduction of AI the wheels are really coming off that model.</p>
]]></description><pubDate>Sun, 22 Mar 2026 09:12:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47475751</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47475751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47475751</guid></item><item><title><![CDATA[New comment by MadxX79 in "Coding after coders: The end of computer programming as we know it?"]]></title><description><![CDATA[
<p>Now they have agents.<p>People need to understand that code is a liability. LLMs haven't changed that at all. Your LLM will get every bit as confused when you have a bug somewhere in the backend and you then work around it with another line of code in the frontend.</p>
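<p>A toy version of what I mean (an invented example, file names and all):</p><pre><code># backend.py -- off-by-one bug: pagination silently drops the last item.
def get_page(items, page, size=10):
    start = page * size
    return items[start:start + size - 1]  # BUG: should be start + size

# frontend.py -- instead of fixing the bug, someone "patches" around it.
def render_page(items, page):
    rows = get_page(items, page)
    # Workaround: re-fetch the row the backend dropped.
    idx = page * 10 + 9
    if len(items) > idx:
        rows.append(items[idx])
    return rows
</code></pre><p>Now neither layer can be understood, or safely changed, without the other, and that holds whether the reader is a human or an agent.</p>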
]]></description><pubDate>Sat, 14 Mar 2026 08:35:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47374571</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47374571</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47374571</guid></item><item><title><![CDATA[New comment by MadxX79 in "Coding after coders: The end of computer programming as we know it?"]]></title><description><![CDATA[
<p>Your developers were so preoccupied with whether or not they could, they didn't stop to think if they should (add 250kloc)</p>
]]></description><pubDate>Sat, 14 Mar 2026 07:22:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47374180</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47374180</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47374180</guid></item><item><title><![CDATA[New comment by MadxX79 in "Create value for others and don’t worry about the returns"]]></title><description><![CDATA[
<p>I get what you're saying, but I remember watching Teletubbies back in the day with my nephew, and all questions of the form:<p>Have ____ surpassed Teletubbies?<p>Can <i>always</i> be answered in the affirmative.</p>
]]></description><pubDate>Wed, 11 Mar 2026 11:26:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47334221</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47334221</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47334221</guid></item><item><title><![CDATA[New comment by MadxX79 in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>I love the <i>total</i> lack of humility on that site.
"What if the METR study turns out not to capture anything relevant? We just add a constant gap to be conservative!".
But I guess these guys aren't really scientists, so it's probably a lot to ask that they engage critically with what they are doing and be honest about the limitations of their methods.<p>What if it turns out that the more you scale, the more your LLM resembles a lobotomized human? It looks like things are going really well in the beginning, but you are just never going to get to Einstein. How does that affect everything?<p>What if it turned out that those AI companies had a whole bunch of humans solving the problems that currently sit just below the 50% reliability threshold they set, and fine-tuned on those solutions? That would make their models perform better on the benchmark, but it's just training for the test... will the constant gap be a good approximation then?</p>
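<p>For reference, the 50% threshold works roughly like this: fit a success-probability curve against task length and read off where it crosses one half. A toy sketch with invented data points (a least-squares stand-in for the proper maximum-likelihood fit; this is not METR's code):</p><pre><code># Toy sketch: estimate a 50%-reliability "time horizon" by fitting a
# logistic curve to (task length, success) pairs. All data is invented.
import numpy as np
from scipy.optimize import curve_fit

lengths = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480])  # minutes
success = np.array([1, 1, 1, 1, 1,  0,  1,   0,   0,   0])   # 0/1 outcomes

def p_success(log_len, h50, slope):
    # Decreasing logistic in log task length; h50 is where p = 0.5.
    return 1.0 / (1.0 + np.exp(slope * (log_len - h50)))

params, _ = curve_fit(p_success, np.log(lengths), success,
                      p0=[np.log(30.0), 1.0])
print(f"50% time horizon: about {np.exp(params[0]):.0f} minutes")
</code></pre>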
]]></description><pubDate>Mon, 09 Mar 2026 12:57:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47308436</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47308436</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47308436</guid></item><item><title><![CDATA[New comment by MadxX79 in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>It really dispelled the illusion for me. It's not that easy to find those examples, but the combinatorics of possible guesses is intractable enough that the model can't have learned a good set of clues for every possible guess.</p>
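<p>To put rough numbers on the intractability claim (the game isn't named above, so a Codenames-style setup is assumed purely for illustration; every figure below is back-of-the-envelope):</p><pre><code># Rough count of the states a model would need to have memorized to
# "know" a good clue for every situation. Codenames-style numbers, assumed.
from math import comb

vocab = 400    # assumed pool of board words
board = 25     # assumed words on the table per game
targets = 9    # assumed words your side must get guessed

boards = comb(vocab, board)            # distinct boards (order ignored)
pairs = boards * comb(board, targets)  # boards times target subsets
print(f"distinct boards: {float(boards):.2e}")
print(f"board/target combinations: {float(pairs):.2e}")
# Both are astronomically larger than any training corpus, so verbatim
# recall can't cover them; the gaps are where the illusion breaks.
</code></pre>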
]]></description><pubDate>Sun, 08 Mar 2026 21:34:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47301750</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47301750</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47301750</guid></item><item><title><![CDATA[New comment by MadxX79 in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>Isn't it just that he left way before gpt-5, then? At that point a sufficiently naive person could have believed that scaling was going to lead to AGI, but that sort of optimism died after he was already an outsider.</p>
]]></description><pubDate>Sun, 08 Mar 2026 20:20:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47301000</link><dc:creator>MadxX79</dc:creator><comments>https://news.ycombinator.com/item?id=47301000</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47301000</guid></item></channel></rss>