<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: famouswaffles</title><link>https://news.ycombinator.com/user?id=famouswaffles</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 13 May 2026 15:31:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=famouswaffles" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by famouswaffles in "A recent experience with ChatGPT 5.5 Pro"]]></title><description><![CDATA[
<p>>If you can sic ChatGPT on a mathematics problem and it can solve it without your input, that's a different matter but that's not what's happening.<p>I mean, that has happened, so... yeah?<p><a href="https://www.scientificamerican.com/article/amateur-armed-with-chatgpt-vibe-maths-a-60-year-old-problem/" rel="nofollow">https://www.scientificamerican.com/article/amateur-armed-wit...</a><p>Here's the actual GPT transcript, with zero such input:
<a href="https://chatgpt.com/share/69dd1c83-b164-8385-bf2e-8533e9baba9c" rel="nofollow">https://chatgpt.com/share/69dd1c83-b164-8385-bf2e-8533e9baba...</a><p>And maybe the other guy wasn't the most polite about it, but his point is very valid. Replace ChatGPT with a human in both of these stories and nobody would say that Timothy 'took the horse and made it drink'. The 'horse' would be the first, and likely only, author, so this just sounds like denial.<p>That there are multiple of these stories in the last few months, all from the latest set of models (there are even more than these two), <i>should</i> provoke this sort of consideration and discussion.</p>
]]></description><pubDate>Wed, 13 May 2026 07:10:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=48118750</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=48118750</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48118750</guid></item><item><title><![CDATA[New comment by famouswaffles in "Googlebook"]]></title><description><![CDATA[
<p>You're thinking about it the wrong way. Have you never come across some successful business idea and gone, 'Huh, I never realized this problem even existed', or even 'People are paying this much for this? Wow'?<p>These machines are general-purpose technologies used by hundreds of millions of people. ChatGPT alone is used by over 900M people every week. You can count the technologies with that scale of users on one hand.<p>You'll never conceive of all the uses they could possibly have, much like nobody could ever conceive of all the uses the internet had and will have, and it would be misguided to think you could. As you can see, there are about two dozen people here telling OP that the thing he thought 'no one' could possibly use LLMs for is in fact seeing some use.</p>
]]></description><pubDate>Wed, 13 May 2026 05:57:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=48118356</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=48118356</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48118356</guid></item><item><title><![CDATA[New comment by famouswaffles in "Googlebook"]]></title><description><![CDATA[
<p>Yeah, I (and I suspect a lot of others) email myself little files all the time because, surprisingly, that's the most convenient way to get files quickly from phone to laptop.</p>
]]></description><pubDate>Tue, 12 May 2026 20:15:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=48113882</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=48113882</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48113882</guid></item><item><title><![CDATA[New comment by famouswaffles in "I'm going back to writing code by hand"]]></title><description><![CDATA[
<p>Man, people really overestimate training. Claude did not 'read' any of that either. I <i>wish</i> frontier models behaved like people who had read and remembered everything they were trained on, but they don't.</p>
]]></description><pubDate>Mon, 11 May 2026 15:43:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=48096518</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=48096518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48096518</guid></item><item><title><![CDATA[New comment by famouswaffles in "Let's talk about LLMs"]]></title><description><![CDATA[
<p>You're not supposed to flag a post for something like that. Ideally you downvote and move on if you feel that strongly about it. Flagging is meant to be reserved for stuff that breaks the rules or guidelines.</p>
]]></description><pubDate>Mon, 04 May 2026 22:03:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=48015601</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=48015601</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48015601</guid></item><item><title><![CDATA[New comment by famouswaffles in "Where the goblins came from"]]></title><description><![CDATA[
<p>>Ex: I would not be comfortable flying on any airplane where the autopilot "just zones-out sometimes", even though it's a dysfunction also seen in people.<p>You might if that was the best an autopilot could be. Have you never used a bus or taken a taxi?<p>The vast majority of things people are using LLMs for aren't things deterministic logic machines did great at; they're things those same machines did poorly at, or things previously relegated to the domain of humans only.<p>If your competition also "just zones out sometimes", then it's not something you're going to focus on.</p>
]]></description><pubDate>Thu, 30 Apr 2026 04:55:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958287</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47958287</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958287</guid></item><item><title><![CDATA[New comment by famouswaffles in "AI should elevate your thinking, not replace it"]]></title><description><![CDATA[
<p>>LLM didn't solve an Erdos problem, it generated a text that a human looked at, cleaned up, corrected and used as base for a solution.<p>That's not at all what happened. You are clearly unable to understand the work it did, so it would have been nice if you'd read the article and the experts' accounts.</p>
]]></description><pubDate>Wed, 29 Apr 2026 17:55:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47951917</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47951917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47951917</guid></item><item><title><![CDATA[New comment by famouswaffles in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>This is one of a number of such results achieved only in the last few months, and only with the latest crop of models. They have undoubtedly gotten better in this domain; saying anything else is just denial. You can run these same problems on GPT-4 or 5 all you want and you'll get nowhere. In fact, people did, and you're hearing about it now because it's this crop of models that is getting meaningful results.</p>
]]></description><pubDate>Wed, 29 Apr 2026 17:43:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47951778</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47951778</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47951778</guid></item><item><title><![CDATA[New comment by famouswaffles in "How ChatGPT serves ads"]]></title><description><![CDATA[
<p>>The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans.<p>Unless they botch the implementation, it's not going to be negligible with ~800M+ free-tier users.</p>
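<p>A rough back-of-envelope makes the point. The per-user ad revenue figures below are hypothetical, chosen only to show the scale; the ~800M user count is the one from the comment above:<pre><code># Sketch: ad revenue at ChatGPT's free-tier scale (hypothetical ARPU values)
free_users = 800_000_000              # ~800M+ free-tier users, per the figure above
for arpu_usd_per_year in (1, 5, 20):  # hypothetical ad revenue per user per year, USD
    total = free_users * arpu_usd_per_year
    print(f"${arpu_usd_per_year}/user/yr -> ${total / 1e9:.1f}B/yr")
</code></pre>Even at a hypothetical $1 per user per year, that's about $0.8B/yr, which is hard to call negligible.</p>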
]]></description><pubDate>Wed, 29 Apr 2026 03:05:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47943751</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47943751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47943751</guid></item><item><title><![CDATA[New comment by famouswaffles in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>Glad I could clear that up for you</p>
]]></description><pubDate>Sun, 26 Apr 2026 16:06:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47911374</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47911374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47911374</guid></item><item><title><![CDATA[New comment by famouswaffles in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>>But you can only do that now, in hindsight.<p>No, you could always do that. The meaning you take from it is up to you, but you could always separate humans and calculators.<p>>No, that is not right. Fool’s gold is a thing.<p>I know what fool's gold is. I used it for contrast. Fool's gold can be tested for.<p>>but that doesn’t mean you know how to do it.<p>It doesn't matter. If you claim it exists but you don't know how to do it and you can't point to anyone who can, it's the same as something you made up.<p>>It’s like tasting two similar beers or sodas. You may be able to identify them by taste and understand they’re different but be unable to articulate exactly how you know which is which to the point someone else can use your verbal instructions to know the difference.<p>You are still making the same mistake. Two similar beers or sodas taste different. No one is asking you to come up with a theory of intelligence. All you have to say here is the equivalent of "It tastes different" and let me taste it for myself. But even that much, you cannot do. So why on earth should I treat what you say as worth anything?</p>
]]></description><pubDate>Sun, 26 Apr 2026 15:02:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47910899</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47910899</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47910899</guid></item><item><title><![CDATA[New comment by famouswaffles in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>>Most people would consider someone who can calculate 56863*2446 instantly in their head to be intelligent. Does that mean pocket calculators are intelligent? The result is the same.<p>If you wanted to insist a calculator wasn't intelligent and satisfy my conditions, then you could. At the very least you can test for the sort of intelligence that is present in humans but absent from calculators and cleanly separate the two. These are very easy conditions to meet if there is some actual, real difference.<p>>That is the equivalent of responding to criticism with “can you do better?”. One does not need to be a chef (or even know how to cook) to know when food tastes foul.<p>No, it's not, and this is a silly argument. Foul food tastes different. Sometimes it even looks different. You can test for it and satisfy my conditions.<p>You come across a shiny piece of yellow metal that you think is gold. It looks like gold, feels like gold and tests like gold. Suddenly a strange fellow comes along insisting that it's not actually gold. No, apparently there is a 'fake' gold. You are intrigued, so you ask him, "Alright, what exactly is fake gold, and how can I test for it or tell them apart?" But this fellow is completely unable to answer either question. What would you say about him? He's nothing more than a madman rambling about a distinction he made up in his head.<p>What I'm asking you to do is incredibly easy and basic if there's a real distinction. I'm not going to tell you to stop believing in your fake gold, but I am going to tell you that neither I nor anyone else can be expected to take you seriously.</p>
]]></description><pubDate>Sun, 26 Apr 2026 13:16:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47910079</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47910079</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47910079</guid></item><item><title><![CDATA[New comment by famouswaffles in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>Intelligence is intelligence. It's intelligent because it does intelligent things. If someone feels the need to add 'real' and 'fake' monikers to it so they can exclude the machine and make themselves feel better (or for whatever reason), then they are the one who's meant to do the defining, and to tell us how it can be tested for. If they can't, then there's no reason to pay attention to any of it; it's the equivalent of nonsensical rambling. At the end of the day, the semantic quibbling won't change anything.</p>
]]></description><pubDate>Sun, 26 Apr 2026 06:05:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47907792</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47907792</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47907792</guid></item><item><title><![CDATA[New comment by famouswaffles in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>Yeah? Those models are creative.</p>
]]></description><pubDate>Sun, 26 Apr 2026 04:20:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47907309</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47907309</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47907309</guid></item><item><title><![CDATA[New comment by famouswaffles in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>None of it really comes from logical thought. The rationalizations don't make any sense, but they haven't for a while. It's an emotional response. Honestly, it's to be expected.</p>
]]></description><pubDate>Sun, 26 Apr 2026 04:19:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47907301</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47907301</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47907301</guid></item><item><title><![CDATA[New comment by famouswaffles in "I'm done making desktop applications (2009)"]]></title><description><![CDATA[
<p>>To me, in my open source projects, my "development cycle" ends when I push to git, and that can be done as often as I want.<p>If development ends at a git push and users are left to build and fend for themselves (granted, this describes a lot of open source), then yeah, there's not much difference. But if you're building and packaging it up for users (which you're more likely to be doing if your project is specifically an app), then the difference is massive.</p>
]]></description><pubDate>Fri, 24 Apr 2026 21:58:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47896318</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47896318</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47896318</guid></item><item><title><![CDATA[New comment by famouswaffles in "We gave an AI a 3 year retail lease and asked it to make a profit"]]></title><description><![CDATA[
<p>Explanations can be faithful some of the time. As far as we're aware, that's the standard we can expect of any intelligence.<p><a href="https://arxiv.org/abs/2504.14150" rel="nofollow">https://arxiv.org/abs/2504.14150</a></p>
]]></description><pubDate>Fri, 17 Apr 2026 13:08:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47805537</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47805537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47805537</guid></item><item><title><![CDATA[New comment by famouswaffles in "We gave an AI a 3 year retail lease and asked it to make a profit"]]></title><description><![CDATA[
<p>I did answer it, albeit not directly. "Guaranteed to be the motivation" isn't a standard anyone can meet, so framing it that way doesn't really probe anything meaningful about LLMs specifically. If what you want to hear is "no", then sure, have your "no", but it doesn't mean anything. There's just not much to the question.<p>And though you framed it as a question borne of a greater understanding of LLMs, the interpretability research we have so far, and our still very limited understanding of the internal computations of these models, do not support your position, and certainly not how assured you are of it.</p>
]]></description><pubDate>Fri, 17 Apr 2026 06:07:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47802898</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47802898</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47802898</guid></item><item><title><![CDATA[New comment by famouswaffles in "We gave an AI a 3 year retail lease and asked it to make a profit"]]></title><description><![CDATA[
<p>>What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?<p>What research shows that you can ask a human to explain their reasoning and why they said what they said, and that's guaranteed to actually be the motivation? Because there's no such thing. If anything, what research exists suggests that any explanation we give is a tidy post-hoc rationalization, even when the human thinks otherwise.<p><a href="https://transformer-circuits.pub/2025/introspection/index.html" rel="nofollow">https://transformer-circuits.pub/2025/introspection/index.ht...</a></p>
]]></description><pubDate>Thu, 16 Apr 2026 20:11:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47798841</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47798841</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47798841</guid></item><item><title><![CDATA[New comment by famouswaffles in "We gave an AI a 3 year retail lease and asked it to make a profit"]]></title><description><![CDATA[
<p>Where do you get the idea that you have a good sense of the introspective capabilities of frontier models? Certainly not from interpretability research. Ironically, the people who make these sorts of comments understand LLMs the least.</p>
]]></description><pubDate>Thu, 16 Apr 2026 17:40:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47796876</link><dc:creator>famouswaffles</dc:creator><comments>https://news.ycombinator.com/item?id=47796876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47796876</guid></item></channel></rss>