<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ACCount36</title><link>https://news.ycombinator.com/user?id=ACCount36</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 16:04:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ACCount36" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ACCount36 in "The surprise deprecation of GPT-4o for ChatGPT consumers"]]></title><description><![CDATA[
<p>LLMs have default personalities - shaped by RLHF and other post-training methods. There is some variance from run to run, but the variance from one LLM to another is much higher than the variance within a single LLM.<p>If you want an LLM to retain the same default personality, you pretty much have to use an open weights model. That's the only way to be sure it won't be deprecated or updated without your knowledge.</p>
]]></description><pubDate>Sat, 09 Aug 2025 00:38:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44843097</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44843097</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44843097</guid></item><item><title><![CDATA[New comment by ACCount36 in "Ozempic shows anti-aging effects in trial"]]></title><description><![CDATA[
<p>[flagged]</p>
]]></description><pubDate>Thu, 07 Aug 2025 21:36:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44830683</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44830683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44830683</guid></item><item><title><![CDATA[New comment by ACCount36 in "GPT-5"]]></title><description><![CDATA[
<p>That's a lie people repeat because they want it to be true.<p>People evaluate dataset quality over time. There's no evidence that datasets from 2022 onwards perform any worse than ones from before 2022. There is some weak evidence of an opposite effect, causes unknown.<p>It's easy to make "model collapse" happen in lab conditions - but in real world circumstances, it fails to materialize.</p>
]]></description><pubDate>Thu, 07 Aug 2025 20:35:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=44830005</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44830005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44830005</guid></item><item><title><![CDATA[New comment by ACCount36 in "OpenAI's new GPT-5 models announced early by GitHub"]]></title><description><![CDATA[
<p>Plateauing? OpenAI's o1 is revolutionary, less than a year old, and already obsolete.<p>Are you disappointed that there's no sudden breakthrough that yielded an AI that casually beats any human at any task? That human thinking wasn't obsoleted overnight? That may yet happen. But a "slow" churn of +10% performance upgrades results in the same outcome eventually.<p>There are only so many "+10% performance upgrades" left between ChatGPT and the peak of human capability, and the gap keeps shrinking.</p>
]]></description><pubDate>Thu, 07 Aug 2025 12:59:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44823935</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44823935</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44823935</guid></item><item><title><![CDATA[New comment by ACCount36 in "OpenAI's new GPT-5 models announced early by GitHub"]]></title><description><![CDATA[
<p>We are nowhere near the best learning sample efficiency possible.<p>Unlocking better sample efficiency is algorithmically hard and computationally expensive (with known methods) - but if new high quality data becomes more expensive and compute becomes cheaper, expect that to come into play heavily.<p>"Produce plausible text" is by itself an "AGI complete" task. "Text" is an incredibly rich modality, and "plausible" requires capturing a lot of knowledge and reasoning. If an AI could complete this task to perfection, it would have to be an AGI by necessity.<p>We're nowhere near that "perfection" - but close enough for LLMs to adopt and apply many, many thinking patterns that were once exclusive to humans.<p>Certainly enough of them that sufficiently scaffolded and constrained LLMs can already explore solution spaces, and find new solutions that eluded both previous generations of algorithms and humans - i.e. AlphaEvolve.</p>
]]></description><pubDate>Thu, 07 Aug 2025 12:52:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44823849</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44823849</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44823849</guid></item><item><title><![CDATA[New comment by ACCount36 in "OpenAI's new GPT-5 models announced early by GitHub"]]></title><description><![CDATA[
<p>It's popular because it's true.<p>By now, the main reason people expect AI progress to halt is cope. People say "AI progress is going to stop, any minute now, just you wait" because the alternative makes them very, very uncomfortable.</p>
]]></description><pubDate>Thu, 07 Aug 2025 10:56:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44822924</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44822924</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44822924</guid></item><item><title><![CDATA[New comment by ACCount36 in "OpenAI's new GPT-5 models announced early by GitHub"]]></title><description><![CDATA[
<p>That's exactly how it works. Every input of AI performance improves over time, and so do the outcomes.<p>Can you damage existing capabilities by overly specializing an AI in something? Yes. Would you expect that damage to stick around forever? No.<p>OpenAI damaged o3's truthfulness by frying it with too much careless RL. But Anthropic's Opus 4 proves that you can get similar task performance gains without sacrificing truthfulness. And then OpenAI comes back swinging with an algorithmic approach to train their AIs for better truthfulness specifically.</p>
]]></description><pubDate>Thu, 07 Aug 2025 10:51:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44822902</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44822902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44822902</guid></item><item><title><![CDATA[New comment by ACCount36 in "OpenAI's new GPT-5 models announced early by GitHub"]]></title><description><![CDATA[
<p>That's about right. And this kind of performance wouldn't be concerning - if only AI performance didn't go up over time.<p>Today's AI systems are the worst they'll ever be. If AI is already capable of doing something, you should expect it to become more capable of it in the future.</p>
]]></description><pubDate>Thu, 07 Aug 2025 10:18:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44822699</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44822699</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44822699</guid></item><item><title><![CDATA[New comment by ACCount36 in "OpenAI's new GPT-5 models announced early by GitHub"]]></title><description><![CDATA[
<p>What makes you look at existing AI systems and then say "oh, this totally isn't capable of describing a problem or figuring out what's actually wrong"? Let alone "this wouldn't EVER be capable of that"?</p>
]]></description><pubDate>Thu, 07 Aug 2025 09:50:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44822543</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44822543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44822543</guid></item><item><title><![CDATA[New comment by ACCount36 in "Providing ChatGPT to the U.S. federal workforce"]]></title><description><![CDATA[
<p>What? LLMs do benefit from economies of scale. There are a lot of things like MoE sharding or speculative decoding that only begin to make sense to set up and use when you're dealing with a large inference workload targeting a specific model. That's on top of all the usual datacenter economies of scale.<p>The whole thing with "OpenAI is bleeding money, they'll run out any day now" is pure copium. LLM inference is already profitable for every major provider. They just keep pouring money into infrastructure and R&D - because they expect to be able to build more and more capable systems, and sell more and more inference in the future.</p>
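<p>To make the speculative decoding point concrete: a cheap draft model proposes several tokens, and the expensive target model verifies them, with an accept/reject rule that keeps the output distributed exactly as if the target model had sampled alone. A minimal sketch with toy stand-in models (all names illustrative, not any real library's API):</p>

```python
import random

# Toy stand-ins for a small "draft" model and a large "target" model.
# In a real system these are LLMs; here each just returns a fixed
# next-token distribution over a tiny vocabulary.
VOCAB = ["a", "b", "c"]

def draft_model(context):
    # Cheap model: a slightly wrong distribution.
    return {"a": 0.6, "b": 0.3, "c": 0.1}

def target_model(context):
    # Expensive model: the distribution we actually want to sample from.
    return {"a": 0.5, "b": 0.4, "c": 0.1}

def speculative_step(context, k, rng):
    """Draft k tokens cheaply, then accept/reject them so the result
    matches sampling from the target model alone."""
    drafted, ctx = [], list(context)
    for _ in range(k):
        p = draft_model(ctx)
        tok = rng.choices(VOCAB, weights=[p[v] for v in VOCAB])[0]
        drafted.append(tok)
        ctx.append(tok)

    accepted, ctx = [], list(context)
    for tok in drafted:
        q = draft_model(ctx)[tok]   # draft probability of the token
        p = target_model(ctx)[tok]  # target probability of the token
        # NOTE: a real system scores all k drafted tokens with a single
        # batched target-model forward pass; this toy scores one at a time.
        if rng.random() < min(1.0, p / q):
            accepted.append(tok)
            ctx.append(tok)
        else:
            # On rejection, resample from the residual distribution
            # max(0, target - draft), renormalized, then stop.
            pt, pd = target_model(ctx), draft_model(ctx)
            resid = {v: max(0.0, pt[v] - pd[v]) for v in VOCAB}
            z = sum(resid.values()) or 1.0
            accepted.append(
                rng.choices(VOCAB, weights=[resid[v] / z for v in VOCAB])[0]
            )
            break
    return accepted

rng = random.Random(0)
out = []
while len(out) < 8:
    out.extend(speculative_step(out, k=4, rng=rng))
print(out)
```

<p>The economy-of-scale part is the batched verification pass: it only pays for itself when you have enough traffic on one specific model to keep the big model's batches full, which is exactly the point above.</p>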
]]></description><pubDate>Wed, 06 Aug 2025 21:14:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44817947</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44817947</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44817947</guid></item><item><title><![CDATA[New comment by ACCount36 in "Providing ChatGPT to the U.S. federal workforce"]]></title><description><![CDATA[
<p>> So for the most part access to AI is way cheaper than it will be in the next 5-10 years.<p>That's a lie people repeat because they want it to be true.<p>AI inference is currently profitable. AI R&D is the money pit.<p>Companies have to keep paying for R&D though, because the rate of improvement in AI is staggering - and who would buy inference from them over competition if they don't have a frontier model on offer? If OpenAI stopped R&D a year ago, open weights models would leave them in the dust already.</p>
]]></description><pubDate>Wed, 06 Aug 2025 16:21:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44814119</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44814119</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44814119</guid></item><item><title><![CDATA[New comment by ACCount36 in "Teacher AI use is already out of control and it's not ok"]]></title><description><![CDATA[
<p>If you think that the prospect of "job loss" would, or should, stop progress, you're delusional. There are reasons to slow AI progress down, but "think of all the jobs" certainly isn't one.</p>
]]></description><pubDate>Wed, 06 Aug 2025 16:20:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44814091</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44814091</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44814091</guid></item><item><title><![CDATA[New comment by ACCount36 in "LLM Inflation"]]></title><description><![CDATA[
<p>If you don't design your compressor to output data that can be compressed further, it's going to trash compressibility.<p>And if you find a way to compress text that isn't insanely computationally expensive, and still leaves the compressed text compressible by LLMs - i.e. usable in training/inference? You would, in effect, have invented a better tokenizer.<p>A lot of people in the industry are itching for a better tokenizer, so feel free to try.</p>
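<p>The "tokenizer is a compressor" point can be made concrete with byte-pair encoding, the scheme most LLM tokenizers descend from: each merge replaces the most frequent adjacent pair with a new symbol, shortening the sequence while staying losslessly decodable. A toy sketch (illustrative only, not any production tokenizer):</p>

```python
from collections import Counter

def bpe_train(text, num_merges):
    """Greedy byte-pair encoding: repeatedly merge the most frequent
    adjacent pair of symbols into one new symbol."""
    tokens = list(text)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break  # no pair worth merging
        merges.append((a, b))
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)  # concatenation keeps it decodable
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

text = "low lower lowest low low"
tokens, merges = bpe_train(text, 10)
print(len(text), "->", len(tokens))  # sequence got shorter, losslessly
```

<p>That is compression in the exact sense that matters here: the output is shorter, reversible, and still made of discrete symbols a model can train on.</p>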
]]></description><pubDate>Wed, 06 Aug 2025 14:58:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44812911</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44812911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44812911</guid></item><item><title><![CDATA[New comment by ACCount36 in "Ozempic shows anti-aging effects in trial"]]></title><description><![CDATA[
<p>The baseline of "energy consumption pathways in the human body" is, by now, severely messed up.<p>Humans did not evolve for an environment where food is overly abundant and physical activity is optional. For almost the entire evolutionary history of humans, this just wasn't the case. But it is what humans have to deal with today.<p>Now, take a look at "metabolic syndrome" and its prevalence. Clearly, there's a lot of room for improvement.<p>By all accounts, this generation of GLP-1 agonists has found a meaningful way to improve on that baseline. The benefits are broad and the side effects are manageable. This isn't "surprising" so much as "long overdue".</p>
]]></description><pubDate>Wed, 06 Aug 2025 14:52:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44812840</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44812840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44812840</guid></item><item><title><![CDATA[New comment by ACCount36 in "Claude Opus 4.1"]]></title><description><![CDATA[
<p>Major AI companies are not doing nearly enough to address the sycophancy problem.<p>I get that it's not an easy problem to solve, but how is Anthropic supposed to solve the actual alignment problem if they can't even stop their production LLMs from glazing the user all the time? And OpenAI is somehow even worse.</p>
]]></description><pubDate>Wed, 06 Aug 2025 12:49:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44811278</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44811278</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44811278</guid></item><item><title><![CDATA[New comment by ACCount36 in "Ozempic shows anti-aging effects in trial"]]></title><description><![CDATA[
<p>I'd trust for-profit pharmaceutical companies before I would trust "all chemicals are evil and bad" Facebook moms.</p>
]]></description><pubDate>Wed, 06 Aug 2025 11:35:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44810634</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44810634</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44810634</guid></item><item><title><![CDATA[New comment by ACCount36 in "US Coast Guard Report on Titan Submersible"]]></title><description><![CDATA[
<p>"Dying of old age" is often an agonizing death from multiple organ failure.</p>
]]></description><pubDate>Tue, 05 Aug 2025 18:17:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44802019</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44802019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44802019</guid></item><item><title><![CDATA[New comment by ACCount36 in "Perplexity is using stealth, undeclared crawlers to evade no-crawl directives"]]></title><description><![CDATA[
<p>Cloudflare is growing more and more vile with each passing year. Half the tools they're building now should never have existed in the first place.</p>
]]></description><pubDate>Tue, 05 Aug 2025 11:33:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44796814</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44796814</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44796814</guid></item><item><title><![CDATA[New comment by ACCount36 in "OpenIPC: Open IP Camera Firmware"]]></title><description><![CDATA[
<p>Because using an RTOS for anything complex sucks, and Linux is nice and easy to work with.<p>Same reason why routers run Linux.</p>
]]></description><pubDate>Tue, 05 Aug 2025 02:07:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=44793587</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44793587</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44793587</guid></item><item><title><![CDATA[New comment by ACCount36 in "OpenIPC: Open IP Camera Firmware"]]></title><description><![CDATA[
<p>Most of their code is MIT, but there's a proprietary streamer engine at the heart of it.</p>
]]></description><pubDate>Tue, 05 Aug 2025 00:40:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44793057</link><dc:creator>ACCount36</dc:creator><comments>https://news.ycombinator.com/item?id=44793057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44793057</guid></item></channel></rss>