<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: nullbio</title><link>https://news.ycombinator.com/user?id=nullbio</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 09:35:13 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=nullbio" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by nullbio in "Anthropic tries to hide Claude's AI actions. Devs hate it"]]></title><description><![CDATA[
<p>Speaking of burning tokens, they also like to waste our tokens with paragraphs of system messages for every single file read you do with Claude. Take a look at your jsonl files, search for &lt;system-reminder&gt;.</p>
]]></description><pubDate>Mon, 16 Feb 2026 15:57:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47036633</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=47036633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47036633</guid></item><item><title><![CDATA[New comment by nullbio in "Claude Code is being dumbed down?"]]></title><description><![CDATA[
<p>^</p>
]]></description><pubDate>Mon, 16 Feb 2026 15:53:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47036577</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=47036577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47036577</guid></item><item><title><![CDATA[New comment by nullbio in "Claude Code is being dumbed down?"]]></title><description><![CDATA[
<p>The solution was simple for me: Cancel my max sub, start using Codex instead. Hopefully others do the same and Anthropic will learn to listen to their users.</p>
]]></description><pubDate>Mon, 16 Feb 2026 15:52:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47036569</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=47036569</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47036569</guid></item><item><title><![CDATA[New comment by nullbio in "Anthropic tries to hide Claude's AI actions. Devs hate it"]]></title><description><![CDATA[
<p>Well they've successfully burned a bridge with me. I had 2 max subs, cancelled one of them and have been using Codex religiously for the last couple of weeks. Haven't had a need for Claude Code at all, and every time I open it I get annoyed at how slow it is and the lack of feedback - looking at it spin for 20 minutes on a simple prompt with no feedback is infuriating. Honestly, I don't miss it at all.</p>
]]></description><pubDate>Mon, 16 Feb 2026 15:50:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47036530</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=47036530</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47036530</guid></item><item><title><![CDATA[New comment by nullbio in "Anthropic tries to hide Claude's AI actions. Devs hate it"]]></title><description><![CDATA[
<p>Out of principle I'm never paying them a cent for "fast mode". I've already started using Codex anyway, will probably just cancel my sub since I've found I actually haven't needed CC at all since making the switch.</p>
]]></description><pubDate>Mon, 16 Feb 2026 15:46:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47036467</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=47036467</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47036467</guid></item><item><title><![CDATA[New comment by nullbio in "Anthropic tries to hide Claude's AI actions. Devs hate it"]]></title><description><![CDATA[
<p>It's all well and good for Anthropic developers who have 10x the model speed us regular users have and so their TUI is streaming quickly. But over here, it takes 20 minutes for Claude to do a basic task.</p>
]]></description><pubDate>Mon, 16 Feb 2026 15:43:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47036418</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=47036418</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47036418</guid></item><item><title><![CDATA[New comment by nullbio in "We mourn our craft"]]></title><description><![CDATA[
<p>Repost this in a couple of years and it'll be relevant. Too soon though, as it stands.</p>
]]></description><pubDate>Sun, 08 Feb 2026 15:42:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46935261</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46935261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46935261</guid></item><item><title><![CDATA[New comment by nullbio in "We mourn our craft"]]></title><description><![CDATA[
<p>Strangely, yeah. LLMs are absolute trash at generating good UX and UI.</p>
]]></description><pubDate>Sun, 08 Feb 2026 15:39:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46935218</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46935218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46935218</guid></item><item><title><![CDATA[New comment by nullbio in "We mourn our craft"]]></title><description><![CDATA[
<p>We're on the precipice of something very disgusting. A massive power imbalance where a single company or two swallows the Earth's economy, due to a lack of competition, distribution and right of access laws. The wildest part is that these greedy companies, one of them in particular, are continuously framed in a positive light. This same company that has partnered with Palantir. AI should be a public good, not something gatekept by greedy capitalists with an ego complex.</p>
]]></description><pubDate>Sun, 08 Feb 2026 15:34:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46935163</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46935163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46935163</guid></item><item><title><![CDATA[New comment by nullbio in "A case study in PDF forensics: The Epstein PDFs"]]></title><description><![CDATA[
<p>The real question is: Which of the documents are the ones that are "simulating" scanned documents, and what political narrative do they reinforce?<p>The only reason I can think of for why someone would want to do this is to pass off fraudulent or AI generated images as real.</p>
]]></description><pubDate>Thu, 05 Feb 2026 08:23:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46897126</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46897126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46897126</guid></item><item><title><![CDATA[New comment by nullbio in "Show HN: Zuckerman – minimalist personal AI agent that self-edits its own code"]]></title><description><![CDATA[
<p>Yep. It's very obvious, and lazy.</p>
]]></description><pubDate>Sun, 01 Feb 2026 18:27:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46848148</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46848148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46848148</guid></item><item><title><![CDATA[New comment by nullbio in "Show HN: Zuckerman – minimalist personal AI agent that self-edits its own code"]]></title><description><![CDATA[
<p>> Agents propose and publish capabilities to a shared contribution site, letting others discover, adopt, and evolve them further. A collaborative, living ecosystem of personal AIs.<p>While I like this idea in terms of crowd-sourced intelligence, how do you prevent this being abused as an attack vector for prompt injection?</p>
]]></description><pubDate>Sun, 01 Feb 2026 18:24:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46848126</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46848126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46848126</guid></item><item><title><![CDATA[New comment by nullbio in "Show HN: UltraContext – A simple context API for AI agents with auto-versioning"]]></title><description><![CDATA[
<p>Something like this needs to be open-sourced. You're going to have a hell of a time trying to get enough trust from people to run all of their prompts through your servers.</p>
]]></description><pubDate>Thu, 29 Jan 2026 05:39:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46806234</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46806234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46806234</guid></item><item><title><![CDATA[New comment by nullbio in "Mistral 3 family of models released"]]></title><description><![CDATA[
<p>Benchmarks are never to be believed, and that has been the case since day 1.</p>
]]></description><pubDate>Tue, 02 Dec 2025 17:01:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46123399</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46123399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46123399</guid></item><item><title><![CDATA[New comment by nullbio in "Mistral 3 family of models released"]]></title><description><![CDATA[
<p>Google games benchmarks more than anyone, hence Gemini's strong bench lead. In reality though, it's still garbage for general usage.</p>
]]></description><pubDate>Tue, 02 Dec 2025 17:00:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46123373</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46123373</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46123373</guid></item><item><title><![CDATA[New comment by nullbio in "Mistral 3 family of models released"]]></title><description><![CDATA[
<p>Anyone else find that despite Gemini performing best on benches, it's actually still far worse than ChatGPT and Claude? It seems to hallucinate nonsense far more frequently than any of the others. Feels like Google just bench maxes all day every day. As for Mistral, hopefully OSS can eat all of their lunch soon enough.</p>
]]></description><pubDate>Tue, 02 Dec 2025 16:58:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46123345</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46123345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46123345</guid></item><item><title><![CDATA[New comment by nullbio in "What OpenAI did when ChatGPT users lost touch with reality"]]></title><description><![CDATA[
<p>When will folks stop trusting Palantir-partnered Anthropic is probably a better question.<p>Anthropic has weaponized the safety narrative into a marketing and political tool, and it is quite clear that they're pushing this narrative both for publicity from media that love the doomer narrative because it brings in ad-revenue, and for regulatory capture reasons.<p>Their intentions are obviously self-motivated, or they wouldn't be partnering with a company that openly prides itself on dystopian-level spying and surveillance of the world.<p>OpenAI aren't the good guys either, but I wish people would stop pretending like Anthropic are.</p>
]]></description><pubDate>Tue, 25 Nov 2025 01:41:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46041464</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46041464</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46041464</guid></item><item><title><![CDATA[New comment by nullbio in "Claude Opus 4.5"]]></title><description><![CDATA[
<p>You're forgetting the step where they write a nefarious paper for their marketing team about the "world-ending dangers" of the capabilities they've discovered within their new model, and push it out to their web of media companies who make bank from the ad-revenue from clicks on their doomsday articles while furthering the regulatory capture goals of the hypocritically Palantir-partnered Anthropic.</p>
]]></description><pubDate>Tue, 25 Nov 2025 01:24:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46041356</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46041356</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46041356</guid></item><item><title><![CDATA[New comment by nullbio in "Three Years from GPT-3 to Gemini 3"]]></title><description><![CDATA[
<p>Novel solutions require some combination of guided brute-force search over a knowledge-database/search-engine (NOT a search over the model's weights and NOT using chain of thought), combined with adaptive goal creation and evaluation, and reflective contrast against internal "learned" knowledge. Not only that, but it also requires exploration of the lower-probability space, i.e. results lesser explored, otherwise you're always going to end up with the most common and likely answers. That means being able to quantify what is a "less-likely but more novel solution" to begin with, which is a problem in itself. Transformer-architecture LLMs do not even come close to approaching AI in this way.<p>All the novel solutions humans create are a result of combining existing solutions (learned or researched in real-time) with subtle and lesser-explored avenues and variations that are yet to be tried, then verifying the results and cementing that acquired knowledge for future application as a building block for more novel solutions, as well as building a memory of when and where they may next be applicable. Building up this tree to eventually satisfy an end goal, and backtracking and reshaping that tree when a loss of confidence in successful goal evaluation is predicted.<p>This is clearly very computationally expensive. It is also very different to the statistical pattern repeaters we are currently using, especially considering that their entire premise works because the algorithm chooses the next most probable token, which is a function of the frequency with which that token appears in the training data. In other words, the algorithm is designed explicitly NOT to yield novel results, but rather to return the most likely result.
Higher temperature results tend to reduce textual coherence rather than increase novelty, because token frequency is a literal proxy for textual coherence in coherent training samples, and there is no actual "understanding" happening, nor reflection of the probability results at this level.<p>I'm sure smart people have figured a lot of this out already - we have general theory and ideas to back this, look into AIXI for example, and I'm sure there is far newer work. But I imagine that any efficient solutions to this problem will permanently remain in the realm of being a computational and scaling nightmare. Plus adaptive goal creation and evaluation is a really really hard problem, especially if text is your only modality of "thinking". My guess would be that it would require the models to create simulations of physical systems in text-only format, to be able to evaluate them, which also means being able to translate vague descriptions of physical systems into text-based physics sims with the same degrees of freedom as the real world - or at least the target problem, and then also imagine ideal outcomes in that same translated system, and develop metrics of "progress" within this system, for the particular target goal. This is a requirement for the feedback loop of building the tree of exploration and validation. Very challenging. I think these big companies are going to chase their tails for the next 10 years trying to reach an ever elusive intelligence goal, before begrudgingly conceding that existing LLM architectures will not get them there.</p>
]]></description><pubDate>Tue, 25 Nov 2025 01:19:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46041328</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46041328</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46041328</guid></item><item><title><![CDATA[New comment by nullbio in "Shai-Hulud Returns: Over 300 NPM Packages Infected"]]></title><description><![CDATA[
<p>You're right, that is the underlying problem. Not only of this, but of our entire economy.</p>
]]></description><pubDate>Tue, 25 Nov 2025 00:08:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46040914</link><dc:creator>nullbio</dc:creator><comments>https://news.ycombinator.com/item?id=46040914</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46040914</guid></item></channel></rss>