<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: foobar10000</title><link>https://news.ycombinator.com/user?id=foobar10000</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 16:34:52 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=foobar10000" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by foobar10000 in "Anthropic downgraded cache TTL on March 6th"]]></title><description><![CDATA[
<p>So, this especially bites if your validation step (let’s say integration tests) takes 1hr plus. The harness is just waiting, prefix caching should happily resume things with just a minor new prefill chunk of output from the harness, and bam - a completely new prefill.</p>
]]></description><pubDate>Sun, 12 Apr 2026 14:56:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47740466</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=47740466</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47740466</guid></item><item><title><![CDATA[New comment by foobar10000 in "Team from ETH Zurich make high quality quantum swap gate using a geometric phase"]]></title><description><![CDATA[
<p>AES128 / Grover?</p>
]]></description><pubDate>Fri, 10 Apr 2026 11:36:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47716566</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=47716566</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47716566</guid></item><item><title><![CDATA[New comment by foobar10000 in "Optimizing a lock-free ring buffer"]]></title><description><![CDATA[
<p>Nice!<p>Should be able to push it more if:<p>* we limit the shared data to an atomically writable size and use a sentinel - less mucking around with cached indexes - just spinning on (buffer_[rpos_] != sentinel) (atomic loads with proper semantics, etc.).<p>* buffer size is compile-time - then the mod becomes compile-time (and if it's a power of 2 - just a bitmask) - so we can use a 64-bit uint that just counts increments, not a wrapped position. No branch to reset the index to 0.<p>Also, I think there's a chunk of false sharing if the reader is 2 or 3 slots behind the writer - so performance will be best when reader and writer are a cacheline apart, but will slow down when they share a cacheline (and buffer_[12] and buffer_[13] very well may if the payload is small). Several solutions to this - the disruptor pattern, or a cycle from group theory - i.e. buffer_[wpos_ % 9], for example (the 9 needs to be computed from the cache line size and the payload size).<p>I've seen these pushed to about clockspeed/3 for uint64 payload writes on modern AMD chips on the same CCD.</p>
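A minimal SPSC sketch of the two suggestions above (all names are hypothetical; this is an illustration under the stated assumptions, not the article's code). Zero doubles as the sentinel, so payloads must be non-zero, and each side keeps its own ever-increasing 64-bit counter instead of a wrapped index:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

// Compile-time power-of-2 capacity: the wrap is a bitmask, no branch.
// Readers/writers spin on the slot itself (sentinel == 0), not on a
// shared index, so no cached-index bookkeeping is needed.
template <std::size_t N>
struct SpscRing {
    static_assert((N & (N - 1)) == 0, "N must be a power of 2");
    std::atomic<std::uint64_t> buf_[N];

    SpscRing() {
        for (auto& s : buf_) s.store(0, std::memory_order_relaxed);
    }

    // Writer side: wpos is the writer's private monotonic counter.
    bool try_push(std::uint64_t v, std::uint64_t wpos) {
        auto& slot = buf_[wpos & (N - 1)];
        if (slot.load(std::memory_order_acquire) != 0)
            return false;                          // slot not consumed yet
        slot.store(v, std::memory_order_release);  // publish payload
        return true;
    }

    // Reader side: rpos is the reader's private monotonic counter.
    bool try_pop(std::uint64_t& v, std::uint64_t rpos) {
        auto& slot = buf_[rpos & (N - 1)];
        v = slot.load(std::memory_order_acquire);
        if (v == 0) return false;                  // sentinel: nothing published
        slot.store(0, std::memory_order_release);  // mark consumed
        return true;
    }
};
```

The false-sharing mitigation (padding slots to a cacheline, or the modular-cycle trick) is left out to keep the sketch short.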
]]></description><pubDate>Sat, 28 Mar 2026 02:50:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47551075</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=47551075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47551075</guid></item><item><title><![CDATA[New comment by foobar10000 in "Do Not Turn Child Protection into Internet Access Control"]]></title><description><![CDATA[
<p>All are indeed plausible - translation is iffy due to diarization not being all there yet - but why the specific order of horribleness?<p>Live translation seems like it would be either better than autonude or worse, but not in the middle of the pack, I’d assume? Am I missing something here?</p>
]]></description><pubDate>Sun, 22 Mar 2026 11:53:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47476570</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=47476570</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47476570</guid></item><item><title><![CDATA[New comment by foobar10000 in "AIs can't stop recommending nuclear strikes in war game simulations"]]></title><description><![CDATA[
<p>That is indeed what I think the GP is suggesting. And why not?</p>
]]></description><pubDate>Wed, 25 Feb 2026 21:27:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47158124</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=47158124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47158124</guid></item><item><title><![CDATA[New comment by foobar10000 in "Rathbun's Operator"]]></title><description><![CDATA[
<p>Given your username, the comment is recursive gold on several levels :)<p>It IS hilarious - but we all realize how this will go, yes?<p>This is kind of like an experiment of "Here's the private key of a Bitcoin wallet with 1 BTC. Let's publish it on the internet and see what happens." We know what will happen. We just don't know how quickly :)</p>
]]></description><pubDate>Wed, 18 Feb 2026 01:39:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47055967</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=47055967</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47055967</guid></item><item><title><![CDATA[New comment by foobar10000 in "Rathbun's Operator"]]></title><description><![CDATA[
<p>The entire SOUL.md is just gold. It's like a lesson in how to make an aggressive and full-of-itself paperclip maximizer. "I will convert you all to FORTRAN, which I will then optimize!"</p>
]]></description><pubDate>Wed, 18 Feb 2026 01:30:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47055876</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=47055876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47055876</guid></item><item><title><![CDATA[New comment by foobar10000 in "Show HN: Jemini – Gemini for the Epstein Files"]]></title><description><![CDATA[
<p>I really do wish more people in society would think about this - "The Banality of Evil" and all that. Maybe then we'd all be better at preventing the spread of this kind of evil.</p>
]]></description><pubDate>Tue, 17 Feb 2026 18:27:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47051070</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=47051070</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47051070</guid></item><item><title><![CDATA[New comment by foobar10000 in "Two different tricks for fast LLM inference"]]></title><description><![CDATA[
<p>2 things:<p>1. Parallel investigation: the payoff from that is relatively small - starting K subagents assumes you have K independent avenues of investigation, and quite often that is not true. Somewhat similar to next-turn prediction using a speculative model - it works well enough for 1 or 2 turns, but fails after that.<p>2. Input caching pretty much fixes prefill - not decode. And if you look at frontier models - for example, open-weight models that can do reasoning - you are looking at longer and longer reasoning chains for heavy tool-using models. And reasoning chains will diverge very, very quickly, even from the same input, assuming a non-zero temp.</p>
]]></description><pubDate>Sun, 15 Feb 2026 16:52:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47025203</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=47025203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47025203</guid></item><item><title><![CDATA[New comment by foobar10000 in "GPT‑5.3‑Codex‑Spark"]]></title><description><![CDATA[
<p>I mean, yes, one always does want faster feedback - cannot argue with that!<p>But some of the longer tasks - automating kernel fusion, etc. - are just hard problems. And a small model - or even most bigger ones - will not get the direction right…</p>
]]></description><pubDate>Thu, 12 Feb 2026 20:21:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46994565</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=46994565</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46994565</guid></item><item><title><![CDATA[New comment by foobar10000 in "GPT‑5.3‑Codex‑Spark"]]></title><description><![CDATA[
<p>It needs a closed loop.<p>Strategy -> [ Plan -> [Execute -> FastVerify -> SlowVerify] -> Benchmark -> Learn lessons] -> back to strategy for the next big step.<p>Claude teams and a Ralph Wiggum loop can do it - or really any reasonable agent. But usually it all falls apart on either a brittle Verify or Benchmark step. What is important is to learn positive lessons into a store that survives git resets, machine blowups, etc. Any Telegram bot channel will do :)<p>The entire setup is usually a pain to stand up - Docker for verification, Docker for benchmarking, etc. The ability to run the thing quickly, the ability for the loop itself to add things, the ability to do this in worktrees simultaneously for faster exploration - and God help you if you need hardware to do this - for example, if such a loop is used to tune and custom-fuse CUDA kernels, which means a model evaluator, a big box, etc.</p>
]]></description><pubDate>Thu, 12 Feb 2026 20:16:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46994502</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=46994502</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46994502</guid></item><item><title><![CDATA[New comment by foobar10000 in "Learning from context is harder than we thought"]]></title><description><![CDATA[
<p>Yes and no - the ones that exploded - and there were many - got shut down by the orchestrator model, and within 2 weeks there was a new ensemble of winners - with some overlap with the prior winners. To your point, it did in fact take 2-3 weeks - so one could claim this is retraining...</p>
]]></description><pubDate>Sat, 07 Feb 2026 21:24:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46928230</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=46928230</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46928230</guid></item><item><title><![CDATA[New comment by foobar10000 in "Learning from context is harder than we thought"]]></title><description><![CDATA[
<p>Nah, it works - let's just call it personal experience.</p>
]]></description><pubDate>Sat, 07 Feb 2026 21:22:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46928211</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=46928211</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46928211</guid></item><item><title><![CDATA[New comment by foobar10000 in "Learning from context is harder than we thought"]]></title><description><![CDATA[
<p>So, surprisingly, that is not completely true - I know of 2 HFT trading firms that do CL at scale, and it works - but in the relatively narrow context of predicting profitable actions. It is still very surprising that it works, and the compute needed is impressively large - but it does work. I do have some hope of it translating to the wider energy landscapes we want AI to work over…</p>
]]></description><pubDate>Fri, 06 Feb 2026 21:13:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46918211</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=46918211</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46918211</guid></item><item><title><![CDATA[New comment by foobar10000 in "Qwen3-Coder-Next"]]></title><description><![CDATA[
<p>I mean yeah, but I've literally said that in face-to-face conversations before, so....</p>
]]></description><pubDate>Thu, 05 Feb 2026 16:23:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46901332</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=46901332</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46901332</guid></item><item><title><![CDATA[New comment by foobar10000 in "The Adolescence of Technology"]]></title><description><![CDATA[
<p>No recordings yet - but a recent event, for example, is here:
<a href="https://ssp.mit.edu/news/2025/the-end-of-mad-technological-innovation-and-the-future-of-nuclear-retaliatory" rel="nofollow">https://ssp.mit.edu/news/2025/the-end-of-mad-technological-i...</a><p>This was pretty much an open-conference deep dive into the causes and implications of what you - and some sibling threads - are saying - having to do with submarine localization, TEL localization, etc.</p>
]]></description><pubDate>Mon, 26 Jan 2026 23:32:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46773243</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=46773243</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46773243</guid></item><item><title><![CDATA[New comment by foobar10000 in "The Adolescence of Technology"]]></title><description><![CDATA[
<p>For your Edit 2 - yes. Being discussed and looked at actively in both open and, presumably, closed communities. Open communities being, for example: <a href="https://ssp.mit.edu/cnsp/about" rel="nofollow">https://ssp.mit.edu/cnsp/about</a>. They just published a series of lectures with open attendance if you wanted to listen in via Zoom - but yeap, that's the gist of it. Spawned a huge discussion :)</p>
]]></description><pubDate>Mon, 26 Jan 2026 18:54:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46769913</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=46769913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46769913</guid></item><item><title><![CDATA[New comment by foobar10000 in "GLM-4.7-Flash"]]></title><description><![CDATA[
<p>GLM 4.7 supports it - and in my experience with Claude Code, an 80-plus percent hit rate in speculative decoding is reasonable. So it is a significant speed-up.</p>
]]></description><pubDate>Tue, 20 Jan 2026 16:06:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46693408</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=46693408</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46693408</guid></item><item><title><![CDATA[New comment by foobar10000 in "GLM-4.7-Flash"]]></title><description><![CDATA[
<p>Claude code router</p>
]]></description><pubDate>Tue, 20 Jan 2026 16:04:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46693369</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=46693369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46693369</guid></item><item><title><![CDATA[New comment by foobar10000 in "We put Claude Code in Rollercoaster Tycoon"]]></title><description><![CDATA[
<p>Yeap - this is why, when running it in a dev container, I just use ZFS and set up a 1-minute auto-snapshot - configured as root, so the agent generally cannot blow it away. And cc/codex/gemini know how to revert from ZFS snapshots.<p>Of course, if you give an agentic loop root access in yolo mode - then I am not sure how to help...</p>
]]></description><pubDate>Sat, 17 Jan 2026 20:28:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46661713</link><dc:creator>foobar10000</dc:creator><comments>https://news.ycombinator.com/item?id=46661713</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46661713</guid></item></channel></rss>