<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: somebodythere</title><link>https://news.ycombinator.com/user?id=somebodythere</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 09:04:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=somebodythere" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by somebodythere in "Sauna effect on heart rate"]]></title><description><![CDATA[
<p>There is some evidence suggesting that "blue zones" are largely about pension fraud. <a href="https://fortune.com/europe/2024/12/14/are-blue-zones-myth-extreme-aging-pension-fraud-century-old-lies/" rel="nofollow">https://fortune.com/europe/2024/12/14/are-blue-zones-myth-ex...</a></p>
]]></description><pubDate>Mon, 20 Apr 2026 14:34:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47834983</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=47834983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47834983</guid></item><item><title><![CDATA[New comment by somebodythere in "Anthropic takes legal action against OpenCode"]]></title><description><![CDATA[
<p>Using your API key in third-party harnesses has always been allowed. What they don't like is people using the subsidized subscription plan outside of first-party harnesses. So this seems to be out of spite.</p>
]]></description><pubDate>Thu, 19 Mar 2026 20:43:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47445760</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=47445760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47445760</guid></item><item><title><![CDATA[New comment by somebodythere in "We tasked Opus 4.6 using agent teams to build a C Compiler"]]></title><description><![CDATA[
<p>Even a squirrel that needs guidance from a human grandmaster, is heavily inspired by existing games, and can use a Piece Mover library is incredible. Five years ago the squirrel was just a squirrel. Then it was able to make legal moves. Now it can play a whole game from start to finish, with help. That is incredible.</p>
]]></description><pubDate>Fri, 06 Feb 2026 09:40:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46910844</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=46910844</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46910844</guid></item><item><title><![CDATA[New comment by somebodythere in "AI2: Open Coding Agents"]]></title><description><![CDATA[
<p>No. You can point e.g. Opencode/Cline/Roo Code/Kilo Code at your inference endpoint. But CC has a high install base and users are used to it, so it makes sense to target it.</p>
]]></description><pubDate>Wed, 28 Jan 2026 12:15:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46794339</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=46794339</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46794339</guid></item><item><title><![CDATA[New comment by somebodythere in "The future of software development is software developers"]]></title><description><![CDATA[
<p>Why would I ask the model to reverse the string 'glorbix,' especially in the context of software engineering?</p>
]]></description><pubDate>Tue, 30 Dec 2025 12:50:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46432817</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=46432817</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46432817</guid></item><item><title><![CDATA[New comment by somebodythere in "Mozilla appoints new CEO Anthony Enzor-Demeo"]]></title><description><![CDATA[
<p>You personally wouldn't use live captions and dubbing, so there's no point building it for the millions of people who need it as an accessibility feature?</p>
]]></description><pubDate>Wed, 17 Dec 2025 00:21:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46296632</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=46296632</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46296632</guid></item><item><title><![CDATA[New comment by somebodythere in "Anthropic taps IPO lawyers as it races OpenAI to go public"]]></title><description><![CDATA[
<p>Rufus is a Claude Haiku, yes.</p>
]]></description><pubDate>Wed, 03 Dec 2025 17:38:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46137457</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=46137457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46137457</guid></item><item><title><![CDATA[New comment by somebodythere in "Code Wiki: Accelerating your code understanding"]]></title><description><![CDATA[
<p>I've seen a few things of this type pop up in search results ("DeepWiki" by Cognition). I'm not a fan. It's basically just LLM content slop. Actual wikis written by humans carry real insight from developers and users: "we intend you to use it in X way", "if you encounter Y issue, do Z", etc. Look at the Arch Wiki: peak wiki-style documentation that LLMs could never recreate. Maybe a future iteration of the technology will be useful here, but for now you don't gain much by essentially restating code, API interfaces, and tests in prose. These pages just crowd legitimate documentation and developer instruction out of search results.</p>
]]></description><pubDate>Wed, 03 Dec 2025 00:29:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46128818</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=46128818</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46128818</guid></item><item><title><![CDATA[New comment by somebodythere in "Anthropic acquires Bun"]]></title><description><![CDATA[
<p>I think this wound up being close enough to true; it's just that it actually says less than what people assumed at the time.<p>It's basically the Jevons paradox for code. The price of a line of code (in human engineer-hours) has decreased a lot, so there is a bunch of code that is now economically justifiable which wouldn't have been written before. For example, I can prompt several ad-hoc benchmarking scripts in 1-2 minutes each to troubleshoot an issue, where each might have taken me 10-20 minutes by myself, allowing me to investigate many performance angles. Not everything gets committed to source control.<p>Put another way, at least in my workflow and at my workplace, the volume of code has increased. Most of that increase comes from new code that would not have been written if not for AI, and a smaller portion is code that I would have written before AI but now let the AI write so I can focus on harder tasks. Of course, penetration is uneven: AI helps more with tasks that are well represented in the training set (webapps, data science, Linux admin...) than with e.g. issues arising from quirky internal architecture, Rust, etc.</p>
]]></description><pubDate>Tue, 02 Dec 2025 20:07:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46126125</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=46126125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46126125</guid></item><item><title><![CDATA[New comment by somebodythere in "Many countries that said no to ChatControl in 2024 are now undecided"]]></title><description><![CDATA[
<p>Roughly, this is the Electronic Frontier Foundation (and comparable lobbying orgs in other countries.) However, an org like this doesn't have much power to compel individuals to give them $1.</p>
]]></description><pubDate>Fri, 01 Aug 2025 07:56:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44754102</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=44754102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44754102</guid></item><item><title><![CDATA[New comment by somebodythere in "Learning Is Slower Than You Think"]]></title><description><![CDATA[
<p>LLM argumentative essays tend to have this "gish-gallop" energy: they say a bunch of tenuously related and vaguely supported things, leaving the reader wondering whether it was the author who failed to connect the dots, or them.</p>
]]></description><pubDate>Tue, 29 Jul 2025 15:46:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44724825</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=44724825</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44724825</guid></item><item><title><![CDATA[New comment by somebodythere in "The natural diamond industry is getting rocked. Thank the lab-grown variety"]]></title><description><![CDATA[
<p>Maybe it's my engineer-brain talking, but "lab-grown" actually biases me towards the diamonds. Feels precise and futuristic.</p>
]]></description><pubDate>Sun, 27 Jul 2025 00:15:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44697926</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=44697926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44697926</guid></item><item><title><![CDATA[New comment by somebodythere in "There are no new ideas in AI only new datasets"]]></title><description><![CDATA[
<p>I don't know if it matters. Even if the best we can do is get really good at interpolating between solutions to cognitive tasks on the data manifold, the only economically useful human labor left asymptotes toward frontier work; work that only a single-digit percentage of people can actually perform.</p>
]]></description><pubDate>Mon, 30 Jun 2025 22:05:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44428410</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=44428410</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44428410</guid></item><item><title><![CDATA[New comment by somebodythere in "Claude 4"]]></title><description><![CDATA[
<p>My guess is that they did RLVR post-training for SWE tasks, and a smaller model can undergo more RL steps for the same amount of computation.</p>
]]></description><pubDate>Thu, 22 May 2025 17:48:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44064638</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=44064638</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44064638</guid></item><item><title><![CDATA[New comment by somebodythere in "The unreasonable effectiveness of an LLM agent loop with tool use"]]></title><description><![CDATA[
<p>I see what you are getting at. My point is that if you train an agent and a verifier/governor together based on rewards from e.g. RLVR, the system (agent + governor) is what will reward hack. OpenAI demonstrated this in their "Learning to Reason with CoT" blog post, where they showed that using a model to detect and punish strings associated with reward hacking in the CoT just led the model to reward hack in ways that were harder to detect. Stacking higher- and higher-order verifiers maybe buys you time, but it also increases false-negative rates, and reward hacking is a stable attractor for the system.</p>
]]></description><pubDate>Fri, 16 May 2025 18:56:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44008741</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=44008741</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44008741</guid></item><item><title><![CDATA[New comment by somebodythere in "The unreasonable effectiveness of an LLM agent loop with tool use"]]></title><description><![CDATA[
<p>Because if the agent and governor are trained together, the shared reward function will corrupt the governor.</p>
]]></description><pubDate>Fri, 16 May 2025 00:41:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44000762</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=44000762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44000762</guid></item><item><title><![CDATA[New comment by somebodythere in "AI 2027"]]></title><description><![CDATA[
<p>I took your original post to mean that AI researchers' and AI safety researchers' expectation of AGI arrival has been slipping towards the future as AI advances fail to materialize! It's just, AI advances <i>have</i> been materializing, consistently and rapidly, and expert timelines <i>have</i> been shortening commensurately.<p>You may argue that the trendline of these expectations is moving in the wrong direction and <i>should</i> get longer with time, but that's not immediately falsifiable and you have not provided arguments to that effect.</p>
]]></description><pubDate>Fri, 04 Apr 2025 18:29:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=43586121</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=43586121</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43586121</guid></item><item><title><![CDATA[New comment by somebodythere in "AI 2027"]]></title><description><![CDATA[
<p>AGI timelines have been steadily decreasing over time: <a href="https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/" rel="nofollow">https://www.metaculus.com/questions/5121/date-of-artificial-...</a> (switch to all-time chart)</p>
]]></description><pubDate>Fri, 04 Apr 2025 17:03:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=43585144</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=43585144</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43585144</guid></item><item><title><![CDATA[New comment by somebodythere in "AI 2027"]]></title><description><![CDATA[
<p>Did you see the supplemental material that explains how they arrived at their timelines/capabilities forecasts? <a href="https://ai-2027.com/research" rel="nofollow">https://ai-2027.com/research</a></p>
]]></description><pubDate>Fri, 04 Apr 2025 16:16:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=43584554</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=43584554</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43584554</guid></item><item><title><![CDATA[New comment by somebodythere in "The sins of the 90s: Questioning a puzzling claim about mass surveillance"]]></title><description><![CDATA[
<p>Federal interests can easily tell the local prosecutor "hey, don't prosecute this, it risks setting bad precedent".</p>
]]></description><pubDate>Tue, 29 Oct 2024 21:40:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=41989696</link><dc:creator>somebodythere</dc:creator><comments>https://news.ycombinator.com/item?id=41989696</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41989696</guid></item></channel></rss>