<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dcre</title><link>https://news.ycombinator.com/user?id=dcre</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 18:26:05 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dcre" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dcre in "Where's Ed: Anthropic Told Court $5B but Public $19B"]]></title><description><![CDATA[
<p>Weird is far too generous. It’s a travesty of thinking.</p>
]]></description><pubDate>Fri, 15 May 2026 12:34:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=48147833</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=48147833</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48147833</guid></item><item><title><![CDATA[New comment by dcre in "Where's Ed: Anthropic Told Court $5B but Public $19B"]]></title><description><![CDATA[
<p>It’s not deviating up and down. It’s deviating upward. It is necessarily going to wildly overstate the previous 12 months’ revenue while wildly understating the next 12 months’ revenue. There is no way to describe exponential growth in a single number that doesn’t do this. This is why adults with a brain look at the series.</p>
]]></description><pubDate>Fri, 15 May 2026 12:33:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=48147820</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=48147820</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48147820</guid></item><item><title><![CDATA[New comment by dcre in "How fast is autonomous AI cyber capability advancing?"]]></title><description><![CDATA[
<p>A new Mythos checkpoint improves significantly on the previous one (and beats GPT-5.5-Cyber) on this benchmark.</p>
]]></description><pubDate>Wed, 13 May 2026 16:19:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=48123961</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=48123961</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48123961</guid></item><item><title><![CDATA[How fast is autonomous AI cyber capability advancing?]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.aisi.gov.uk/blog/how-fast-is-autonomous-ai-cyber-capability-advancing">https://www.aisi.gov.uk/blog/how-fast-is-autonomous-ai-cyber-capability-advancing</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48123960">https://news.ycombinator.com/item?id=48123960</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 13 May 2026 16:19:20 +0000</pubDate><link>https://www.aisi.gov.uk/blog/how-fast-is-autonomous-ai-cyber-capability-advancing</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=48123960</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48123960</guid></item><item><title><![CDATA[New comment by dcre in "Zig → Rust porting guide"]]></title><description><![CDATA[
<p>Most of Bun’s code is already written by LLMs. If you feel that way, it’s already been too late for a while. Furthermore, we’re talking about a million line port done in a couple of days. The question of whether it’s worth the time looks extremely different if done by hand. It would take a year.</p>
]]></description><pubDate>Tue, 05 May 2026 12:43:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=48021734</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=48021734</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48021734</guid></item><item><title><![CDATA[New comment by dcre in "Uber Torches 2026 AI Budget on Claude Code in Four Months"]]></title><description><![CDATA[
<p>Thank you for the link.</p>
]]></description><pubDate>Fri, 01 May 2026 17:58:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47977890</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47977890</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47977890</guid></item><item><title><![CDATA[New comment by dcre in "Uber Torches 2026 AI Budget on Claude Code in Four Months"]]></title><description><![CDATA[
<p>While this is a fundamentally stupid story to begin with, it was at least reported somewhat better in other venues. The original report came from The Information, and at least this Yahoo Finance[0] writeup mentioned that. This article has very little content and no sourcing.<p>[0]: <a href="https://finance.yahoo.com/sectors/technology/articles/ubers-anthropic-ai-push-hits-223109852.html" rel="nofollow">https://finance.yahoo.com/sectors/technology/articles/ubers-...</a></p>
]]></description><pubDate>Fri, 01 May 2026 17:58:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47977886</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47977886</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47977886</guid></item><item><title><![CDATA[New comment by dcre in "Waymo in Portland"]]></title><description><![CDATA[
<p>Is that supposed to be good?</p>
]]></description><pubDate>Tue, 28 Apr 2026 19:05:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47939032</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47939032</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47939032</guid></item><item><title><![CDATA[New comment by dcre in "The Moat or the Commons"]]></title><description><![CDATA[
<p>Too annoying; didn’t read.</p>
]]></description><pubDate>Tue, 28 Apr 2026 03:14:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47930102</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47930102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47930102</guid></item><item><title><![CDATA[New comment by dcre in "GPT-5.5"]]></title><description><![CDATA[
<p>My question is why AGI is required for these companies to be viable, i.e., why these companies cannot be viable in the case where AGI is <i>not</i> achieved. A response about what happens when AGI <i>is</i> achieved does not address that.</p>
]]></description><pubDate>Mon, 27 Apr 2026 13:13:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47921152</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47921152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47921152</guid></item><item><title><![CDATA[New comment by dcre in "Simulacrum of Knowledge Work"]]></title><description><![CDATA[
<p>I think it’s true that we were able to establish trust and produce good work without verifying every detail — what I’m suggesting is that signals of that kind were not a very important factor. And code smells still work!</p>
]]></description><pubDate>Sun, 26 Apr 2026 21:25:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47914695</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47914695</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47914695</guid></item><item><title><![CDATA[New comment by dcre in "Simulacrum of Knowledge Work"]]></title><description><![CDATA[
<p>It's already out of date because it makes no sense. If it's true that superficial signals of quality were once somehow good enough to keep the entire economy on the rails (it's not true), surely you can have an LLM look at a given piece of work and extract comparably useful signals of quality or effort.</p>
]]></description><pubDate>Sun, 26 Apr 2026 15:20:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47911030</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47911030</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47911030</guid></item><item><title><![CDATA[New comment by dcre in "GPT-5.5"]]></title><description><![CDATA[
<p>This does not answer my question.</p>
]]></description><pubDate>Fri, 24 Apr 2026 16:05:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47892083</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47892083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47892083</guid></item><item><title><![CDATA[New comment by dcre in "GPT-5.5"]]></title><description><![CDATA[
<p>Why is AGI required to make the investments work out?</p>
]]></description><pubDate>Fri, 24 Apr 2026 01:40:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47884503</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47884503</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47884503</guid></item><item><title><![CDATA[New comment by dcre in "GPT-5.5"]]></title><description><![CDATA[
<p>SOTA models on medium are probably still better than free or cheap models, but you should experiment.</p>
]]></description><pubDate>Fri, 24 Apr 2026 01:22:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47884403</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47884403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47884403</guid></item><item><title><![CDATA[New comment by dcre in "AI chatbots could be making you stupider"]]></title><description><![CDATA[
<p>I recommend people look at the actual study and think about how representative the subjects are, the tasks involved (SAT essay writing), and the way LLMs are being used.<p><a href="https://arxiv.org/abs/2506.08872" rel="nofollow">https://arxiv.org/abs/2506.08872</a><p>To be concrete, this is taking a task in isolation that LLMs can do much better than humans (writing garbage essays) and using LLMs to do that task. In the real world, tasks have parts and they exist in a larger context. When we use LLMs for one part of a task, there are other things we're doing that the LLM is not helping with. If you compared people doing arithmetic by hand and with a calculator, you would also see very big differences in how active their brains are. But it's not anyone's job to add up numbers. Adding up numbers is a subtask of a subtask in someone's job.</p>
]]></description><pubDate>Mon, 20 Apr 2026 18:48:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47838824</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47838824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47838824</guid></item><item><title><![CDATA[New comment by dcre in "George Orwell Predicted the Rise of "AI Slop" in Nineteen Eighty-Four"]]></title><description><![CDATA[
<p>He wasn’t predicting slop; he was describing mass culture, which already existed when he was writing.</p>
]]></description><pubDate>Fri, 17 Apr 2026 02:15:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47801833</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47801833</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47801833</guid></item><item><title><![CDATA[New comment by dcre in "Claude Opus 4.7 Model Card"]]></title><description><![CDATA[
<p>LLMs can tell you exactly how to acquire the materials and manufacture them. They might even come up with novel formulations that rely on substances that are easier to get. There might be information about this stuff online, but LLMs are much better than random idiots at adapting that information to their actual situation.<p>On top of LLMs reducing the cost and difficulty, the other reason biological and chemical weapons are such a worry is their asymmetric character — they are much, much easier and cheaper to produce and deploy than they are to defend against.</p>
]]></description><pubDate>Thu, 16 Apr 2026 16:17:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47795637</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47795637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47795637</guid></item><item><title><![CDATA[New comment by dcre in "AI-assisted cognition endangers human development?"]]></title><description><![CDATA[
<p>Sure! I don't mean they're all good. I just mean that it can't be cognitive offloading itself that is the problem, but the particular character of it.</p>
]]></description><pubDate>Thu, 16 Apr 2026 15:08:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47794307</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47794307</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47794307</guid></item><item><title><![CDATA[New comment by dcre in "AI-assisted cognition endangers human development?"]]></title><description><![CDATA[
<p>I've never seen an argument like this that, if true, wouldn't also apply to the cognitive offloading we do by relying on culture, by working with others, or by working with the artifacts built by others.</p>
]]></description><pubDate>Wed, 15 Apr 2026 19:49:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47784295</link><dc:creator>dcre</dc:creator><comments>https://news.ycombinator.com/item?id=47784295</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47784295</guid></item></channel></rss>