<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: becquerel</title><link>https://news.ycombinator.com/user?id=becquerel</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 16:47:57 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=becquerel" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by becquerel in "AI assistance when contributing to the Linux kernel"]]></title><description><![CDATA[
<p>He's been vibecoding some things himself, on one of his scuba projects. You could take people as actually believing in the things they do and say.</p>
]]></description><pubDate>Sat, 11 Apr 2026 11:50:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47729772</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=47729772</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47729772</guid></item><item><title><![CDATA[New comment by becquerel in "Claude Code Found a Linux Vulnerability Hidden for 23 Years"]]></title><description><![CDATA[
<p>Are you aware of how productivity has increased over the past century in general? That didn't lead to 100x wage increases or more free time. Labour is a market commodity and follows market rules. Increased productivity means more gets done in less time; it doesn't mean you spend less time working.</p>
]]></description><pubDate>Sun, 05 Apr 2026 11:35:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47648341</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=47648341</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47648341</guid></item><item><title><![CDATA[New comment by becquerel in "Erdos 281 solved with ChatGPT 5.2 Pro"]]></title><description><![CDATA[
<p>People like checking items off of lists.</p>
]]></description><pubDate>Sun, 18 Jan 2026 08:06:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46665788</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=46665788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46665788</guid></item><item><title><![CDATA[New comment by becquerel in "Erdos 281 solved with ChatGPT 5.2 Pro"]]></title><description><![CDATA[
<p>You get AIs to prove their code is correct in precisely the same ways you get humans to prove their code is correct. You make them demonstrate it through tests or evidence (screenshots, logs of successful runs).</p>
]]></description><pubDate>Sun, 18 Jan 2026 08:05:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46665783</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=46665783</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46665783</guid></item><item><title><![CDATA[New comment by becquerel in "“Erdos problem #728 was solved more or less autonomously by AI”"]]></title><description><![CDATA[
<p>Tao's broad project, which he has spoken about a few times, is for mathematics to move beyond the current game of solving individual theorems to being able to make statements about broad categories of problems. So not 'X property is true for this specific magma' but 'X property is true for all possible magmas', as an example I just came up with. He has experimented with this via crowdsourcing problems in a given domain on GitHub before, and I think the implications of how to use AI here are obvious.</p>
]]></description><pubDate>Sat, 10 Jan 2026 09:10:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46564097</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=46564097</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46564097</guid></item><item><title><![CDATA[New comment by becquerel in "Opus 4.5 is not the normal AI agent experience that I have had thus far"]]></title><description><![CDATA[
<p>Does a system being deterministic really matter if it's complex enough you can't predict it? How many stories are there about 'you need to do it in this specific way, and not this other specific way, to get 500x better codegen'?</p>
]]></description><pubDate>Wed, 07 Jan 2026 16:36:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46528503</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=46528503</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46528503</guid></item><item><title><![CDATA[New comment by becquerel in "Opus 4.5 is not the normal AI agent experience that I have had thus far"]]></title><description><![CDATA[
<p>If you haven't tried it yet, OpenCode is quite good.</p>
]]></description><pubDate>Wed, 07 Jan 2026 16:34:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46528470</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=46528470</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46528470</guid></item><item><title><![CDATA[New comment by becquerel in "LLM Year in Review"]]></title><description><![CDATA[
<p>It's an extension of how I've noticed that AIs will generally write very buttoned-down, cross-the-ts-and-dot-the-is code. Everything gets commented, every method has a try-catch with a log statement, every return type is checked, etc. I think it's a consequence of them not feeling fatigue. These things (accessibility included) are all things humans generally know they 'should' do, but there never seems to be enough time in the day; we'll get to it later when we're less tired. But the ghost in the machine doesn't care. It operates at the same level all the time.</p>
]]></description><pubDate>Sat, 20 Dec 2025 08:58:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46334625</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=46334625</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46334625</guid></item><item><title><![CDATA[New comment by becquerel in "Three Years from GPT-3 to Gemini 3"]]></title><description><![CDATA[
<p>You can recognise that the technology has a poor user interface and is fraught with subtleties without denying its underlying capabilities. People misuse good technology all the time. It's kind of what users do. I would not expect a radically new form of computing which is under five years old to be intuitive to most people.</p>
]]></description><pubDate>Tue, 25 Nov 2025 09:07:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46043905</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=46043905</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46043905</guid></item><item><title><![CDATA[New comment by becquerel in "Gemini 3"]]></title><description><![CDATA[
<p>The industry is still seeing how far they can take transformers. We've yet to reach a dollar value where it stops being worth pumping money into them.</p>
]]></description><pubDate>Tue, 18 Nov 2025 19:05:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45970518</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=45970518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45970518</guid></item><item><title><![CDATA[New comment by becquerel in "Cerebras Code now supports GLM 4.6 at 1000 tokens/sec"]]></title><description><![CDATA[
<p>Because I'm getting paid to.</p>
]]></description><pubDate>Sat, 08 Nov 2025 08:25:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=45855102</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=45855102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45855102</guid></item><item><title><![CDATA[New comment by becquerel in "The Case That A.I. Is Thinking"]]></title><description><![CDATA[
<p>A system having terminal failure modes doesn't inherently negate the rest of the system. Human intelligences fall prey to plenty of similarly bad behaviours like addiction.</p>
]]></description><pubDate>Tue, 04 Nov 2025 07:45:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=45808427</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=45808427</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45808427</guid></item><item><title><![CDATA[New comment by becquerel in "Pig lung transplanted into a human"]]></title><description><![CDATA[
<p>That series messed me up when I was younger.</p>
]]></description><pubDate>Sun, 31 Aug 2025 07:05:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45081019</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=45081019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45081019</guid></item><item><title><![CDATA[New comment by becquerel in "LLMs tell bad jokes because they avoid surprises"]]></title><description><![CDATA[
<p>Yeah. To me it seems very intuitive that humor is one of those emergent capabilities that just falls out of models getting more generally intelligent. Anecdotally this has been proven true so far for me. Gemini 2.5 has made me laugh several times at this point, and did so when it was intending to be funny (old models were only funny unintentionally).</p><p>2.5 is also one of the few models I've found that will 'play along' with jokes set up in the user prompt. I once asked it what IDE modern necromancers were using since I'd been out of the game for a while, and it played it very straight. Other models felt they had to acknowledge the scenario as fanciful, only engaging with it under an explicit veil of make-believe.</p>
]]></description><pubDate>Sun, 17 Aug 2025 08:53:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44930012</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=44930012</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44930012</guid></item><item><title><![CDATA[New comment by becquerel in "Ask HN: Go deep into AI/LLMs or just use them as tools?"]]></title><description><![CDATA[
<p>IIRC the guy who makes Aider (Paul Gauthier) has some videos along these lines, of him working on Aider while using Aider (how meta).</p>
]]></description><pubDate>Sat, 24 May 2025 15:17:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44081665</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=44081665</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44081665</guid></item><item><title><![CDATA[New comment by becquerel in "Adobe deletes Bluesky posts after backlash"]]></title><description><![CDATA[
<p>If everyone hated AI images, nobody would be creating them.</p>
]]></description><pubDate>Sat, 12 Apr 2025 05:28:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=43661593</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=43661593</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43661593</guid></item><item><title><![CDATA[New comment by becquerel in "Adobe deletes Bluesky posts after backlash"]]></title><description><![CDATA[
<p>It crushes the orphans very quickly, and on command, and allows anyone to crush orphans from the comfort of their own home. Most people are low-taste enough that they don't really care about the difference between hand-crushed orphans and artisanal hand-crushed orphans.</p>
]]></description><pubDate>Sat, 12 Apr 2025 05:24:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=43661582</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=43661582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43661582</guid></item><item><title><![CDATA[New comment by becquerel in "LLM Benchmark for 'Longform Creative Writing'"]]></title><description><![CDATA[
<p>You beat me to that idea, haha. I was making an aider for fiction, but your project looks way more useful than what I had.</p>
]]></description><pubDate>Thu, 10 Apr 2025 08:56:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43642060</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=43642060</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43642060</guid></item><item><title><![CDATA[New comment by becquerel in "No elephants: Breakthroughs in image generation"]]></title><description><![CDATA[
<p>All labor is bad.</p>
]]></description><pubDate>Tue, 08 Apr 2025 18:33:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=43625054</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=43625054</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43625054</guid></item><item><title><![CDATA[New comment by becquerel in "A succinct email in just a subject line"]]></title><description><![CDATA[
<p>If they do, then I would agree with them.</p>
]]></description><pubDate>Tue, 11 Mar 2025 07:36:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=43330076</link><dc:creator>becquerel</dc:creator><comments>https://news.ycombinator.com/item?id=43330076</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43330076</guid></item></channel></rss>