<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: thorum</title><link>https://news.ycombinator.com/user?id=thorum</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 09 May 2026 03:09:07 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=thorum" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by thorum in "Google Chrome silently installs a 4 GB AI model on your device without consent"]]></title><description><![CDATA[
<p>You’re probably right in a literal technical sense, but a very large number of people (maybe most?) would choose “no” if properly informed and asked for consent, and lots of people are morally opposed even in principle to downloading a large AI model onto their computer. I’m not one of them, but they’re out there. So in a cultural sense, it is different.</p>
]]></description><pubDate>Tue, 05 May 2026 20:56:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=48028433</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=48028433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48028433</guid></item><item><title><![CDATA[The physics slop that YouTube wants me to make [video]]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.youtube.com/watch?v=Cd5EHfRerGI">https://www.youtube.com/watch?v=Cd5EHfRerGI</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47902415">https://news.ycombinator.com/item?id=47902415</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 25 Apr 2026 15:58:53 +0000</pubDate><link>https://www.youtube.com/watch?v=Cd5EHfRerGI</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=47902415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47902415</guid></item><item><title><![CDATA[New comment by thorum in "Google Flow Music"]]></title><description><![CDATA[
<p>The models are primitive right now, but we’re clearly heading toward “AI as sound synthesis, human as artist” - much like how producers currently use a DAW to assemble premade loops and sounds from Splice, but with the producer now able to prompt any sound, filter, or effect they can imagine into existence and then rearrange them into a song.</p><p>See for example Suno Studio, which is not very good in my opinion, but shows the direction they’re going.</p>
]]></description><pubDate>Fri, 24 Apr 2026 23:19:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47896999</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=47896999</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47896999</guid></item><item><title><![CDATA[New comment by thorum in "Does Gas Town 'steal' usage from users' LLM credits to improve itself?"]]></title><description><![CDATA[
<p>Isn’t this a permissions issue? Your “opt out” is using a GitHub access token that doesn’t allow it to happen.</p>
]]></description><pubDate>Wed, 15 Apr 2026 21:48:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47785747</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=47785747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47785747</guid></item><item><title><![CDATA[New comment by thorum in "Muse Spark: Scaling towards personal superintelligence"]]></title><description><![CDATA[
<p>I have the opposite experience: random HN/Reddit comments saying “this sucks” or “whoa this is a huge improvement” are the only benchmark that means anything. Standard benchmarks are all gamed and don’t capture the complexity of the real world.</p>
]]></description><pubDate>Wed, 08 Apr 2026 21:27:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47696505</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=47696505</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47696505</guid></item><item><title><![CDATA[New comment by thorum in "90% of Claude-linked output going to GitHub repos w <2 stars"]]></title><description><![CDATA[
<p>Stars have been useless as signals for project quality for a while. They’re mostly bought, at this point. I regularly see obviously vibe-coded nonsense projects on GitHub’s Trending page with 10,000 stars. I don’t believe 10,000 people have even cloned the repo, much less gotten any personal value from it. It’s meaningless.</p>
]]></description><pubDate>Wed, 25 Mar 2026 21:18:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47523406</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=47523406</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47523406</guid></item><item><title><![CDATA[New comment by thorum in "Goodbye to Sora"]]></title><description><![CDATA[
<p>Good day for Kling.</p>
]]></description><pubDate>Tue, 24 Mar 2026 23:25:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47511028</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=47511028</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47511028</guid></item><item><title><![CDATA[New comment by thorum in "Ape Coding [fiction]"]]></title><description><![CDATA[
<p>Ape thinking is a cognitive practice where a human deliberately solves problems with their own mind. Practitioners of ape thinking will typically author thoughts by thinking them with their own brain, using neurons and synapses.</p><p>The term was popularized when asking a computer to do it for you became the dominant form of cognition. "Ape thinking" first appeared in online communities as derogatory slang, referring to humans who were unable to outsource all their thinking to a computer. Despite the quick spread of asking a computer to do it for you, institutional inertia, affordability, and limitations in human complacency were barriers to universal adoption of the new technology.</p>
]]></description><pubDate>Sun, 01 Mar 2026 17:32:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47208751</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=47208751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47208751</guid></item><item><title><![CDATA[New comment by thorum in "The most-seen UI on the internet? Redesigning turnstile and challenge pages"]]></title><description><![CDATA[
<p>Their design approach wasn’t particularly unusual, so I’m not sure what that sentence means.</p><p>I do miss the days when technical reports were clear and concise. This one has some interesting information, but it’s buried under a mountain of empty AI-written bloat.</p>
]]></description><pubDate>Fri, 27 Feb 2026 23:35:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47187574</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=47187574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47187574</guid></item><item><title><![CDATA[New comment by thorum in "Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?"]]></title><description><![CDATA[
<p>AI for help figuring things out and Timeshift for when you accidentally break something. One reboot and it’s fixed.</p>
]]></description><pubDate>Sun, 08 Feb 2026 04:57:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46931422</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=46931422</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46931422</guid></item><item><title><![CDATA[New comment by thorum in "I miss thinking hard"]]></title><description><![CDATA[
<p>> but the number of problems requiring deep creative solutions feels like it is diminishing rapidly.</p><p>If anything, we have more intractable problems needing deep creative solutions than ever before. People are dying as I write this. We’ve got mass displacement, poverty, and political polarization. The education and healthcare systems are broken. Climate change marches on. Not to mention the social consequences of new technologies like AI (including the ones discussed in this post) that frankly no one knows what to do about.</p><p>The solution is indeed to work on bigger problems. If you can’t find any, look harder.</p>
]]></description><pubDate>Wed, 04 Feb 2026 08:48:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46883243</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=46883243</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46883243</guid></item><item><title><![CDATA[New comment by thorum in "Generative AI and Wikipedia editing: What we learned in 2025"]]></title><description><![CDATA[
<p>I’m honestly surprised LLMs are still screwing up citations. It does not feel like a harder task than building software or generating novel math proofs. In both those cases, of course, there is a verifier, but self-verification with “Does this text support this claim?” seems like it ought to be within the capabilities of a good reasoning model.</p><p>But as I understand the situation, even the major Deep Research systems still have this issue.</p>
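<p>The self-verification loop described above can be sketched roughly as follows. Note this is a minimal illustration, not any production system’s code: <code>verify_support</code> is a hypothetical stand-in for asking a reasoning model “Does this text support this claim?”, implemented here as a trivial keyword-overlap check purely so the sketch runs.</p>

```python
def verify_support(claim: str, source: str) -> bool:
    """Hypothetical stand-in for a reasoning-model call asking
    "Does this text support this claim?" - here a trivial
    keyword-overlap heuristic so the sketch is runnable."""
    claim_terms = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
    source_terms = {w.lower().strip(".,") for w in source.split()}
    # Treat the claim as supported if at least half of its
    # content words appear in the cited source text.
    overlap = len(claim_terms & source_terms)
    return overlap >= max(1, len(claim_terms) // 2)


def check_citations(pairs):
    """Return the (claim, source) pairs that fail verification,
    so a generation loop could retry, re-source, or drop them."""
    return [(claim, src) for claim, src in pairs if not verify_support(claim, src)]
```

<p>The point is the loop shape: every generated claim/citation pair goes back through a verifier before the output ships, and failures trigger a retry rather than reaching the reader.</p>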
]]></description><pubDate>Sun, 01 Feb 2026 07:22:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46844238</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=46844238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46844238</guid></item><item><title><![CDATA[New comment by thorum in "AGENTS.md outperforms skills in our agent evals"]]></title><description><![CDATA[
<p>The article presents AGENTS.md as something distinct from Skills, but it is actually a simplified instance of the same concept. Their AGENTS.md approach tells the AI where to find instructions for performing a task. That’s a Skill.</p><p>I expect the benefit is from better Skill design, specifically, minimizing the number of steps and decisions between the AI’s starting state and the correct information. Fewer transitions -> fewer chances for error to compound.</p>
]]></description><pubDate>Thu, 29 Jan 2026 22:10:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46817461</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=46817461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46817461</guid></item><item><title><![CDATA[New comment by thorum in "Gas Town's agent patterns, design bottlenecks, and vibecoding at scale"]]></title><description><![CDATA[
<p>Agree that planning time is the bottleneck, but</p><p>> 3 days</p><p>still seems slow! I’m asking what happens in 2028, when your entire project is 5-10 minutes of total agent runtime - the time actually spent writing code and implementing your plan. Trying to parallelize 10 minutes of work with a “town” of agents seems like unnecessary complexity.</p>
]]></description><pubDate>Fri, 23 Jan 2026 21:11:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46737982</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=46737982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46737982</guid></item><item><title><![CDATA[New comment by thorum in "Gas Town's agent patterns, design bottlenecks, and vibecoding at scale"]]></title><description><![CDATA[
<p>Am I wrong that this entire approach to agent design patterns is based on the assumption that agents are slow? Which yeah, is very true in January 2026, but we’ve seen that inference gets faster over time. When an agent can complete most tasks in 1 minute, or 1 second, parallel agents seem like the wrong direction. It’s not clear how this would be any better than a single Claude Code session (as “orchestrator”) running subagents (which already exist) one at a time.</p>
]]></description><pubDate>Fri, 23 Jan 2026 20:31:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46737476</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=46737476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46737476</guid></item><item><title><![CDATA[New comment by thorum in "Talking to LLMs has improved my thinking"]]></title><description><![CDATA[
<p>I agree that LLMs can be useful companions for thought when used correctly. I don’t agree that LLMs are good at “supplying clean verbal form” for vaguely expressed, half-formed ideas, or that this results in clearer thinking.</p><p>Most of the time, the LLM’s framing of my idea is more generic and superficial than what I was actually getting at. It looks good, but on closer inspection it often misses the point on some level.</p><p>There is a real danger, to the extent you allow yourself to accept the LLM’s version of your idea, that you will lose the originality and uniqueness that made the idea interesting in the first place.</p><p>I think the struggle to frame a complex idea, and the frustration you feel when the right framing eludes you, is actually where most of the value lies; the LLM cheat code that skips past this pain is not really a good thing.</p>
]]></description><pubDate>Fri, 23 Jan 2026 05:14:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46728696</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=46728696</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46728696</guid></item><item><title><![CDATA[New comment by thorum in "Nanolang: A tiny experimental language designed to be targeted by coding LLMs"]]></title><description><![CDATA[
<p>Your other comment sounded like you were interested in learning about how AI labs are applying RL to improve programming capability. If so, the DeepSeek R1 paper is a good introduction to the topic (maybe a bit out of date at this point, but very approachable). RL training works fine for low resource languages as long as you have tooling to verify outputs and enough compute to throw at the problem.</p>
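<p>The “verify outputs, reward what passes” idea can be sketched in a few lines. This is an illustrative toy, not any lab’s training code; the names <code>reward</code> and <code>test_command</code> are made up for the example. The key property is that the reward only needs a pass/fail verifier, which is why it works even for languages with little training data.</p>

```python
import os
import subprocess
import tempfile


def reward(candidate_program: str, test_command: list) -> float:
    """Illustrative verifier-based reward for RL on code:
    1.0 if the sampled program passes its verifier, else 0.0.
    Language-agnostic - any toolchain that can report pass/fail
    (compiler, test runner, interpreter) can serve as the verifier."""
    with tempfile.TemporaryDirectory() as workdir:
        path = os.path.join(workdir, "candidate.py")
        with open(path, "w") as f:
            f.write(candidate_program)
        # Run the verifier on the candidate; exit code 0 means "passed".
        result = subprocess.run(test_command + [path], capture_output=True)
        return 1.0 if result.returncode == 0 else 0.0
```

<p>In an actual RL loop, the policy samples many candidate programs per prompt and this scalar is what gets fed back to update the model.</p>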
]]></description><pubDate>Tue, 20 Jan 2026 09:50:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46689986</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=46689986</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46689986</guid></item><item><title><![CDATA[New comment by thorum in "Nanolang: A tiny experimental language designed to be targeted by coding LLMs"]]></title><description><![CDATA[
<p>Go read the DeepSeek R1 paper.</p>
]]></description><pubDate>Tue, 20 Jan 2026 07:59:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46689104</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=46689104</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46689104</guid></item><item><title><![CDATA[New comment by thorum in "Nanolang: A tiny experimental language designed to be targeted by coding LLMs"]]></title><description><![CDATA[
<p>Developed by Jordan Hubbard of NVIDIA (and FreeBSD).</p><p>My understanding/experience is that LLM performance in a language scales with how well the language is represented in the training data.</p><p>From that assumption, we might expect LLMs to actually do better with an existing language for which more training code is available, even if that language is more complex and seems like it should be “harder” to understand.</p>
]]></description><pubDate>Mon, 19 Jan 2026 23:35:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46686086</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=46686086</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46686086</guid></item><item><title><![CDATA[New comment by thorum in "The unbearable frustration of figuring out APIs"]]></title><description><![CDATA[
<p>I remember reading and hearing similar rants from programmers 15 years ago, long before LLMs. The author kept going and figured it out, and probably got some pride and enjoyment from finishing the project in spite of the frustrating moments. That’s what learning to code has always been like.</p>
]]></description><pubDate>Wed, 14 Jan 2026 20:36:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46622828</link><dc:creator>thorum</dc:creator><comments>https://news.ycombinator.com/item?id=46622828</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46622828</guid></item></channel></rss>