<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: thepasch</title><link>https://news.ycombinator.com/user?id=thepasch</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 14 May 2026 15:16:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=thepasch" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by thepasch in "The US is winning the AI race where it matters most: commercialization"]]></title><description><![CDATA[
<p>Article title: “The US is winning the AI Race”<p>Article content: “The US are capitalizing on AI the best”<p>A <i>lot</i> of assumptions there that no one can actually verify as true right now. If commercialization into rent-seeking SaaS landscapes is the endgame, then yeah, the US is winning the AI race. If individualization, local LLMs, and consumer hardware are the endgame, China is winning the AI race. If it’s something entirely different - if LLMs are the wall and research is what grants the next breakthrough, or if compute and memory requirements take a dive, or whatever; then we have no idea who’s winning the race because that stuff is mostly happening behind closed doors.</p>
]]></description><pubDate>Wed, 13 May 2026 14:17:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=48122233</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=48122233</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48122233</guid></item><item><title><![CDATA[New comment by thepasch in "LLMorphism: When humans come to see themselves as language models"]]></title><description><![CDATA[
<p>That’s got nothing at all to do with LLMs or “LLMorphism” though.</p>
]]></description><pubDate>Mon, 11 May 2026 13:34:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=48094808</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=48094808</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48094808</guid></item><item><title><![CDATA[New comment by thepasch in "LLMorphism: When humans come to see themselves as language models"]]></title><description><![CDATA[
<p>This paper introduces a term and instantly defines it as a definitely biased thing that is definitely happening, then spends its entirety arguing against the strawman it built itself. Not a single sentence is spent actually <i>arguing</i> with the idea or any of its points (other than the “partial similarities” paragraph on page, well, I just realized the pages aren’t even numbered).<p>In general, the terms “LLM-like” and “human-like” are used all over the place, and in contrast with each other, but they’re never actually <i>defined</i>. It all just seems more vibes-based than anything else.<p>And “treating the human cognitive process like it’s similar to the LLM cognitive process might lead to a society where epistemics turns into a discipline where plausibility is an acceptable substitute for empiricism” has got to be one of the most ridiculous notions I’ve ever read in a paper (ctrl+F “fifth pathway is epistemic” for the exact quote).</p>
]]></description><pubDate>Sun, 10 May 2026 11:13:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=48082903</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=48082903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48082903</guid></item><item><title><![CDATA[New comment by thepasch in "GPT-5.5 Price Increase: What It Costs"]]></title><description><![CDATA[
<p>With how much vendor harnesses are now actively steering the agent with their own instructions on top of user prompts, I think it’d be super interesting to see a comparison of one of the <i>already tested</i> models - so Opus 4.7 or GPT-5.5 - across a range of harnesses that aren’t their native ones. OpenCode, Pi, Hermes, Kilo Code. The most popular coding-focused harnesses, basically.</p>
]]></description><pubDate>Fri, 08 May 2026 16:08:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=48065098</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=48065098</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48065098</guid></item><item><title><![CDATA[New comment by thepasch in "Mark Cuban: OpenAI Will Never Return the $1T It's Investing [video]"]]></title><description><![CDATA[
<p>> Open-weight models aren't going to be free forever.<p>The ones that are already released are, and they're already very good for most purposes and can be fine-tuned indefinitely, including months or years down the line when processes have been optimized and things aren't as compute-heavy as they are now.</p>
]]></description><pubDate>Wed, 06 May 2026 09:22:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=48034125</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=48034125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48034125</guid></item><item><title><![CDATA[New comment by thepasch in "Show HN: The Cat Is Under Mayonnaise – Modifying LLM Behavior Without Retraining"]]></title><description><![CDATA[
<p>What distinguishes this from the likes of LoRA or ControlNet? Particularly Houlsby?<p>There is zero reference to prior art I could find anywhere in the repo. Unless I'm missing something substantial, this is nothing new, neither conceptually, nor practically.</p>
]]></description><pubDate>Mon, 04 May 2026 15:39:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=48010095</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=48010095</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48010095</guid></item><item><title><![CDATA[New comment by thepasch in "Claude.ai unavailable and elevated errors on the API"]]></title><description><![CDATA[
<p>> Would I theoretically have a more stable harness backing my usage?<p>If you don’t mind an opinionated harness that asks for a pretty specific workflow, but one that works well, use OpenCode.<p>If you want to spread your wings and feel the sweet kiss of freedom, use Pi.</p>
]]></description><pubDate>Tue, 28 Apr 2026 20:18:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47940110</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47940110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47940110</guid></item><item><title><![CDATA[New comment by thepasch in "I bought Friendster for $30k – Here's what I'm doing with it"]]></title><description><![CDATA[
<p>> 1. Make it QR code scanning instead of tapping so it can be a PWA.<p>Misses the point completely. The entire idea is that this enforces in-person meetings, which QR codes do not.</p>
]]></description><pubDate>Mon, 27 Apr 2026 00:34:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47916413</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47916413</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47916413</guid></item><item><title><![CDATA[New comment by thepasch in "I cancelled Claude: Token issues, declining quality, and poor support"]]></title><description><![CDATA[
<p>I’ve started co-opting it specifically in situations where someone claims something untrue that is both easy to verify <i>and</i> stated confidently, but also ostensibly <i>isn’t</i> intentionally spreading misinformation.</p>
]]></description><pubDate>Mon, 27 Apr 2026 00:29:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47916385</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47916385</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47916385</guid></item><item><title><![CDATA[New comment by thepasch in "I cancelled Claude: Token issues, declining quality, and poor support"]]></title><description><![CDATA[
<p>> They changed it do all of the changes in a virtual cloud environment, then dump the final result at the end of the response.<p>That’s a hallucination. All they did was hide thinking by default. A quick Google search should easily teach you how to turn it back on (I literally have it enabled in my harness).</p>
]]></description><pubDate>Fri, 24 Apr 2026 19:57:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47895052</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47895052</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47895052</guid></item><item><title><![CDATA[New comment by thepasch in "NSA is using Anthropic's Mythos despite blacklist"]]></title><description><![CDATA[
<p>There’s definitely a ceiling for what LLMs are capable of, and I think aerospace engineering might just currently be it, haha.</p>
]]></description><pubDate>Wed, 22 Apr 2026 11:51:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47862261</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47862261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47862261</guid></item><item><title><![CDATA[New comment by thepasch in "NSA is using Anthropic's Mythos despite blacklist"]]></title><description><![CDATA[
<p>It depends on how you review. In an orchestrated per-task review workflow with clearly defined acceptance criteria and implementation requirements, using anything other than Sonnet (handed those criteria and requirements) hasn’t really led to much improvement, but it drives up usage and takes longer. I even tried Haiku, but, yeah, Haiku is just not viable for review, even tightly scoped, lol.<p>Siccing Sonnet on a codebase or PR without guidance does indeed lead to worse results than using Opus, though.</p>
]]></description><pubDate>Mon, 20 Apr 2026 16:14:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47836379</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47836379</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47836379</guid></item><item><title><![CDATA[New comment by thepasch in "AI writes code 100x faster – why hasn't productivity?"]]></title><description><![CDATA[
<p>Because the code was never the hard part?</p>
]]></description><pubDate>Sat, 18 Apr 2026 23:10:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47820330</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47820330</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47820330</guid></item><item><title><![CDATA[New comment by thepasch in "Claude Code Opus 4.7 keeps checking on malware"]]></title><description><![CDATA[
<p>> I do see big problems around motivation of the next generation of engineers to keep looking under the hood if avoiding it is becoming so easy, but you should, individually, arguably feel more enabled to do so than ever.<p>This is what gets me every single time. I <i>genuinely</i> don’t think this is a hard realization to come to, and yet, the vast majority of arguments from <i>both</i> sides of the aisle, both proponents and antis, always assume that EITHER you do everything yourself, OR you have the AI do everything for you. If you use AI, you’re DOOMED to never think critically about anything anyone ever tells you ever again. If you don’t, you’re an idiot, because everyone else is using it, and skills and experience no longer matter because everyone can now do everything.<p>And this is <i>on HN</i>, too; supposedly, a site where experienced engineers, developers, and builders converge; the exact kind of demographic you’d <i>expect</i> to understand such a thing as nuance. And yet, your comment is one of very few. There’s someone RIGHT HERE, a few comments down, saying, verbatim, “it’s a solution engine not a curiosity engine. Getting effortless answers at every turn is the opposite of curiosity.” Treating curiosity as the end rather than the means, as if I stop being a curious person once I find an answer to a question I’ve been asking myself, or as if curiosity is some sort of “temporary status effect” that an answer/solution “consumes.”<p>And it seems to be worse than just “no one’s thought it through properly.” I’ve literally had someone show a fundamental incapability <i>to understand the concept</i>. I spent a non-trivial amount of effort writing out three comments with several paragraphs about how knowing your knowns and unknowns, and the fact that you have unknown unknowns, is the most important thing in <i>any</i> project, not just when it comes to AI. That these tools aren’t just <i>doers</i>, but also <i>searchers</i>. 
That they’re pretty much the best rubber ducky that’s ever been created, and that I argue a rubber ducky is exactly what you should be using in any context where you aren’t having it automate trivial and testable work. The guy refused to read any of it and, after three walls of text, continued claiming I’m “advocating for the LLM to guide me.” There is some sort of deeply instinctive and intrinsically defensive reflex that a lot of people seem to immediately collapse into when the topic comes up, and it seems to seriously impair the ability to acknowledge nuance or concede a single fraction of an inch. It’s baffling.</p>
]]></description><pubDate>Sat, 18 Apr 2026 12:57:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47815553</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47815553</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47815553</guid></item><item><title><![CDATA[New comment by thepasch in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>They also sometimes flag stuff in their reasoning and then think themselves out of mentioning it in the response, when it would actually have been a very welcome flag.</p>
]]></description><pubDate>Thu, 16 Apr 2026 16:20:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47795690</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47795690</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47795690</guid></item><item><title><![CDATA[New comment by thepasch in "AI-assisted cognition endangers human development?"]]></title><description><![CDATA[
<p>AI-<i>assisted</i>, I can see. I believe it doesn’t have to be that way, though. If you use AI as a <i>grounding tool</i> - essentially something that can take your stream of consciousness and parse it into a series of concrete and pointed search terms to <i>do real-time research with</i> instead of falling back on what’s in the weights - then it’s honestly hard to think of a technology in the history of the species with the potential to be more useful - it gives you much more direct access to both your unknown unknowns <i>and</i> your unknown knowns.<p>That is, of course, provided that you pay attention and make sure <i>it actually does research</i>. In their current state, LLMs are practically useless for this purpose for the vast majority of users, as no one knows how they work, what to watch out for, what the failure modes look like, and how to keep nonsense apart from facts when both are presented with an equal amount of conviction. That’s not a user problem, it’s an education problem.</p>
]]></description><pubDate>Wed, 15 Apr 2026 19:06:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47783716</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47783716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47783716</guid></item><item><title><![CDATA[New comment by thepasch in "Anthropic's rise is giving some OpenAI investors second thoughts"]]></title><description><![CDATA[
<p>> Jai Das, president of investment firm Sapphire Ventures (who has no stake in either company), told the FT he saw OpenAI as “the Netscape of AI,” a reference to the once-dominant browser that was overtaken by Microsoft and eventually absorbed by AOL.<p>One can only pray and hope, I’d say. May they be absorbed by a company with just as much staying power as AOL.</p>
]]></description><pubDate>Wed, 15 Apr 2026 18:51:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47783511</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47783511</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47783511</guid></item><item><title><![CDATA[New comment by thepasch in "Stop Flock"]]></title><description><![CDATA[
<p>Actually, give them small rotors - then they can even move and aim their guns at things!</p>
]]></description><pubDate>Wed, 15 Apr 2026 10:37:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47777173</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47777173</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47777173</guid></item><item><title><![CDATA[New comment by thepasch in "Ask HN: Preferred pricing model for sound effects libraries?"]]></title><description><![CDATA[
<p>I’m more of a prosumer than a professional, but when I look for sounds, I look for individual ones; never for packs. What I’d appreciate more than anything else is the <i>choice</i> of either buying individual sounds for smaller money, or load up on a sub or credits if I have more of a bulk need.<p>Basically, look at FL Cloud and do exactly what they’re doing, haha. Image-Line is the prime example of a company worth trusting, and they get to reap the rewards of that trust as a result.</p>
]]></description><pubDate>Tue, 14 Apr 2026 11:22:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47764133</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47764133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47764133</guid></item><item><title><![CDATA[New comment by thepasch in "Show HN: VibeDrift – Measure drift in AI-generated codebases"]]></title><description><![CDATA[
<p>What does this offer over a decent orchestration layer and a… prompt?</p>
]]></description><pubDate>Tue, 14 Apr 2026 11:16:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47764078</link><dc:creator>thepasch</dc:creator><comments>https://news.ycombinator.com/item?id=47764078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47764078</guid></item></channel></rss>