<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: anon373839</title><link>https://news.ycombinator.com/user?id=anon373839</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 00:52:17 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=anon373839" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by anon373839 in "Trinity Large Thinking"]]></title><description><![CDATA[
<p>Thanks for the tip! Hadn't seen that one.</p>
]]></description><pubDate>Thu, 02 Apr 2026 05:41:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47610380</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=47610380</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47610380</guid></item><item><title><![CDATA[New comment by anon373839 in "Trinity Large Thinking"]]></title><description><![CDATA[
<p>Bit of a tangent, but I'm pleased to see that Qwen 3.5 35B is tied with GPT-5.4 and just 2 points behind 4.6 Opus. That little model is so impressively capable and fast! I'm frequently still surprised that I have that level of capability and speed running locally on my laptop.</p>
]]></description><pubDate>Thu, 02 Apr 2026 05:30:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47610323</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=47610323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47610323</guid></item><item><title><![CDATA[New comment by anon373839 in "Ollama is now powered by MLX on Apple Silicon in preview"]]></title><description><![CDATA[
<p>They’re not far behind, unless you mean for “vibe coding”. And for probably 85% of queries that people use LLMs for, you can’t even really perceive the difference between frontier and local.</p>
]]></description><pubDate>Tue, 31 Mar 2026 20:52:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47593330</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=47593330</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47593330</guid></item><item><title><![CDATA[New comment by anon373839 in "Be careful: chatting with AI about your case is discoverable"]]></title><description><![CDATA[
<p>This is a really interesting and well-written case update/critique. I agree with the author that the judge's reliance on Anthropic's fine-print privacy policy does not satisfy the actual legal standard governing privilege. Or if it did, it would raise extremely thorny issues around all of the cloud-based technology products that lawyers and clients use every day.<p>That said, I note that the court's opinion specifically calls out Anthropic's practice of *training models on user data* as a reason why the defendant could not have expected confidentiality. I do not use these cloud models for anything important precisely because they are operated by companies, like Anthropic, that are completely untrustworthy.</p>
]]></description><pubDate>Sun, 29 Mar 2026 03:12:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47560074</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=47560074</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47560074</guid></item><item><title><![CDATA[New comment by anon373839 in "Mistral AI Releases Forge"]]></title><description><![CDATA[
<p>I think they are referring to “continued pretraining”.</p>
]]></description><pubDate>Wed, 18 Mar 2026 03:38:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47421373</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=47421373</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47421373</guid></item><item><title><![CDATA[New comment by anon373839 in "Open Weights isn't Open Training"]]></title><description><![CDATA[
<p>> "Open weights" borrows the legitimacy of open source<p>I don't really see how open-weights models need to borrow any legitimacy. They are valuable artifacts being given away that can be used, tested and repurposed forever. Fully open models like the OLMo series and Nvidia's Nemotron are much more valuable in some contexts, but they haven't quite cracked the level of performance that the best open-weights models are hitting. And I think that's why most startups are reaching for Chinese base LLMs when they want to tune custom models: the performance is better and they were never going to bother with pretraining anyway.</p>
]]></description><pubDate>Wed, 11 Mar 2026 05:54:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47332119</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=47332119</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47332119</guid></item><item><title><![CDATA[New comment by anon373839 in "Claude struggles to cope with ChatGPT exodus"]]></title><description><![CDATA[
<p>But the end results aren’t actually close. That is why frontier LLMs don’t know you need to drive your car to the car wash (until they are inevitably fine-tuned on this specific failure mode). I don’t think there is much true generalization happening with these models - more a game of whack-a-mole all the way down.</p>
]]></description><pubDate>Mon, 09 Mar 2026 02:07:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304046</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=47304046</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304046</guid></item><item><title><![CDATA[New comment by anon373839 in "Claude Sonnet 4.6"]]></title><description><![CDATA[
<p>Correct. Anthropic keeps pushing these weird sci-fi narratives to maintain some kind of mystique around their slightly-better-than-others commodity product. But Occam’s Razor is not dead.</p>
]]></description><pubDate>Tue, 17 Feb 2026 18:43:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47051278</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=47051278</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47051278</guid></item><item><title><![CDATA[New comment by anon373839 in "OpenClaw is what Apple intelligence should have been"]]></title><description><![CDATA[
<p>Exactly. Apple operates at a scale where it's very difficult to deploy this technology for its sexy applications. The tech is simply too broken and flawed at this point. (Whatever Apple does deploy, you can bet it will be heavily guardrailed.) With ~2.5 billion devices in active use, they can't take the Tesla approach of letting AI drive cars into fire trucks.</p>
]]></description><pubDate>Thu, 05 Feb 2026 02:36:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46894938</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46894938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46894938</guid></item><item><title><![CDATA[New comment by anon373839 in "Claude Code: connect to a local model when your quota runs out"]]></title><description><![CDATA[
<p>It's true that open models are a half-step behind the frontier, but I can't say that I've seen "sheer intelligence" from the models you mentioned. Just a couple of days ago Gemini 3 Pro was happily writing naive graph traversal code without any cycle detection or safety measures. If nothing else, I would have thought these models could nail basic algorithms by now?</p>
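<p>For what it's worth, the cycle detection I have in mind is nothing exotic — just tracking visited nodes. A minimal sketch (the names here are my own illustration, not the model's actual output):</p>

```typescript
// Minimal cycle-safe BFS over an adjacency-list graph.
// Without the `visited` set, a cycle like a -> b -> a recurses/loops forever.
type Graph = Map<string, string[]>;

function reachable(graph: Graph, start: string): Set<string> {
  const visited = new Set<string>([start]);
  const queue: string[] = [start];
  while (queue.length > 0) {
    const node = queue.shift()!;
    for (const next of graph.get(node) ?? []) {
      if (!visited.has(next)) { // the cycle guard: skip already-seen nodes
        visited.add(next);
        queue.push(next);
      }
    }
  }
  return visited;
}
```

<p>This is the kind of two-line safety measure I'd expect any competent model to include unprompted.</p>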
]]></description><pubDate>Thu, 05 Feb 2026 02:29:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46894886</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46894886</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46894886</guid></item><item><title><![CDATA[New comment by anon373839 in "Qwen3-Coder-Next"]]></title><description><![CDATA[
<p>Yeah. Q2 in any model is just severely damaged, unfortunately. Wish it weren’t so.</p>
]]></description><pubDate>Wed, 04 Feb 2026 12:30:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46885054</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46885054</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46885054</guid></item><item><title><![CDATA[New comment by anon373839 in "Qwen3-Coder-Next"]]></title><description><![CDATA[
<p>There’s this issue/outstanding PR: <a href="https://github.com/lmstudio-ai/mlx-engine/pull/188#issuecomment-3832453075" rel="nofollow">https://github.com/lmstudio-ai/mlx-engine/pull/188#issuecomm...</a></p>
]]></description><pubDate>Tue, 03 Feb 2026 23:56:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46879257</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46879257</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46879257</guid></item><item><title><![CDATA[New comment by anon373839 in "The Codex App"]]></title><description><![CDATA[
<p>The numbers you stated sound off ($500k capex + electricity per 3 concurrent requests?). Especially now that the frontier has moved to ultra sparse MoE architectures. I’ve also seen a couple of commodity inference providers claim that their unit economics are profitable.</p>
]]></description><pubDate>Tue, 03 Feb 2026 05:01:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46866728</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46866728</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46866728</guid></item><item><title><![CDATA[New comment by anon373839 in "Trinity large: An open 400B sparse MoE model"]]></title><description><![CDATA[
<p>Many of these gains can be attributed to better tooling and harnesses around the models. Yes, the models also had to be retrained to work with the new tooling, but that doesn’t mean there was a step change in their general “intelligence” or capabilities. And sure enough, I’m seeing the same old flaws as always: frontier models fabricating info not present in the context, having blindness to what is present, getting into loops, failing to follow simple instructions…</p>
]]></description><pubDate>Thu, 29 Jan 2026 11:49:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46808947</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46808947</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46808947</guid></item><item><title><![CDATA[New comment by anon373839 in "LM Studio 0.4"]]></title><description><![CDATA[
<p>> But then I decided I'm just a chemical reaction<p>That doesn’t address the practical significance of privacy, though. The real risk isn’t that OpenAI employees will read your chats for personal amusement. The risk is that OpenAI will exploit the secrets you’ve entrusted to them, to manipulate you, or to enable others to manipulate you.<p>The more information an unscrupulous actor has about you, the more damage they can do.</p>
]]></description><pubDate>Thu, 29 Jan 2026 11:40:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46808885</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46808885</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46808885</guid></item><item><title><![CDATA[New comment by anon373839 in "LM Studio 0.4"]]></title><description><![CDATA[
<p>I have seen ~1,300 tokens/sec of total throughput with Llama 3 8B on a MacBook Pro. So no, you don’t halve the performance. But running batched inference takes more memory, so you have to use shorter contexts than if you weren’t batching.</p>
]]></description><pubDate>Thu, 29 Jan 2026 11:37:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46808854</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46808854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46808854</guid></item><item><title><![CDATA[New comment by anon373839 in "Bypassing Gemma and Qwen safety with raw strings"]]></title><description><![CDATA[
<p>I think this is 100% in your mind. The article does not in any way read to me as having AI-generated prose.</p>
]]></description><pubDate>Mon, 19 Jan 2026 22:16:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46685258</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46685258</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46685258</guid></item><item><title><![CDATA[New comment by anon373839 in "Anthropic Explicitly Blocking OpenCode"]]></title><description><![CDATA[
<p>> protect their investment<p>Viewed another way, the preferential pricing they're giving to Claude Code (and only Claude Code) is anticompetitive behavior that may be illegal.</p>
]]></description><pubDate>Thu, 15 Jan 2026 01:46:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46626847</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46626847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46626847</guid></item><item><title><![CDATA[New comment by anon373839 in "Unpopular Opinion: Bootstrap is a better front-end framework than Tailwind"]]></title><description><![CDATA[
<p>> But it boggles the mind that apparently a large chunk of "developers" cannot see the insanity in writing XML to generate JavaScript which generates HTML and CSS because they want to write `<Button variant="primary">Save</Button>` rather than... `<button class="primary">Save</button>`.<p>I'm wondering if some of the disconnect here is that you don't have personal experience with this type of development, so you might not see what pain points it solves.<p>The first thing I would mention is that components encapsulate function <i>and</i> styling. Buttons don't illustrate this well because they're trivial. But you can imagine a `<DatePicker>` that takes a `variant` property ("range" or "single"), `month` and `year` properties, and perhaps a property called `annotations` which accepts an array of special dates and their categories (`[{date: "2026-07-04", code: "premium_rate"}, {date: "2027-07-07", code: "sold_out"} ...]`). The end result is an interactive picker that shows the desired span, with certain dates unselectable and others marked with special color codes or symbols. You're going to have a very unpleasant time implementing that with globally scoped CSS classes.<p>And this isn't a string sent over the wire. The "document" that the browser renders is changing continuously as you interact with it. If you were to open Chrome Devtools and look at the subtree of the DOM containing the date picker, you would see elements appearing and disappearing, gaining or losing classes and attributes, in real time as you select/deselect/skip forward/etc. That's what makes it work, rather than being a static drawing of a calendar.<p>I personally do not like the JavaScript frontend ecosystem. It's hacks on top of hacks on top of hacks. But, do you know another way to deploy software that's cross-platform and basically free of gatekeepers? Sometimes we just have to do weird things because they're really useful.</p>
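<p>To make the encapsulation point concrete, here's roughly what that component's prop contract might look like in TypeScript. All names are hypothetical — this is an illustration of the pattern, not any real library's API:</p>

```typescript
// Hypothetical prop contract for the <DatePicker> described above.
interface DateAnnotation {
  date: string; // ISO date, e.g. "2026-07-04"
  code: string; // category, e.g. "premium_rate" or "sold_out"
}

interface DatePickerProps {
  variant: "range" | "single";
  month: number; // 1-12
  year: number;
  annotations?: DateAnnotation[];
}

// The component resolves each day's category internally, so callers
// pass plain data and never touch the CSS classes styling each state.
function annotationFor(
  date: string,
  annotations: DateAnnotation[] = []
): string | undefined {
  return annotations.find((a) => a.date === date)?.code;
}
```

<p>The caller's entire API surface is that props object; how the picker maps a `code` to colors or symbols stays sealed inside the component.</p>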
]]></description><pubDate>Wed, 14 Jan 2026 06:39:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46613056</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46613056</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46613056</guid></item><item><title><![CDATA[New comment by anon373839 in "Unpopular Opinion: Bootstrap is a better front-end framework than Tailwind"]]></title><description><![CDATA[
<p>I think these debates ultimately come down to what you’re making with these tools: is it <i>documents</i> or <i>application interfaces</i>? If it’s documents, then plain HTML, CSS and a touch of JS sprinkles on top works very well, as they were designed for this. If you’re making software, though, at some point you’re going to need some additional tooling to make it feasible.</p>
]]></description><pubDate>Wed, 14 Jan 2026 03:18:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46611886</link><dc:creator>anon373839</dc:creator><comments>https://news.ycombinator.com/item?id=46611886</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46611886</guid></item></channel></rss>