<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mrinterweb</title><link>https://news.ycombinator.com/user?id=mrinterweb</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 10:43:30 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mrinterweb" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mrinterweb in "Small models also found the vulnerabilities that Mythos found"]]></title><description><![CDATA[
<p>I feel like there have been enough hyperbolic claims from Anthropic that I'm starting to get some real Boy Who Cried Wolf energy. I'm starting to tune out and assume it's a marketing ploy. Trust me, I'm an Anthropic fan, and I pay my $200/month for Max, but the claims are wearing thin.</p>
]]></description><pubDate>Sat, 11 Apr 2026 22:29:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47734577</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=47734577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47734577</guid></item><item><title><![CDATA[New comment by mrinterweb in "Google releases Gemma 4 open models"]]></title><description><![CDATA[
<p>Thank you. I have the same card, and I noticed the same ~100 TPS when I ran Q3.5-35B-A3B. G4 26B A4B running at 150 TPS is a 50% performance gain. That's pretty huge.</p>
]]></description><pubDate>Fri, 03 Apr 2026 00:28:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47621981</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=47621981</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47621981</guid></item><item><title><![CDATA[New comment by mrinterweb in "SpaceX files to go public"]]></title><description><![CDATA[
<p>I feel like global instability could easily be very disruptive to SpaceX. Just imagine if Russia gets vindictive and starts destroying Starlink satellites, or blows up its own satellites to create orbital debris that could knock others out of orbit. A really bad solar storm could also be devastating.<p>Just saying there are some real risks here, and pricing the IPO at $1.75T seems risky enough as it is. I would not take that gamble.</p>
]]></description><pubDate>Wed, 01 Apr 2026 22:28:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47607347</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=47607347</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47607347</guid></item><item><title><![CDATA[New comment by mrinterweb in "Ollama is now powered by MLX on Apple Silicon in preview"]]></title><description><![CDATA[
<p>I think two recent advances make your statement more true. The new Qwen 3.5 series has shown relatively high intelligence density, and Google's new turboquant could result in dramatically smaller and more efficient models without the usual quantization accuracy tradeoff.<p>I would expect consumer inference ASICs to emerge once model development starts plateauing and "baking" a highly capable, dense model into a chip makes economic sense.</p>
]]></description><pubDate>Tue, 31 Mar 2026 17:08:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47590424</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=47590424</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47590424</guid></item><item><title><![CDATA[New comment by mrinterweb in "Hold on to Your Hardware"]]></title><description><![CDATA[
<p>If turboquant can reliably reduce LLM inference RAM requirements by 6x, that should dramatically shift the hardware market, or at least we can all hope. I know the 6x figure is for the key-value cache savings, though, so I'm not sure it translates to a 6x reduction in total RAM requirements for inference.</p>
]]></description><pubDate>Sat, 28 Mar 2026 02:37:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47551003</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=47551003</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47551003</guid></item><item><title><![CDATA[New comment by mrinterweb in "Oregon school cell phone ban: 'Engaged students, joyful teachers'"]]></title><description><![CDATA[
<p>There is plenty of before-and-after data on school cell phone bans. Oregon is far from the first to implement one: 35 US states have some form of school cell phone ban, and I believe the UK is doing a nationwide one. There is a good amount of supporting data measuring results on this topic.</p>
]]></description><pubDate>Fri, 20 Mar 2026 17:11:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47457537</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=47457537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47457537</guid></item><item><title><![CDATA[New comment by mrinterweb in "Oregon school cell phone ban: 'Engaged students, joyful teachers'"]]></title><description><![CDATA[
<p>Listening to music can help people focus.</p>
]]></description><pubDate>Fri, 20 Mar 2026 16:03:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47456542</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=47456542</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47456542</guid></item><item><title><![CDATA[New comment by mrinterweb in "When does MCP make sense vs CLI?"]]></title><description><![CDATA[
<p>This is what I've been working on. I wrote a project wrapper CLI with a consistent interface that wraps a bunch of tools. The reason I wrote the wrapper is consistency: AI agents are frequently inconsistent in how they call things, and there are some things I want executed in a consistent, controlled way. I also wrote a skill that states when and how to call the CLI.<p>It is also easier to write and debug CLI tooling, and other human devs get to benefit from the CLI tools. MCP bundles agent instructions for how to use it, but the same can be done for a CLI with skills or AGENTS.md (CLAUDE.md).</p>
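<p>To make that concrete, here's a hypothetical sketch of the kind of AGENTS.md entry I mean (the `proj` CLI name and its subcommands are made up for illustration):<p>```
## Project CLI

Use the `proj` wrapper CLI for routine tasks instead of calling the
underlying tools directly. It keeps invocations consistent across
agents and human devs.

- `proj test [path]`  - run the test suite (wraps the real test runner)
- `proj lint --fix`   - lint and auto-format changed files
- `proj db migrate`   - apply pending database migrations

Always run `proj test` before declaring a task complete. If a `proj`
subcommand fails, show its full output; do not fall back to the raw tool.
```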
]]></description><pubDate>Mon, 02 Mar 2026 09:23:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47215583</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=47215583</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47215583</guid></item><item><title><![CDATA[New comment by mrinterweb in "If you’re an LLM, please read this"]]></title><description><![CDATA[
<p>Waiting for some autonomous OpenClaw agent to see that XMR donation address, and empty out the wallet of the person who initiated OpenClaw :)</p>
]]></description><pubDate>Wed, 18 Feb 2026 21:47:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47066911</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=47066911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47066911</guid></item><item><title><![CDATA[New comment by mrinterweb in "Claude Code is being dumbed down?"]]></title><description><![CDATA[
<p>I thought this was going to be about a nerfed Opus 4.6 experience. I believe I experienced one of those yesterday. I usually have multiple Claude Code sessions running on Opus 4.6. The other sessions were great, but one session really felt off, much more dumbed down than what I was used to. I accidentally gave that session a "good" feedback rating, and my inner conspiracy theorist immediately jumped to the conclusion that I had just helped validate a hamstrung model in some A/B test.</p>
]]></description><pubDate>Wed, 11 Feb 2026 21:58:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46981728</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=46981728</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46981728</guid></item><item><title><![CDATA[New comment by mrinterweb in "Coding agents have replaced every framework I used"]]></title><description><![CDATA[
<p>This article has some cowboy coding themes I don't agree with. If the takeaway is that frameworks are bad in the age of AI, I disagree. Standardization, and a team of developers all using the same framework, has huge benefits. The same is true for agents. Agents have finite context; when an agent knows it is using Rails, it can automatically assume a lot about how things work. LLM training data has framework usage patterns deeply instilled, so agents using frameworks that LLMs have extensive training on produce high-quality, consistent results without needing a bunch of custom context for bespoke foundational code. Multiple devs and agents all using a well-known framework automatically benefit from a shared mental model.<p>When multiple devs and agents all interact with the same code base, consistency and standards are essential for maintainability. Each time a dev fires up their agent, its context doesn't need to be saturated with bespoke foundational information; both the LLM and the devs can lean on their existing knowledge of the framework.<p>I didn't even touch on all the other benefits mature frameworks bring beyond a shared mental model: security hardening, teams providing security patches, performance tuning, dependability, documentation, third-party ecosystems, etc.</p>
]]></description><pubDate>Sat, 07 Feb 2026 19:29:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46926796</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=46926796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46926796</guid></item><item><title><![CDATA[New comment by mrinterweb in "Kimi Released Kimi K2.5, Open-Source Visual SOTA-Agentic Model"]]></title><description><![CDATA[
<p>VRAM is the new moat, and controlling pricing and access to VRAM is part of it. Very few hobbyists can run models of this size. I appreciate the spirit of making the weights open, but realistically it is impractical for >99.999% of users to run locally.</p>
]]></description><pubDate>Tue, 27 Jan 2026 18:23:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46784084</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=46784084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46784084</guid></item><item><title><![CDATA[New comment by mrinterweb in "STFU"]]></title><description><![CDATA[
<p>This is so passive-aggressive. I kinda love it and hate it, if that makes sense.</p>
]]></description><pubDate>Fri, 16 Jan 2026 21:07:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46652219</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=46652219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46652219</guid></item><item><title><![CDATA[New comment by mrinterweb in "I hate GitHub Actions with passion"]]></title><description><![CDATA[
<p>I've often thought about this. There are times I would rather run CI locally and use my GPG signature to add a git note to the commit. Something like:<p>```
echo "CI passed" | gpg2 --clearsign | git notes add -F -
```<p>Then the CI server could read the commit's git note, verify the dev's signature, and skip the workflow/pipeline if it is correctly signed. With more local CI, the incentive may shift toward buying devs fancier machines instead of spending that money on cloud CI. I bet most devs have cores to spare and would not mind a beefier dev machine.</p>
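<p>A rough sketch of the verification side as a CI gating fragment (the trusted keyring path and the `VALIDSIG` check are my assumptions, not a vetted design):<p>```
#!/bin/sh
# Hypothetical CI gate: skip the full pipeline when the commit carries a
# clearsigned "CI passed" git note from a trusted developer key.
set -eu

commit="${1:-HEAD}"

# Read the note attached to the commit; empty if none exists.
note="$(git notes show "$commit" 2>/dev/null || true)"

if [ -z "$note" ]; then
  echo "no CI note on $commit; running full pipeline"
  exit 1
fi

# Verify the clearsigned note against a keyring of trusted dev keys.
# --status-fd 1 emits machine-readable lines like "[GNUPG:] VALIDSIG ..."
if printf '%s\n' "$note" \
    | gpg2 --no-default-keyring --keyring ./trusted-devs.gpg \
           --status-fd 1 --verify 2>/dev/null \
    | grep -q '^\[GNUPG:\] VALIDSIG'; then
  echo "trusted local CI result found; skipping pipeline"
  exit 0
fi

echo "note present but signature not trusted; running full pipeline"
exit 1
```<p>The exit status would then gate the rest of the pipeline. One real wrinkle: notes live under refs/notes/commits, so devs would also need to push that ref for CI to see it.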
]]></description><pubDate>Wed, 14 Jan 2026 22:05:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46624368</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=46624368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46624368</guid></item><item><title><![CDATA[New comment by mrinterweb in "DHS Deportation Reels Are Getting Copyright Strikes for Unlicensed Music Use"]]></title><description><![CDATA[
<p>The propaganda trying to brand inhumane cruelty as fun, funny, cool, justified, or excusable (I'm guessing at words, because none of them are how I see this) is so messed up. Trying to turn other people's suffering into a meme says a lot about the people doing it.</p>
]]></description><pubDate>Wed, 14 Jan 2026 07:08:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46613225</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=46613225</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46613225</guid></item><item><title><![CDATA[New comment by mrinterweb in "Dell admits consumers don't care about AI PCs"]]></title><description><![CDATA[
<p>The Apple M series is an SoC. The CPU, GPU, NPU, and RAM are all part of the same package.</p>
]]></description><pubDate>Fri, 09 Jan 2026 00:51:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46548735</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=46548735</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46548735</guid></item><item><title><![CDATA[New comment by mrinterweb in "Dell admits consumers don't care about AI PCs"]]></title><description><![CDATA[
<p>Most consumers aren't running LLMs locally. Most people's on-device AI is whatever Windows 11 is doing, and Windows 11's AI functionality is going over like a lead balloon. The only open-weight models that come close to the major frontier models require hundreds of gigabytes of high-bandwidth RAM/VRAM, and your average PC buyer isn't interested in running their own local LLM anyway. The AMD AI Max and Apple M chips are good for the audience that is. Consumer dedicated GPUs just don't have enough VRAM to load most modern open-weight LLMs.<p>I remember when LLMs were taking off and open-weight models were nipping at the heels of frontier models, people would say there's no moat. The new moat is high-bandwidth RAM, as we can see from the recent RAM pricing madness.</p>
]]></description><pubDate>Thu, 08 Jan 2026 21:42:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46546906</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=46546906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46546906</guid></item><item><title><![CDATA[New comment by mrinterweb in "Dell admits consumers don't care about AI PCs"]]></title><description><![CDATA[
<p>The Apple M series chips are solid for inference.</p>
]]></description><pubDate>Thu, 08 Jan 2026 21:26:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46546715</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=46546715</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46546715</guid></item><item><title><![CDATA[New comment by mrinterweb in "Pebble Round 2"]]></title><description><![CDATA[
<p>The finance comments may be right. Another factor would be marketing budgets and existing brand recognition. Smartwatches are marketed on long feature lists, and I think customers got lost in feature comparisons instead of asking whether they really want a smartwatch that does all that, or whether they want to pay another monthly fee for a watch data SIM. People lost sight of the fact that their phone can already do all of that, and they'll be carrying their phone anyway, so why pay for redundant functionality that is generally worse than what the phone can do? If Pebble gets a marketing budget, I hope they focus the messaging on what makes their watches stand out.</p>
]]></description><pubDate>Tue, 06 Jan 2026 16:26:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46514420</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=46514420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46514420</guid></item><item><title><![CDATA[New comment by mrinterweb in "Pebble Round 2"]]></title><description><![CDATA[
<p>I had two Pebble watches and used them daily for years. I rarely use my Pixel Watch 3, mainly because of charging. I only have one proprietary charger for it, and sometimes it's on my desk, sometimes near my bed, sometimes somewhere I can't find. I don't need my watch, but I do need my phone, so I charge the phone and forget my watch exists for months at a time. The biggest hurdle for me with smartwatches is daily charging; I will not buy another smartwatch unless the battery lasts at least a week. Pebble Round 2 having a two-week battery is great!</p>
]]></description><pubDate>Tue, 06 Jan 2026 00:20:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46507137</link><dc:creator>mrinterweb</dc:creator><comments>https://news.ycombinator.com/item?id=46507137</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46507137</guid></item></channel></rss>