<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: AzN1337c0d3r</title><link>https://news.ycombinator.com/user?id=AzN1337c0d3r</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 14:25:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=AzN1337c0d3r" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by AzN1337c0d3r in "TSA lines are so out of control that travelers are hiring line-sitters"]]></title><description><![CDATA[
<p>Were you born after 2001? Do you remember the planes that flew into the buildings?<p>Private planes can do the same thing.</p>
]]></description><pubDate>Sun, 29 Mar 2026 14:43:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47563595</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=47563595</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47563595</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "AirPods Max 2"]]></title><description><![CDATA[
<p>What's new about this with the H2 chip?<p>My H1-chipped USB-C AirPods Max (OG) already seem to switch seamlessly between my iPhone, iPad, and MacBook Pro.</p>
]]></description><pubDate>Mon, 16 Mar 2026 18:15:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47402676</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=47402676</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47402676</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Can I run AI locally?"]]></title><description><![CDATA[
<p>Maybe things have changed, but the last time I looked at this, only a max of 96 GB could go to the GPU. And it isn't dynamic, in the sense that you still have to tweak kernel parameters, which requires a reboot.<p>Apple has none of these limitations.</p>
]]></description><pubDate>Fri, 13 Mar 2026 20:29:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47369415</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=47369415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47369415</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Can I run AI locally?"]]></title><description><![CDATA[
<p>Most workstation-class laptops (e.g. Lenovo P-series, Dell Precision) have 4 DIMM slots, and you can get them with 256 GB (at least, before the current RAM shortages).<p>There's also the Ryzen AI Max+ 395, which has 128 GB of unified memory in a laptop form factor.<p>Only Apple has dynamic allocation, though.</p>
]]></description><pubDate>Fri, 13 Mar 2026 19:35:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47368704</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=47368704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47368704</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "I'm reluctant to verify my identity or age for any online services"]]></title><description><![CDATA[
<p>Insurance is likely using that same data to adjust rates.</p>
]]></description><pubDate>Tue, 03 Mar 2026 17:45:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47235987</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=47235987</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47235987</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Read Locks Are Not Your Friends"]]></title><description><![CDATA[
<p>Left-right concurrency control might be a good fit for this problem.<p><a href="https://concurrencyfreaks.blogspot.com/2013/12/left-right-concurrency-control.html" rel="nofollow">https://concurrencyfreaks.blogspot.com/2013/12/left-right-co...</a></p>
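For context, here is a minimal Python sketch of the left-right idea (hypothetical and simplified: the real technique described in the linked post uses wait-free ingress/egress reader counters and a version index, not a lock around the counts):

```python
import threading

class LeftRight:
    """Simplified left-right sketch: single writer, many readers.
    Readers never block on the writer; the writer publishes the
    fresh copy, waits for readers to drain from the stale copy,
    then replays the mutation on it."""

    def __init__(self, factory):
        self.instances = [factory(), factory()]
        self.read_idx = 0                    # which instance readers use
        self.reader_counts = [0, 0]          # in-flight readers per instance
        self.count_lock = threading.Lock()   # protects counts (simplification)
        self.writer_lock = threading.Lock()  # serializes writers

    def read(self, fn):
        with self.count_lock:
            idx = self.read_idx
            self.reader_counts[idx] += 1
        try:
            return fn(self.instances[idx])
        finally:
            with self.count_lock:
                self.reader_counts[idx] -= 1

    def write(self, fn):
        with self.writer_lock:
            idx = self.read_idx
            fn(self.instances[1 - idx])       # mutate the unpublished copy
            with self.count_lock:
                self.read_idx = 1 - idx       # publish it to new readers
            while True:                       # wait for readers on old copy
                with self.count_lock:
                    if self.reader_counts[idx] == 0:
                        break
            fn(self.instances[idx])           # replay on the now-stale copy
```

The writer pays the cost (two applications of every mutation plus a wait), which is the point: reads stay cheap and never see a torn update.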
]]></description><pubDate>Tue, 24 Feb 2026 04:17:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47132801</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=47132801</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47132801</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Ask HN: What are people doing to get off of VMware?"]]></title><description><![CDATA[
<p>Bought by Broadcom, now implementing the classic strategy of leveraging vendor lock-in to milk customers.</p>
]]></description><pubDate>Sun, 19 Oct 2025 18:30:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45636657</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=45636657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45636657</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "The great software quality collapse or, how we normalized catastrophe"]]></title><description><![CDATA[
<p>There's almost never a death with a single cause, in aviation or otherwise. You can always decompose systems into layers of abstraction and relationships. But software bugs are definitely a contributing cause.</p>
]]></description><pubDate>Fri, 10 Oct 2025 00:04:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45534276</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=45534276</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45534276</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "The great software quality collapse or, how we normalized catastrophe"]]></title><description><![CDATA[
<p>> If all the examples you can conjure are decades old<p>They're not ALL the examples I can conjure up. MCAS would probably be an example of a modern software bug that killed a bunch of people.<p>How about the 1991 failure of the Patriot missile to intercept a SCUD missile, due to a software bug that didn't account for clock drift, costing 28 lives?<p>Or the 2009 loss of Air France 447, where the software displayed all sorts of confusing information in what was an unreliable-airspeed situation?<p>Old incidents are the most widely disseminated, which is why they're the most likely to be discussed, but the discussion revolving around old events doesn't mean the problem isn't happening now.</p>
]]></description><pubDate>Thu, 09 Oct 2025 18:36:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45531409</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=45531409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45531409</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "The great software quality collapse or, how we normalized catastrophe"]]></title><description><![CDATA[
<p>Not to be pedantic, but people have died from software programming bugs being a primary contributing factor. One example: Therac-25 (<a href="https://en.wikipedia.org/wiki/Therac-25" rel="nofollow">https://en.wikipedia.org/wiki/Therac-25</a>)</p>
]]></description><pubDate>Thu, 09 Oct 2025 15:57:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45529495</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=45529495</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45529495</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Microsoft CTO says he wants to swap most AMD and Nvidia GPUs for homemade chips"]]></title><description><![CDATA[
<p>I would submit Google's TPUs are not GPUs.<p>Similarly, Tenstorrent seems to be building something that you could consider "better", at least insofar that the goal is to be open.</p>
]]></description><pubDate>Fri, 03 Oct 2025 16:05:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45464501</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=45464501</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45464501</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Doom crash after 2.5 years of real-world runtime confirmed on real hardware"]]></title><description><![CDATA[
<p>> Back in the real world, no race team would agree that their cars should disintegrate after one race.<p>Weren't F1 teams basically doing this by replacing their engines and transmissions, until the rules introduced penalties for component swaps in 2014?</p>
]]></description><pubDate>Wed, 17 Sep 2025 15:01:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45276683</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=45276683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45276683</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Nvidia is full of shit"]]></title><description><![CDATA[
<p>They don't specify 12 smaller cables for nothing when 2 larger ones would do. There are mechanical-compatibility concerns here: 12 thinner wires have a smaller minimum bend radius than 2 larger ones with the same total ampacity, so the bundle is easier to route.</p>
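A back-of-the-envelope sketch of the current split (the 600 W / 12 V figures are assumptions based on the 12VHPWR/12V-2x6 connector's rated maximum, with 6 supply conductors plus 6 returns):

```python
# Why 12 small wires: splitting 50 A across 6 supply conductors
# keeps per-wire current (and wire gauge) modest.
power_w = 600.0       # assumed connector power rating
voltage_v = 12.0      # 12 V rail
supply_wires = 6      # 6 of the 12 conductors carry supply current

total_current_a = power_w / voltage_v        # 50.0 A total
per_wire_a = total_current_a / supply_wires  # ~8.3 A per conductor
print(total_current_a, round(per_wire_a, 1))
```

Carrying the same 50 A over 2 conductors would need much heavier gauge wire, with a correspondingly larger minimum bend radius.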
]]></description><pubDate>Sat, 05 Jul 2025 05:55:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44470380</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=44470380</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44470380</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Nvidia won, we all lost"]]></title><description><![CDATA[
<p>They are not the exact same thing.<p><a href="https://www.corsair.com/us/en/explorer/diy-builder/power-supply-units/evolving-standards-12vhpwr-and-12v-2x6/" rel="nofollow">https://www.corsair.com/us/en/explorer/diy-builder/power-sup...</a></p>
]]></description><pubDate>Sat, 05 Jul 2025 05:35:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44470305</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=44470305</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44470305</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Apple has announced its final version of macOS for Intel"]]></title><description><![CDATA[
<p>Actually, the keyboard mechanism that the M1 MacBook Pros got was from the 2019 16-inch Intel MacBook Pro (which was the first 16-inch MacBook Pro).<p>So the 2019 16-inch MacBook Pro and the 2020 13-inch MacBook Pros got non-butterfly keyboards.</p>
]]></description><pubDate>Tue, 10 Jun 2025 08:32:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44234124</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=44234124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44234124</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Building an AI server on a budget"]]></title><description><![CDATA[
<p>Depends on the server. Probably not going to be cost effective. I get barely ~0.5 tokens/sec.<p>I have dual E5-2699A v4 w/1.5 TB DDR4-2933 spread across 2 sockets.<p>The full Deepseek-R1 671B (~1.4 TB) with llama.cpp seems to have a bottleneck, in that local engines that run LLMs don't do NUMA-aware allocation, so cores often have to pull the weights in from the other socket's memory controllers over the inter-socket links (QPI/UPI/HyperTransport) and bottleneck there.<p>For my platform that's 2 QPI links @ ~39.2 GB/s per link that get saturated.<p>I give it a prompt, go to work, check back on it at lunch, and sometimes it's still going.<p>If you want interactive use, I'd aim for 7-10 tokens/s, so realistically that means running one of the 8b models on a GPU (~30 tokens/s) or maybe a 70b model on an M4 Max (~8 tokens/s).</p>
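A rough sanity check on why cross-socket bandwidth caps decode speed (all figures below are assumptions for illustration, not measurements; dense decode must stream essentially all weights per token):

```python
# Memory-bandwidth bound on dense-LLM decode across a NUMA link.
model_bytes = 1.4e12           # ~1.4 TB of weights (full Deepseek-R1 671B)
qpi_bw_bytes = 2 * 39.2e9      # two QPI links @ ~39.2 GB/s each

# Assume roughly half of each token's weight reads land on the
# remote socket and must cross the QPI links.
cross_socket_bytes = model_bytes / 2

tokens_per_sec = qpi_bw_bytes / cross_socket_bytes
print(f"{tokens_per_sec:.2f} tokens/s upper bound on remote reads")
```

That lands in the same ballpark as the ~0.5 tokens/s observed, consistent with the inter-socket links being the bottleneck rather than raw DDR4 bandwidth.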
]]></description><pubDate>Tue, 10 Jun 2025 06:40:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=44233387</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=44233387</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44233387</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Hundreds of smartphone apps are monitoring users through their microphones"]]></title><description><![CDATA[
<p>Duplicate HN submission from over 7 (!) years ago:<p><a href="https://news.ycombinator.com/item?id=16119981">https://news.ycombinator.com/item?id=16119981</a></p>
]]></description><pubDate>Sun, 27 Apr 2025 04:49:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43809500</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=43809500</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43809500</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "Microsoft researchers developed a hyper-efficient AI model that can run on CPUs"]]></title><description><![CDATA[
<p>The original BitNet paper (<a href="https://arxiv.org/pdf/2310.11453" rel="nofollow">https://arxiv.org/pdf/2310.11453</a>)<p><pre><code>  BitNet: Scaling 1-bit Transformers for Large Language Models
</code></pre>
was actually binary (weights of -1 or 1),<p>but then in the follow-up paper they started using 1.58bit weights (<a href="https://arxiv.org/pdf/2402.17764" rel="nofollow">https://arxiv.org/pdf/2402.17764</a>)<p><pre><code>  The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
</code></pre>
This seems to be the first source I could find of the conflation of "1-bit LLM" with ternary weights.<p><pre><code>  In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}.</code></pre></p>
]]></description><pubDate>Thu, 17 Apr 2025 01:17:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=43712100</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=43712100</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43712100</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "SVDQuant: 4-Bit Quantization Powers 12B Flux on a 16GB 4090 GPU with 3x Speedup"]]></title><description><![CDATA[
<p>It's worth noting this is the laptop 4090 GPU, which is closer to desktop 4070 performance.</p>
]]></description><pubDate>Sat, 09 Nov 2024 16:21:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=42095200</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=42095200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42095200</guid></item><item><title><![CDATA[New comment by AzN1337c0d3r in "The Pumpkin Eclipse"]]></title><description><![CDATA[
<p>Windstream uses T3200 and T3260.</p>
]]></description><pubDate>Thu, 30 May 2024 16:21:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=40525496</link><dc:creator>AzN1337c0d3r</dc:creator><comments>https://news.ycombinator.com/item?id=40525496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40525496</guid></item></channel></rss>