<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: kllrnohj</title><link>https://news.ycombinator.com/user?id=kllrnohj</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 17:07:37 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=kllrnohj" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by kllrnohj in "Turn your best AI prompts into one-click tools in Chrome"]]></title><description><![CDATA[
<p>Presumably the upside for Google is they'll just lock it behind the "Google AI Plus" subscription plan if it isn't already</p>
]]></description><pubDate>Tue, 14 Apr 2026 20:48:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47771306</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47771306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47771306</guid></item><item><title><![CDATA[New comment by kllrnohj in "Netflix Prices Went Up Again – I Bought a DVD Player Instead"]]></title><description><![CDATA[
<p>The only reason I haven't canceled my Plex is that I bought a lifetime pass a decade ago, so I literally can't. :/ I almost wish I hadn't, specifically so I could cancel it and send that signal.<p>But yes, Plex is quite enshittified now. I'd definitely start with Jellyfin or something else these days.</p>
]]></description><pubDate>Thu, 09 Apr 2026 20:59:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709993</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47709993</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709993</guid></item><item><title><![CDATA[New comment by kllrnohj in "Netflix Prices Went Up Again – I Bought a DVD Player Instead"]]></title><description><![CDATA[
<p>Since DVDs are ~5 Mbps MPEG-2, no, no it isn't. 5 Mbps H.264 is <i>dramatically</i> better.<p>Now, when compared to <i>Blu-ray</i>... That's different. Very, very different.</p>
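As a sanity check on that ~5 Mbps figure, the arithmetic lines up with single-layer DVD capacity (a rough sketch; the 2-hour runtime is an assumption for illustration):

```python
# Sanity check: a ~2 hour movie at ~5 Mbps roughly fills a single-layer DVD.
def size_gb(mbps, hours):
    bits = mbps * 1e6 * hours * 3600
    return bits / 8 / 1e9  # decimal GB

print(round(size_gb(5, 2), 1))  # ~4.5 GB of video vs. the 4.7 GB DVD-5 capacity
```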
]]></description><pubDate>Thu, 09 Apr 2026 20:56:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709946</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47709946</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709946</guid></item><item><title><![CDATA[New comment by kllrnohj in "Netflix Prices Went Up Again – I Bought a DVD Player Instead"]]></title><description><![CDATA[
<p>> OP here might be misremembering DVDs, here: the physical media skipped or froze intermittently and the players themselves were finicky<p>In my teens my friends and I watched probably <i>hundreds</i> of DVDs, and they almost never had a problem. Skips & freezes were really only a factor for heavily scratched copies, more typical of Blockbuster rentals than anything we picked up in the $5 bargain bins.<p>I don't think I've ever encountered a "finicky" player, either. I don't even know what that'd mean.</p>
]]></description><pubDate>Thu, 09 Apr 2026 20:54:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709907</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47709907</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709907</guid></item><item><title><![CDATA[New comment by kllrnohj in "Netflix Prices Went Up Again – I Bought a DVD Player Instead"]]></title><description><![CDATA[
<p>> You can get the 4 lego movies for $5 on DVD on Amazon right now. A "Tom Cruise 10-Movie Collection" is $12. You get the idea.<p>The image quality on these is also quite bad, especially when cost cutting means they've been compressed further to fit on a single-layer DVD, often with no indication that it happened. Whether or not you find that acceptable is a matter of personal taste, but it's very much apples & oranges vs. Netflix. Blu-ray, by contrast, is generally better quality than what you'll get from streaming services.</p>
]]></description><pubDate>Thu, 09 Apr 2026 20:50:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709851</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47709851</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709851</guid></item><item><title><![CDATA[New comment by kllrnohj in "Every GPU That Mattered"]]></title><description><![CDATA[
<p>The MX440 was an entry-level budget card? If it was all the rage in pro gaming circles at the time, that's really just a reflection of how poor pro gamers were back then rather than anything about the MX440 being particularly noteworthy. In fact, looking back at old reviews, it was, if anything, a flop. The launch MSRP was too high for the performance it offered, especially for a DX7 card surrounded by DX8 cards at almost the same price point (including Nvidia's own Ti4200 for just $50 more).</p>
]]></description><pubDate>Tue, 07 Apr 2026 22:48:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682338</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47682338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682338</guid></item><item><title><![CDATA[New comment by kllrnohj in "Every GPU That Mattered"]]></title><description><![CDATA[
<p>The GeForce 4 generation as a whole, while solid enough, was historically uninteresting. The cards were just basic spec bumps over the GeForce 3, with no new features or similar. And, critically, the 9700 Pro released the same year as the GeForce 4 and absolutely smoked the living shit out of it.</p>
]]></description><pubDate>Tue, 07 Apr 2026 15:07:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47676553</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47676553</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47676553</guid></item><item><title><![CDATA[New comment by kllrnohj in "Apple approves driver that lets Nvidia eGPUs work with Arm Macs"]]></title><description><![CDATA[
<p>Why? Just make iMessage users put up with green bubbles if they want to talk to you.<p>Thanks to Apple co-opting phone numbers, there's literally no need for anyone to ever have iMessage.</p>
]]></description><pubDate>Sat, 04 Apr 2026 19:31:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642482</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47642482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642482</guid></item><item><title><![CDATA[New comment by kllrnohj in "Apple discontinues the Mac Pro"]]></title><description><![CDATA[
<p>They can abandon it multiple times ;)<p>When they introduced the cheese grater Mac Pro the new high end GPUs were a showcase feature of it. Complete with the bespoke "Duo" variants and the special power connector doohickey (MPX iirc?). So I'd consider that an attempt to re-enter that market at least.</p>
]]></description><pubDate>Fri, 27 Mar 2026 17:44:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47545891</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47545891</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47545891</guid></item><item><title><![CDATA[New comment by kllrnohj in "Apple discontinues the Mac Pro"]]></title><description><![CDATA[
<p>The Thunderbolt offerings on the current Mac lineup provide <i>dramatically</i> less total bandwidth if that matters for a given use case. Thunderbolt 5 is the equivalent of PCI-E Gen 4 x4. So even if all 4 of the Thunderbolt 5 ports on a Mac Studio run at full speed simultaneously, that's still only the equivalent of a single Gen 4 x16 slot. That's less than half the bandwidth of a basic consumer x86 CPU, to say nothing of the Xeon that was in the previous Intel Mac Pro or a modern Epyc/Threadripper (Pro).<p>This is a big reason why things like eGPUs kinda suck. Thunderbolt is fast for external I/O, but it's quite pathetic compared to internal PCI-E.</p>
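That bandwidth math can be sketched as a back-of-envelope (the ~16 Gbps/lane Gen 4 figure ignores encoding overhead, and treating one TB5 port as exactly a Gen 4 x4 is the same simplification as above):

```python
# Back-of-envelope: total Thunderbolt 5 bandwidth on a Mac Studio vs. one
# internal PCI-E slot. Figures are raw link rates, ignoring protocol overhead.
GEN4_LANE_GBPS = 16                    # PCI-E Gen 4: ~16 Gbps per lane

tb5_port_gbps   = 4 * GEN4_LANE_GBPS   # one TB5 port ~= Gen 4 x4 -> 64 Gbps
four_ports_gbps = 4 * tb5_port_gbps    # all four ports saturated -> 256 Gbps
x16_slot_gbps   = 16 * GEN4_LANE_GBPS  # a single Gen 4 x16 slot  -> 256 Gbps

# The machine's entire external I/O budget equals one internal slot.
print(four_ports_gbps, x16_slot_gbps)
```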
]]></description><pubDate>Fri, 27 Mar 2026 16:31:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47544860</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47544860</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47544860</guid></item><item><title><![CDATA[New comment by kllrnohj in "Apple discontinues the Mac Pro"]]></title><description><![CDATA[
<p>Their compact solution doesn't cover all needs; they just decided they didn't care about some of those needs. The Intel Mac Pro was the last Apple offering with high-end GPU capabilities. That's now a market segment they aren't supporting at all. They didn't figure out how to do it compactly, they abandoned it wholesale.<p>Similarly, if your use case depends on a whole lot of fast storage (eg, the 4x NVME to PCI-E x16 bifurcation boards), that's also now something Apple just doesn't support. They didn't figure out something else. They didn't do super innovative engineering for it. They just walked away from those markets completely, which they're allowed to do, of course. It's just not exactly inspiring or "deserves credit" worthy.</p>
]]></description><pubDate>Fri, 27 Mar 2026 16:23:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47544764</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47544764</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47544764</guid></item><item><title><![CDATA[New comment by kllrnohj in "Why I love NixOS"]]></title><description><![CDATA[
<p>I use Nix for my homelab servers, and I'm essentially using AI as my IT support staff. I don't need to ask AI for help installing Hyprland, that's trivial as you say, but setting up nginx reverse proxying? Samba configs? k3s or k8s? Yeah, individually none of those things is very hard. But instead of spending 30 minutes reading through config examples and figuring out where it's set up, I can spend 30 seconds telling AI what I want, skimming the output to see if it looks reasonable, and then doing a good ol' `git commit` of the config file & kicking off the "now go do it" nix build command.<p>And, critically, at no point does an LLM ever have access to sudo, shell, etc. It just works with plain text files that aren't even on the machine I'm deploying to.</p>
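For the nginx case, a minimal sketch of the kind of config that workflow produces and reviews (the hostname and port are made up for illustration; this assumes the stock NixOS `services.nginx` module):

```nix
# Hypothetical NixOS fragment: the sort of thing an LLM drafts and a human
# skims before committing and rebuilding. Hostname/port are illustrative.
{
  services.nginx = {
    enable = true;
    virtualHosts."media.example.lan".locations."/" = {
      proxyPass = "http://127.0.0.1:8096";  # e.g. a Jellyfin backend
      proxyWebsockets = true;               # upgrade WebSocket connections
    };
  };
}
```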
]]></description><pubDate>Mon, 23 Mar 2026 13:51:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47489550</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47489550</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47489550</guid></item><item><title><![CDATA[New comment by kllrnohj in "Apple's intentional crippling of Mobile Safari"]]></title><description><![CDATA[
<p>yeah I'm using mobile Firefox and it has an awfully high overlap with Safari. Almost like a bunch of the stuff Chrome supports isn't actually a standard at all yet...</p>
]]></description><pubDate>Sun, 22 Mar 2026 15:06:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47478297</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47478297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47478297</guid></item><item><title><![CDATA[New comment by kllrnohj in "Java is fast, code might not be"]]></title><description><![CDATA[
<p>javac, for better or worse, is aggressively against doing optimizations, to the point of producing ridiculously bad code. The belief tends to be that the JIT will do a better job fixing it up if it gets bytecode that's as close as possible to the original source. But this only helps if a) the code ever gets JIT'd at all (rarely true for, eg, class initializers), and b) the JIT has the budget to do that optimization. Although JITs have the advantage of runtime information, they're also under immense pressure to produce optimized code as fast as possible, so they rarely do the deep optimization an offline compiler does.</p>
]]></description><pubDate>Fri, 20 Mar 2026 16:35:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47457016</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47457016</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47457016</guid></item><item><title><![CDATA[New comment by kllrnohj in "Google details new 24-hour process to sideload unverified Android apps"]]></title><description><![CDATA[
<p>The problem is it's quite easy to poke holes in a sandbox when you're outside the sandbox looking in, especially when the user is granting you special permissions they don't understand. These apps aren't doing things like manipulating the heap of the banking app, they are instead just taking advantage of useful but powerful features like screen mirroring to read what the app is rendering.</p>
]]></description><pubDate>Thu, 19 Mar 2026 22:38:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47447389</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47447389</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47447389</guid></item><item><title><![CDATA[New comment by kllrnohj in "Nvidia Launches Vera CPU, Purpose-Built for Agentic AI"]]></title><description><![CDATA[
<p>I purposely picked a CPU with the same thread geometry as your 9900K to avoid calls of "apples & oranges" or whatever. If you want more threads, the 9950X is right there in the same socket. Or the Core Ultra 9 285K. Either of which will run circles around a 9900K in code compilation.<p>You can research the microarchitecture differences if you want, it's a fascinating world, or you can just skip to looking at benchmarks/reviews. It's a little hard to compare across quite that large a generation gap, but eg <a href="https://gamersnexus.net/cpus/rip-intel-amd-ryzen-7-9800x3d-cpu-review-benchmarks-vs-7800x3d-285k-14900k-more#9800x3d-production-benchmarks" rel="nofollow">https://gamersnexus.net/cpus/rip-intel-amd-ryzen-7-9800x3d-c...</a> or <a href="https://www.phoronix.com/review/amd-ryzen-7-9800x3d-linux/2" rel="nofollow">https://www.phoronix.com/review/amd-ryzen-7-9800x3d-linux/2</a></p>
]]></description><pubDate>Mon, 16 Mar 2026 23:44:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47406633</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47406633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47406633</guid></item><item><title><![CDATA[New comment by kllrnohj in "Nvidia Launches Vera CPU, Purpose-Built for Agentic AI"]]></title><description><![CDATA[
<p>Your 9900K at 5 GHz does get work done slower than a Ryzen 9800X3D at 5 GHz. A <i>lot</i> slower (~1700 single-core Geekbench vs. ~3300, and just about any benchmark will tell the same story). Clock speed alone doesn't mean anything.</p>
]]></description><pubDate>Mon, 16 Mar 2026 21:31:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47405193</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47405193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47405193</guid></item><item><title><![CDATA[New comment by kllrnohj in "Nvidia Launches Vera CPU, Purpose-Built for Agentic AI"]]></title><description><![CDATA[
<p>If the M5 has 9-18 cores and takes ~20 W, that's ~1-2 W per CPU core. If these are 200-300 W with ~100-200 CPU cores, then guess what? That's also ~1-2 W per CPU core.<p>Xeons, Epycs, whatever this is - they're all also typically optimized for power efficiency. That's how they fit so many CPU cores in 200-300 W.</p>
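The per-core arithmetic spelled out, using the same ballpark package-power and core-count figures (which are rough assumptions about the respective parts, not spec-sheet numbers):

```python
# Watts per core at both ends of the spectrum, using rough ballpark figures.
def watts_per_core(package_watts, cores):
    return package_watts / cores

laptop = (watts_per_core(20, 18), watts_per_core(20, 9))       # ~1.1 to ~2.2 W
server = (watts_per_core(200, 200), watts_per_core(300, 100))  # 1.0 to 3.0 W

# Both parts land in roughly the same low-single-digit W/core band.
print(laptop, server)
```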
]]></description><pubDate>Mon, 16 Mar 2026 21:22:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47405072</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47405072</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47405072</guid></item><item><title><![CDATA[New comment by kllrnohj in "The MacBook Neo"]]></title><description><![CDATA[
<p>> you swapped your whole system memory in 3 seconds, which is impressive.<p>As a user, a 3 second hang is unusable. Also, critically, swap eats into the drive's write endurance. Since the Neo's drive isn't user-replaceable, a 3-5 year lifespan before it dies is actually a non-trivial compromise, although time will tell on that one I suppose.</p>
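The endurance concern is easy to ballpark. Every number below is a made-up assumption for illustration (a hypothetical TBW rating, RAM size, and swap frequency), not a MacBook Neo spec:

```python
# Hypothetical endurance ballpark: a soldered SSD under heavy swap traffic.
# TBW rating, RAM size, and swap frequency are all illustrative assumptions.
def years_until_tbw(tbw_tb, ram_gb, full_swaps_per_day):
    gb_per_day = ram_gb * full_swaps_per_day  # full RAM image rewritten daily
    days = (tbw_tb * 1000) / gb_per_day       # days to exhaust the rating
    return days / 365

# e.g. a 600 TBW drive, 16 GB RAM swapped out ~50x/day under memory pressure
print(round(years_until_tbw(600, 16, 50), 1))  # about 2 years
```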
]]></description><pubDate>Thu, 12 Mar 2026 15:25:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47352114</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47352114</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47352114</guid></item><item><title><![CDATA[New comment by kllrnohj in "The MacBook Neo"]]></title><description><![CDATA[
<p>The SSD in the Neo only manages around 1,500 MB/s in sequential benchmarks; it's not an impressive drive.<p>> It's shocking but pretty much all laptops from Dell, HP, etc come with some form of eMMC storage.<p>I just went to Dell's website and picked a random $400 laptop, and it had an NVMe SSD. The $650 Dell 14 Essential is also NVMe. Both are M.2, so they're easily upgraded, replaced, or have data recovery done on them. The only eMMC options I'm seeing are the $300 Chromebooks? Which is nowhere close to "pretty much all laptops." In fact, it'd be "pretty much none of the laptops."</p>
]]></description><pubDate>Thu, 12 Mar 2026 13:34:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47350333</link><dc:creator>kllrnohj</dc:creator><comments>https://news.ycombinator.com/item?id=47350333</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47350333</guid></item></channel></rss>