<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: pornel</title><link>https://news.ycombinator.com/user?id=pornel</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 14:41:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=pornel" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by pornel in "Britain today generating 90%+ of electricity from renewables"]]></title><description><![CDATA[
<p>Energy providers use whatever internet API is available for either the chargers or the cars.<p>Octopus UK has a list of charger models and car brands they support for their special tariff with cheap off-peak EV charging.<p>The charging cable has a protocol for negotiating power, so either side can pause and restart charging.</p>
]]></description><pubDate>Sat, 28 Mar 2026 15:30:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47555495</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47555495</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47555495</guid></item><item><title><![CDATA[New comment by pornel in "How BYD got EV chargers to work almost as fast as gas pumps"]]></title><description><![CDATA[
<p>For me, EVs already won when charging got down to 20 minutes.<p>EVs charge unattended. It takes <i>less</i> of my own time to leave an EV plugged in, parked next to a place I want to be, than to drive to a gas station and stand there holding a smelly nozzle.</p>
]]></description><pubDate>Sun, 22 Mar 2026 12:09:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47476679</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47476679</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47476679</guid></item><item><title><![CDATA[New comment by pornel in "How BYD got EV chargers to work almost as fast as gas pumps"]]></title><description><![CDATA[
<p>This is what everyone is already doing, even for relatively small and slow dispensers.<p>It's simply cheaper to have on-site batteries. They let an installation work with a smaller connection to the grid, and make it possible to install chargers in more places without upgrading the grid connection.<p>Energy arbitrage is profitable on its own, so EV charging stations are almost just an excuse to get some land and a grid connection for more batteries.</p>
]]></description><pubDate>Sun, 22 Mar 2026 12:01:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47476628</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47476628</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47476628</guid></item><item><title><![CDATA[New comment by pornel in "How BYD got EV chargers to work almost as fast as gas pumps"]]></title><description><![CDATA[
<p>The e-GMP platform (IONIQ 5, etc.) is really good for road trips.<p>At least in summer temperatures it reliably charges to 80% in under 20 minutes. Its range estimate is quite good, and I can depend on it to know when I can skip a charging stop (when I first drove an EV, I was freaking out about a 20% state of charge like it was a cellphone; now I roll up to the chargers with 2% left when it saves time).<p>It depends on where you live, but the infrastructure in the UK and EU has gotten good enough that I don't need backup plans. Chargers are as common as McDonald's (often quite literally). If a station is slow or busy, I can just go to the next one (and they come in clumps often enough that even with a low battery it's not a big deal).</p>
]]></description><pubDate>Sun, 22 Mar 2026 11:40:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47476485</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47476485</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47476485</guid></item><item><title><![CDATA[New comment by pornel in "JPEG Compression"]]></title><description><![CDATA[
<p>It is supported in JPEG! It can reduce color bleeding along the axis that each chroma channel controls.<p>For example, if you keep Cr sharp and make Cb low-res, you'll get sharper red edges with some yellow bleeding, instead of the completely blurry red edges you'd get if Cr were subsampled too.</p>
]]></description><pubDate>Wed, 18 Mar 2026 14:24:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47426187</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47426187</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47426187</guid></item><item><title><![CDATA[New comment by pornel in "JPEG Compression"]]></title><description><![CDATA[
<p>Everything after JPEG is still fundamentally the same, but individual parts of the algorithm are supercharged.<p>JPEG has 8x8 blocks; modern codecs have variable-sized blocks from 4x4 to 128x128.<p>JPEG has RLE+Huffman; modern codecs have context-adaptive variations of arithmetic coding.<p>JPEG has a single quality scale for the whole image; modern codecs allow quality to be tweaked in different areas of the image.<p>JPEG applies block coefficients on top of a single flat color per block (the DC coefficient); modern codecs use a "prediction" made by smearing the previous couple of blocks into the starting point.<p>They're JPEGs with more of everything.</p>
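To make the per-block stage being compared here concrete, a naive sketch (my own illustration, not taken from any particular codec) of JPEG's 8x8 transform: a 2-D DCT-II turns the block into frequency coefficients, and one uniform quantizer step is where a "single quality scale" gets applied:

```rust
const N: usize = 8;

/// Naive O(N^4) 2-D DCT-II over one 8x8 block, as defined in the JPEG
/// standard: F(u,v) = 1/4 * C(u) * C(v) * sum of samples weighted by
/// cosines, with C(0) = 1/sqrt(2).
fn dct_2d(block: [[f64; N]; N]) -> [[f64; N]; N] {
    use std::f64::consts::PI;
    let mut out = [[0.0; N]; N];
    for u in 0..N {
        for v in 0..N {
            let mut sum = 0.0;
            for x in 0..N {
                for y in 0..N {
                    sum += block[x][y]
                        * ((2 * x + 1) as f64 * u as f64 * PI / (2.0 * N as f64)).cos()
                        * ((2 * y + 1) as f64 * v as f64 * PI / (2.0 * N as f64)).cos();
                }
            }
            let cu = if u == 0 { 0.5f64.sqrt() } else { 1.0 };
            let cv = if v == 0 { 0.5f64.sqrt() } else { 1.0 };
            out[u][v] = 0.25 * cu * cv * sum;
        }
    }
    out
}

/// Uniform quantization: dividing every coefficient by one step size is
/// the lossy knob; small high-frequency coefficients round to zero.
fn quantize(coeffs: [[f64; N]; N], step: f64) -> [[i32; N]; N] {
    let mut out = [[0; N]; N];
    for u in 0..N {
        for v in 0..N {
            out[u][v] = (coeffs[u][v] / step).round() as i32;
        }
    }
    out
}
```

For a flat block, everything lands in the DC coefficient and the rest quantize to zero; real JPEG then applies a per-frequency quantization table rather than the single step size used in this sketch.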
]]></description><pubDate>Wed, 18 Mar 2026 14:16:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47426093</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47426093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47426093</guid></item><item><title><![CDATA[New comment by pornel in "JPEG Compression"]]></title><description><![CDATA[
<p>It's a perfectly pragmatic engineering choice. Blocking is visible only when the compression is too heavy. When degradation is imperceptible, the block edges are imperceptible too, and the problem doesn't need to be solved (in JPEG, imperceptible still means a 10:1 data size reduction).<p>Later compression algorithms were focused on video, where the aim was to have good-enough low-quality approximations.<p>Deblocking is an inelegant hack.<p>Deblocking <i>hurts</i> high-quality compression of still images, because it makes it harder for codecs to precisely reproduce the original image. The blurring removes details that the blocks produced, so the codec has to either disable deblocking or compensate with exaggerated contrast (which is still an approximation). It also adds a dependency across blocks, which complicates the problem from independent per-block computation to finding a global optimum across passes that flip between the frequency domain and pixel-space hacks. It's no longer a neat mathematical transform with a closed-form solution, but a pile of iterative guesswork (or it's not taken into account at all, and the codec wins benchmarks on PSNR and looks good in side-by-side comparisons at the 10% quality level, but becomes an auto-airbrushing, texture-destroying annoyance when used on real images).<p>The Daala project tried to reinvent it with better mathematical foundations (lapped transforms), but in the end a post-processing pass of blurring the pixels has won.</p>
]]></description><pubDate>Wed, 18 Mar 2026 14:02:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47425932</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47425932</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47425932</guid></item><item><title><![CDATA[New comment by pornel in "Are LLM merge rates not getting better?"]]></title><description><![CDATA[
<p>Gell-Mann Amnesia for code quality.</p>
]]></description><pubDate>Sat, 14 Mar 2026 17:34:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47379017</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47379017</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47379017</guid></item><item><title><![CDATA[New comment by pornel in "My “grand vision” for Rust"]]></title><description><![CDATA[
<p>It's hard to see the features through the programming-language-theory jargon, but solid theoretical foundations have worked well for Rust so far.<p>Jargon terms like "sum types" or "affine types" may seem complicated, but when you see that a sum type is actually just "an enum with data fields", it makes so much sense, and prevents plenty of state-related bugs.<p>The proposed "effects" mean that when you're writing an iterator or a stream and need to handle an error or await somewhere in the chain, you won't suddenly face the puzzle of replacing all of the functions in the entire chain and your call stack with their async or fallible equivalents.<p>"Linear types" mean that Rust will be able to have more control over the destruction and lifetime of objects beyond the sync call stack, so tokio::spawn() (the "Rust async sucks" function) won't have to complain endlessly about lifetimes whenever you use a local variable.<p>I can't vouch for the specifics of the proposed features (they have tricky-to-design details), but this isn't simply Rust getting more complex; it's Rust trying to solve and <i>simplify</i> more problems with robust, generalizable language features rather than ad-hoc special cases. When it works, it makes the language more uniform overall and gives a lot of bang for the buck in terms of complexity vs. problems solved.</p>
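To make the "enum with data fields" point concrete, a minimal sketch (my own hypothetical example, not from the post): each state carries only the data that exists in that state, so invalid combinations are unrepresentable, and the compiler checks that every state is handled:

```rust
// Hypothetical connection type: a session id can only exist in the
// Connected state, so "disconnected but holding a session" is
// impossible to even construct.
enum Connection {
    Disconnected,
    Connecting { attempt: u32 },
    Connected { session_id: u64 },
}

fn describe(conn: &Connection) -> String {
    // `match` must cover every variant; adding a new state later
    // becomes a compile error everywhere it isn't handled yet.
    match conn {
        Connection::Disconnected => "offline".to_string(),
        Connection::Connecting { attempt } => format!("connecting, attempt {attempt}"),
        Connection::Connected { session_id } => format!("session {session_id}"),
    }
}
```

The same states modeled as a struct of optional fields would allow nonsense combinations that this enum rules out at compile time.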
]]></description><pubDate>Mon, 09 Mar 2026 02:40:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304273</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47304273</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304273</guid></item><item><title><![CDATA[New comment by pornel in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>My comment was a summary of the situation, not the literal prompts I use. I absolutely realize the work needs to be adequately described and agents must be steered in the right direction. The results also vary greatly depending on the task and the model, so devs see different rates of success.<p>On non-trivial tasks (like adding a new index type to a db engine, not oneshotting a landing page), I find that the time and effort required to guide an LLM and review its work can exceed the effort of implementing the code myself. Figuring out exactly what to do and how to do it is the hard part of the task. I don't find LLMs helpful in that phase: their assessments and plans are shallow and naive. They can create todo lists that seemingly check off every box, but miss the forest for the trees (and it's extra work for me to spot these problems).<p>Sometimes the obvious algorithm isn't the right one, or it turns out that the requirements were wrong. When I implement it myself, I have all the details in my head, so I can discover dead ends and immediately backtrack. But when an LLM is doing the implementation, it takes much more time to spot problems in the mountains of code, and even more effort to tell whether it's genuinely a wrong approach or merely poor execution.<p>If I feed it what I know before solving the problem myself, I just won't know all the gotchas yet. I can research the problem and think about it really hard in detail to give bulletproof guidance, but that's just programming without the typing.<p>And that's when the models actually behave sensibly. A lot of the time they go off the rails, and I feel like a babysitter instructing them "no, don't eat the crayons!", and it's my skill issue for not knowing I must have "NO eating crayons" in AGENTS.md.</p>
]]></description><pubDate>Sat, 07 Mar 2026 13:33:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47287497</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47287497</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47287497</guid></item><item><title><![CDATA[New comment by pornel in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>When a new technology emerges, we typically see some people who embrace it and "figure it out".<p>Electronic synthesisers went from "it's a piano, but expensive and sounds worse" to every weird preset creating a whole new genre of electronic music.<p>So it seems plausible that, with the likes of Claude Code, our complaints about unmaintainable code come from trying to use it like a piano, and the rave kids will find a better use for it.</p>
]]></description><pubDate>Sat, 07 Mar 2026 02:54:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47283989</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47283989</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47283989</guid></item><item><title><![CDATA[New comment by pornel in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>Their default solution is to keep digging. It has a compounding effect of generating more and more code.<p>If they implement something with a not-so-great approach, they'll keep adding workarounds or redundant code every time they run into limitations later.<p>If you tell them the code is slow, they'll try to add optimized fast paths (more code), specialized routines (more code), custom data structures (even more code). And then add fractally more code to patch up all the problems that code has created.<p>If you complain it's buggy, you can have 10 bespoke tests for every bug. Plus a new mocking framework created every time the last one turns out to be unfit for purpose.<p>If you ask to unify the duplication, it'll say "No problem, here's a brand new metamock abstract adapter framework that has a superset of all feature sets, plus two new metamock drivers for the older and the newer code! Let me know if you want me to write tests for the new adapters."</p>
]]></description><pubDate>Sat, 07 Mar 2026 02:25:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47283819</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47283819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47283819</guid></item><item><title><![CDATA[New comment by pornel in "Relicensing with AI-Assisted Rewrite"]]></title><description><![CDATA[
<p>Laws don't have to treat humans and machines equally. They can be "unfairly" biased for humans.<p>People have needs like "freedom of artistic expression" that we don't need to grant to machines.<p>Machines can operate at speeds and scales way beyond human abilities, so they can potentially create much more damage.<p>We can ban air pollution from machines without making it illegal to fart.</p>
]]></description><pubDate>Sat, 07 Mar 2026 02:03:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47283667</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47283667</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47283667</guid></item><item><title><![CDATA[New comment by pornel in "Relicensing with AI-Assisted Rewrite"]]></title><description><![CDATA[
<p>Generative AI changed the equation so much that our existing copyright laws are simply out of date.<p>Even copyright laws with provisions for machine learning were written when that meant tangential things like ranking algorithms, or the training of task-specific models that couldn't directly compete with all of their source material.<p>For code, it also completely changes where the human-provided value is. Copyright protects specific expressions of an idea, but we can auto-generate the expressions now (and the LLM indirection messes up what "derived work" means). Protecting the ideas that guided the generation process is a much harder problem (we have patents for that, and it's a mess).<p>It's also a strategic problem for GNU. GNU's goal isn't licensing per se, but giving users the freedom to control their software. Licensing was just a clever tool that repurposed copyright law to make the freedoms GNU wanted somewhat legally enforceable. Now that it's so easy to launder a codebase's license, it stops being an effective tool.<p>GNU's licensing strategy also depended on a scarcity of code (contribute to GCC, because writing a whole compiler from scratch is too hard). That hasn't worked well for a while, because permissive OSS already reduced the scarcity, but gen AI is the final nail in the coffin.</p>
]]></description><pubDate>Thu, 05 Mar 2026 13:00:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47261107</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47261107</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47261107</guid></item><item><title><![CDATA[New comment by pornel in "New iPad Air, powered by M4"]]></title><description><![CDATA[
<p>The complaint isn't that the iPad is useless, but that it would be equally useful to nearly every happy iPad user if it had a CPU a few generations older.<p>The iPad works for lots of people, but the things the iPad is <i>best</i> for don't really need a powerful CPU.<p>There are a few "Pro" apps you can run to prove that it's possible to run them (minus plugins, OS-level helper apps, extra hardware, background processing that doesn't randomly die, scripting more fine-grained than Shortcuts, a competent file browser, etc.), but you max out the CPU for a few minutes and go back to a MacBook for real work.</p>
]]></description><pubDate>Tue, 03 Mar 2026 04:37:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47228113</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47228113</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47228113</guid></item><item><title><![CDATA[New comment by pornel in "New iPad Air, powered by M4"]]></title><description><![CDATA[
<p>It's just such a shame that really good CPUs are wasted on an OS that still acts as if it had the hardware limitations of the 1st-gen iPad.<p>Whenever I need to get anything done on iPadOS, I feel like I'm wearing boxing gloves.<p>The device's speed is limited by fiddly animations and DUPLO-sized siloed applications.<p>Its multitasking power is capped in software.</p>
]]></description><pubDate>Tue, 03 Mar 2026 04:24:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47228021</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47228021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47228021</guid></item><item><title><![CDATA[New comment by pornel in "I built a demo of what AI chat will look like when it's “free” and ad-supported"]]></title><description><![CDATA[
<p>If you want to see the future, check how LLMs keep eagerly recommending the JR Japan Rail Pass to tourists.<p>It <i>used to</i> be a very good deal, so LLMs got trained on lots of organic recommendations. Nowadays, however, the pass is much more expensive and rarely breaks even, but LLMs keep mentioning it as a must-have whenever travel in Japan is discussed.</p>
]]></description><pubDate>Sun, 01 Mar 2026 13:50:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47206674</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47206674</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47206674</guid></item><item><title><![CDATA[New comment by pornel in "A better streams API is possible for JavaScript"]]></title><description><![CDATA[
<p>In Rust, a Future can have exactly one listener awaiting it, which means it doesn't need dynamic allocation and looping over an arbitrary number of .then() callbacks. This allows a chain of `.await`ed futures to be merged into a single state machine. You could get away with awaiting even on every byte.</p>
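A small sketch of what this buys (my own illustration, not from the comment): chained `.await`s inside one async block compile into a single state machine, an ordinary inline value, with nothing heap-allocated per `.then()` step:

```rust
// A hypothetical async step (my example); awaiting it twice below is
// compiled into ONE state machine for the whole chain, not a linked
// list of heap-allocated callback nodes.
async fn step(x: u32) -> u32 {
    x + 1
}

// The returned future is a plain value holding the entire chain's
// state inline; nothing runs until an executor polls it.
fn chained() -> impl std::future::Future<Output = u32> {
    async {
        let a = step(1).await;
        let b = step(a).await;
        b
    }
}
```

Inspecting `std::mem::size_of_val(&chained())` shows the whole chain is one small stack value; an executor (e.g. `futures::executor::block_on`) then drives that single state machine to completion.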
]]></description><pubDate>Sat, 28 Feb 2026 02:18:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47189267</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47189267</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47189267</guid></item><item><title><![CDATA[New comment by pornel in "I am directing the Department of War to designate Anthropic a supply-chain risk"]]></title><description><![CDATA[
<p>> <i>Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect</i><p>For this administration the law isn't something that binds them, but something they can use against others.</p>
]]></description><pubDate>Sat, 28 Feb 2026 01:34:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47188829</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47188829</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47188829</guid></item><item><title><![CDATA[New comment by pornel in "Jane Street faces claims of insider trading that sped up Terraform's collapse"]]></title><description><![CDATA[
<p>If one trade can crash a "stable" coin down to zero, it only proves it was a Ponzi scheme like the rest of them.</p>
]]></description><pubDate>Wed, 25 Feb 2026 23:42:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47159693</link><dc:creator>pornel</dc:creator><comments>https://news.ycombinator.com/item?id=47159693</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47159693</guid></item></channel></rss>