<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: treffer</title><link>https://news.ycombinator.com/user?id=treffer</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 11:59:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=treffer" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by treffer in "Light Mode InFFFFFFlation"]]></title><description><![CDATA[
<p>Or switch to HDR if you have a capable display.<p>I was pleasantly surprised that HDR also means you can control brightness - it is all software in that case!<p>And the brightness keys on an external Apple keyboard work.</p>
]]></description><pubDate>Sun, 18 Jan 2026 07:29:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46665583</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=46665583</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46665583</guid></item><item><title><![CDATA[New comment by treffer in "Pebble Round 2"]]></title><description><![CDATA[
<p>Oh great, I missed that at the end of the page. The "Round 2 details" link points back to the blog, and it is hard to see the FAQ on mobile (it needs manual scrolling to the end).<p>Google search and Perplexity failed when I tried, too. Google search has caught up now (I haven't retried Perplexity).<p>A 41.5mm diameter sounds good. That's a whopping 10mm/20% smaller than my current watch. Should be really neat given the thickness.</p>
]]></description><pubDate>Fri, 02 Jan 2026 22:58:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46470554</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=46470554</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46470554</guid></item><item><title><![CDATA[New comment by treffer in "Pebble Round 2"]]></title><description><![CDATA[
<p>I pre-ordered because I loved the Pebble Round - especially the size and look. My intended use case is formal dress codes and special events (weddings, new years, ...) where my Fenix 51mm does not fit in (literally and figuratively).<p>That said: I can't find full dimensions for the new Round 2. I can guesstimate that it should be 10-20% smaller in diameter and less than 2/3 the thickness.<p>Would you mind sharing full dimensions or even updating the post?<p>And congratulations! I really like this. I hope there will be enough of a market to support this project long term.</p>
]]></description><pubDate>Fri, 02 Jan 2026 20:54:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46469230</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=46469230</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46469230</guid></item><item><title><![CDATA[New comment by treffer in "Unlocking free WiFi on British Airways"]]></title><description><![CDATA[
<p>I had 8 IPs on a Hetzner server years ago. One IP had an iptables rule to accept OpenVPN on any port.<p>My OpenVPN config was a long list of commonly accepted ports on either TCP or UDP.<p>Startup would take a while, but the number of times it worked was amazing.</p>
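<p>A minimal sketch of generating such a config in Python - the host and the port list are hypothetical examples; "remote host port proto" is the standard OpenVPN client directive for fallback endpoints:</p><pre><code>
# Emit one 'remote' line per commonly open port; the OpenVPN client
# tries them in order until one connects. On the server side this is
# paired with a NAT REDIRECT rule sending all ports to the real one.
HOST = "vpn.example.org"   # hypothetical server
COMMON_PORTS = [53, 80, 123, 443, 500, 993, 1194, 3389, 8080]

lines = [f"remote {HOST} {port} {proto}"
         for proto in ("udp", "tcp")
         for port in COMMON_PORTS]

with open("client-remotes.conf", "w") as f:
    f.write("\n".join(lines) + "\n")
</code></pre>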
]]></description><pubDate>Sat, 25 Oct 2025 09:14:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45702433</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=45702433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45702433</guid></item><item><title><![CDATA[New comment by treffer in "Patina: a Rust implementation of UEFI firmware"]]></title><description><![CDATA[
<p>Interesting. But who is OpenDevicePartnership?<p>Looking at the members on the repository, this seems to be a Microsoft project?</p>
]]></description><pubDate>Sat, 11 Oct 2025 10:25:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45547992</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=45547992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45547992</guid></item><item><title><![CDATA[New comment by treffer in "European Union Public Licence (EUPL)"]]></title><description><![CDATA[
<p>Compatibility as I understand it means "mixing this in a project is OK".<p>This is the case if the two licenses aren't at odds. Usually one license is stricter and you have to adhere to that one for the combined work.<p>A counter-example is GPLv2 and the Apache License 2.0. Those two are incompatible. This was fixed with GPLv3, and you can often upgrade to GPLv3.<p>So no, this won't allow you to relicense as GPLv2. But you can use GPLv2 code.<p>This is especially relevant if you have code redistribution clauses.</p>
]]></description><pubDate>Tue, 30 Sep 2025 08:39:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45423292</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=45423292</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45423292</guid></item><item><title><![CDATA[New comment by treffer in "FreeBSD Suspend/Resume"]]></title><description><![CDATA[
<p>I tried a few times, as some BIOSes have a hidden or disabled setting, but I never got past a plain crash. Device and CPU vendor support for classic S3 is shrinking. E.g. on Framework laptops the Intel CPU(!) does not officially support S3 sleep.<p>So I can understand that there is no option for it if all you can get is out-of-spec behavior and crashes.<p>Also note that it is incompatible with some secure boot and system integrity settings.</p>
]]></description><pubDate>Tue, 14 Jan 2025 08:31:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=42695033</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=42695033</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42695033</guid></item><item><title><![CDATA[New comment by treffer in "Radxa Orion O6 Mini-ITX Arm Motherboard with Cix P1 12-Core ARMv9 SoC and UEFI"]]></title><description><![CDATA[
<p>The product page lists EDK II. Is the code available anywhere? I can't see it in edk2-platforms...<p>I would love to have a UEFI I can compile...</p>
]]></description><pubDate>Wed, 18 Dec 2024 18:26:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=42453304</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=42453304</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42453304</guid></item><item><title><![CDATA[New comment by treffer in "HMD Barbie Phone"]]></title><description><![CDATA[
<p>I guess the reason is the screen. It's 320x240, and 0.3MP is 640x480 (VGA). The secondary screen is even lower resolution (160x120).<p>It does work very well for this screen resolution.<p>And what else would you do with this media, given it's a feature phone?</p>
]]></description><pubDate>Sun, 01 Sep 2024 12:45:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=41416559</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=41416559</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41416559</guid></item><item><title><![CDATA[New comment by treffer in "Modern EV Batteries Rarely Fail: Study"]]></title><description><![CDATA[
<p>Is it? The article lists 2015 as the year when things improved a lot; 2017 is well past that. The numbers are low, and even that's inflated due to recalls.<p>I've seen >>10 year old laptops where the battery is still good enough to go from charger to charger. Just go to eBay and check out 2009 MacBooks. That's ~15 years now.<p>I don't think this is unrealistic if you can live with the heavier degradation.</p>
]]></description><pubDate>Thu, 25 Apr 2024 06:39:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=40154219</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=40154219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40154219</guid></item><item><title><![CDATA[New comment by treffer in "Sysadmin friendly high speed Ethernet switching"]]></title><description><![CDATA[
<p>It just depends on what you use for management.<p>IIRC a reconfiguration through /etc/network/interfaces is pretty disruptive.<p>Things like brctl and ethtool worked on the fly without issues (note though that I mostly used Arista years ago).<p>It is usually non-disruptive if it gets applied as deltas. If your config tool does a teardown/recreate, then that's disruptive - and even deltas are only non-disruptive within the bounds of Ethernet and the routing protocols (OSPF DR/BDR changes are disruptive, STP can be fun, ...).</p>
]]></description><pubDate>Wed, 24 Apr 2024 15:10:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=40145293</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=40145293</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40145293</guid></item><item><title><![CDATA[New comment by treffer in "Ask HN: Please recommend how to manage personal serverss"]]></title><description><![CDATA[
<p>Depends on what you are doing. But you can take the path of app/OS images.<p>My home network is just OpenWrt, and I use make plus a few scripts and the ImageBuilder to create images that I flash, including configs.<p>For the Raspberry Pi I actually liked cloud-init, but it is too flaky for complicated stuff. In that case I would nowadays rather dockerize it and use systemd + podman or a kubelet in standalone mode. Secrets on a mount point. Make it so that the boot partition of the Pi is the main config folder. That way you can locally flash a golden image.<p>Anything that mutates a server is brittle, as the starting point is a moving target. Building images (or fancy tarballs, like docker images) makes it way more likely that you get consistent results.</p>
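<p>A minimal sketch of that image step, driving the OpenWrt ImageBuilder from Python - the profile, package list and directory names here are hypothetical, but "make image PROFILE=... PACKAGES=... FILES=..." is the ImageBuilder's documented interface:</p><pre><code>
import subprocess

PROFILE = "glinet_gl-ar750"            # assumption: your device's ImageBuilder profile
PACKAGES = "luci -ppp -ppp-mod-pppoe"  # extra packages; a leading '-' drops defaults
FILES = "files/"                       # overlay tree with baked-in configs (etc/config/...)

# The ImageBuilder ships pre-compiled packages; 'make image' only assembles
# them plus the FILES overlay into a flashable firmware image.
subprocess.run(
    ["make", "image", f"PROFILE={PROFILE}", f"PACKAGES={PACKAGES}", f"FILES={FILES}"],
    cwd="openwrt-imagebuilder",        # assumption: the unpacked ImageBuilder directory
    check=True,
)
</code></pre>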
]]></description><pubDate>Sat, 20 Apr 2024 20:35:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=40100808</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=40100808</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40100808</guid></item><item><title><![CDATA[New comment by treffer in "Bzip2 Format Specification (2016) [pdf]"]]></title><description><![CDATA[
<p>The issue talks about one vs. multiple frames. That's exactly the issue. It's not a matter of complexity, it's a matter of bad compromises.<p>The issue can easily be played through. The most simplistic encoding where it happens is RLE (run-length encoding).<p>Say we have 1MB of repeated 'a'. Originally 'aaa....a'. We now encode it as '(length,byte)', so the stream turns into (1048576,'a').<p>Now we want to parallelize it over 16 cores. So we split the 1MB into 16 64k chunks and compress each chunk independently. This works but is ~16x larger.<p>Similar things happen for window-based algorithms. We encode repeated content as (offset,length), referencing older occurrences. Now imagine 64k of random data, repeated 16 times. The parallel version can't compress anything (16x random data), the non-parallel version will compress it roughly 16:1.<p>There is a trick to avoid this downside. The lookup is not unlimited; there is a maximum window size to limit memory usage. For compatibility it's 8MB for zstd (at level 19), but you can go all the way to 2GB (ultra, 22, long=31).
As you make chunks significantly larger than the window, you only lose out on the ramp-up at each chunk start. E.g. if you use 80MB chunks with an 8MB window then a bit less than 10% of the file is encoded worse. You could still double your encoded size with a well-crafted file.
If you don't care about parallel decompression then you can still parallelize parts like the match search. This gives a good speedup, but only on compression. That's the current parallel compression approach in most cases (IIRC), producing a single frame, just faster. The problem is that back-references can only be resolved once the data they point to has been decoded, so decompression stays sequential.<p>The whole problem is not implementation complexity. It's something you algorithmically can't do with current window-based approaches without significant tradeoffs between memory consumption, compression ratio and parallel execution.<p>For bzip2 the file is always chunked at 900kB boundaries at most. Each block is encoded independently and can be decoded independently. It avoids this whole tradeoff issue altogether.<p>I would also disagree with "no need". Zstd easily outperforms tar, but even my laptop SSD is faster than the zstd speed limits. I just don't have the _external_ connectivity to get something onto my disk fast enough. I've also worked with servers 10 years ago where the PCIe bus to the RAID card was the limiting factor. Again, easily exceeding the speed limits.<p>Anyway, as mentioned a few times, it's an odd corner case. And one can't go wrong by choosing zstd for compression. But it is real fun to dig into these issues - I hope this sparks some interest!</p>
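<p>A minimal sketch of the RLE thought experiment above in Python (a toy codec purely for illustration, not a real compressor):</p><pre><code>
from itertools import groupby

def rle_encode(data: bytes):
    # one (run length, byte value) pair per run of equal bytes
    return [(len(list(g)), b) for b, g in groupby(data)]

data = b"a" * 1_048_576            # 1MB of repeated 'a'

whole = rle_encode(data)           # encode as a single stream
chunked = [pair
           for k in range(16)      # 16 independent 64k chunks
           for pair in rle_encode(data[k * 65_536:(k + 1) * 65_536])]

print(len(whole), len(chunked))    # 1 vs 16 pairs: the chunked output is ~16x larger
</code></pre>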
]]></description><pubDate>Wed, 10 Apr 2024 15:23:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=39991743</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=39991743</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39991743</guid></item><item><title><![CDATA[New comment by treffer in "Bzip2 Format Specification (2016) [pdf]"]]></title><description><![CDATA[
<p>There is one thing you can't do with most algorithms: parallelize decompression. That's because most compression algorithms use sliding windows to remove repetitive sections.<p>And decompression speed also drops as the compression ratio increases.<p>If you transfer over, say, a 1GBit link then transfer speed is likely the bottleneck, as zstd decompression can reach >200MB/s. However, if you have a 10GBit link then you are CPU bound on decompression.
See e.g. the decompression speeds at [1].<p>Bzip2 is not window- but block-based (level 1 == 100kB blocks, 9 == 900kB blocks, IIRC). This means that, given enough cores, both compression and decompression can parallelize, at something like 10-20MB/s per core. So somewhere >10 cores you will start to outperform zstd.<p>Granted, that's a very, very narrow corner case. But one you might hit with servers. That's how I learned about it. But so far I've converged on zstd for everything. It is usually not worth the hassle to squeeze out these last bits of performance.<p>[1] <a href="https://gregoryszorc.com/blog/2017/03/07/better-compression-with-zstandard/" rel="nofollow">https://gregoryszorc.com/blog/2017/03/07/better-compression-...</a></p>
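<p>A minimal sketch of that block independence with Python's stdlib bz2 module - the explicit worker pool here is a stand-in for what tools like lbzip2 do internally:</p><pre><code>
import bz2
from multiprocessing import Pool

# Four independent ~900kB blocks, mirroring bzip2's level-9 block size.
blocks = [bytes([65 + i]) * 900_000 for i in range(4)]

if __name__ == "__main__":
    with Pool() as pool:
        # One block per worker; this per-block independence is what lets
        # lbzip2 fan out both compression and decompression across cores.
        compressed = pool.map(bz2.compress, blocks)
    archive = b"".join(compressed)  # concatenated streams are a valid .bz2 file
    assert bz2.decompress(archive) == b"".join(blocks)
</code></pre>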
]]></description><pubDate>Wed, 10 Apr 2024 14:06:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=39990892</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=39990892</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39990892</guid></item><item><title><![CDATA[New comment by treffer in "Bzip2 Format Specification (2016) [pdf]"]]></title><description><![CDATA[
<p>This looks really good. I remember looking into BWT as a kid. It's a true "wat" once you understand it.<p>And once you understand it, why does it compress so well? Because suffixes tend to have the same byte preceding them.<p>Bzip2 is still highly useful because it is block-based and can thus be scaled nearly linearly across CPU cores (both on compress and decompress)! Especially at higher compression levels. See e.g. lbzip2.<p>Bzip2 is still somewhat relevant if you want to max out cores. Although it has a hard time competing with zstd.</p>
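<p>A minimal sketch of the transform in Python (a naive rotation sort, fine for toy inputs; real implementations use suffix arrays):</p><pre><code>
# Naive BWT: sort all rotations of the input, keep the last column.
# Equal bytes cluster because equal suffixes share their preceding byte.
def bwt(s: str) -> str:
    s += "\0"  # sentinel so the transform stays invertible
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

print(repr(bwt("banana")))  # 'annb\x00aa' - note the 'nn' and 'aa' runs
</code></pre>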
]]></description><pubDate>Wed, 10 Apr 2024 12:37:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=39990012</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=39990012</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39990012</guid></item><item><title><![CDATA[New comment by treffer in "The Git repositories of XZ projects are available on GitHub again"]]></title><description><![CDATA[
<p>Compression algorithm implementations are not for everyone.<p>The math and algorithms behind it are fun to learn but hard. And then you need to implement it both performantly and correctly.<p>Only a few people build up the algorithmic background to do this. And once an implementation exists, the remaining gains are marginal (optimizations).<p>The only larger one seems to be zstd, and I haven't wrapped my head around ANS/tANS...</p>
]]></description><pubDate>Wed, 10 Apr 2024 12:27:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=39989929</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=39989929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39989929</guid></item><item><title><![CDATA[New comment by treffer in "The xz sshd backdoor rabbithole goes quite a bit deeper"]]></title><description><![CDATA[
<p>Well, there is a pretty logical explanation.<p>libsystemd was moving to a dlopen architecture for its dependencies.<p>This means that the backdoor would not load, as the sshd patch only used libsystemd for notify (sd_notify), which does not need liblzma at all.<p>So they IMHO gave it a last shot. It's OK if it burns, as it would be useless in 3 months (or even less).<p>The collateral is the backdoor binary, but given enough engineering power it will be irrelevant in 2-3 months, too.</p>
]]></description><pubDate>Sun, 07 Apr 2024 15:33:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=39961433</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=39961433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39961433</guid></item><item><title><![CDATA[New comment by treffer in "AMD Unveils Their Embedded+ Architecture, Ryzen Embedded with Versal Together"]]></title><description><![CDATA[
<p>I have seen this in some NAS systems. It's a pretty good fit, especially for the ones that can be upgraded to 64GB RAM and run VMs or docker.<p>Generally plenty of RAM plus many fast PCIe lanes is not something most ARM chips offer.</p>
]]></description><pubDate>Thu, 04 Apr 2024 18:43:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=39934279</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=39934279</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39934279</guid></item><item><title><![CDATA[New comment by treffer in "Documentation for the AMD 7900XTX"]]></title><description><![CDATA[
<p>One thing everyone should keep in mind about this NVIDIA/AMD battle right now: CUDA has been out for 16 years; it's been a huge push by NVIDIA into GPGPU computation. I remember seeing it as a new thing in university back then, after the advanced shaders that were only available on NVIDIA.<p>NVIDIA quite rightfully has the lead there, because they worked on and invested in it for something like 20 years (you could do pretty advanced shaders on NVIDIA pre-CUDA).<p>It only started to pay off recently, especially with the AI hype (GPU mining was nice, too).<p>Now everybody is looking at the profits and goes "OMG, I want a part of that cake!", either by competing (AMD / Intel) or by paying less for the cards (basically everyone else in the AI space).<p>But you have to catch up to 16 years of pretty solid software and ecosystem development. And that's only going to work if you have good enough hardware. NVIDIA did the hard work here. They have earned this lead.<p>I am saying this as someone who would rather not buy NVIDIA. I really wish I could soon throw 1-2 7900XTXs into a machine and use them for LLMs without issues. But I would also bet that it takes at least a few more years to catch up, even with the massive global interest.</p>
]]></description><pubDate>Mon, 01 Apr 2024 12:30:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=39893432</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=39893432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39893432</guid></item><item><title><![CDATA[New comment by treffer in "LLaMA now goes faster on CPUs"]]></title><description><![CDATA[
<p>A nice example of this is FFTW, which has hundreds (if not thousands) of generated methods to do the FFT math. The whole project is a code generator.<p>After compilation it can benchmark these, generate a wisdom file for the hardware, and pick the right implementation.<p>Compared with that, "a few" implementations of the core math kernel seem like an easy thing to do.</p>
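<p>A toy sketch of that benchmark-and-pick pattern in Python - this is not FFTW's actual API; the kernels and the "wisdom" file here are made-up stand-ins for the idea:</p><pre><code>
import json, time

# Two interchangeable "kernels" standing in for FFTW's generated codelets.
def sum_loop(xs):
    total = 0.0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    return sum(xs)

KERNELS = {"loop": sum_loop, "builtin": sum_builtin}

def plan(data, wisdom_file="wisdom.json"):
    # Benchmark every kernel once and persist the winner, wisdom-file style.
    timings = {}
    for name, fn in KERNELS.items():
        t0 = time.perf_counter()
        fn(data)
        timings[name] = time.perf_counter() - t0
    best = min(timings, key=timings.get)
    with open(wisdom_file, "w") as f:
        json.dump({"best": best}, f)
    return KERNELS[best]

fastest = plan(list(range(100_000)))  # later runs could reload wisdom.json instead
print(fastest([1.0, 2.0, 3.0]))
</code></pre>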
]]></description><pubDate>Mon, 01 Apr 2024 11:13:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=39892862</link><dc:creator>treffer</dc:creator><comments>https://news.ycombinator.com/item?id=39892862</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39892862</guid></item></channel></rss>