<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: fabian2k</title><link>https://news.ycombinator.com/user?id=fabian2k</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 13 May 2026 18:22:09 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=fabian2k" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by fabian2k in "Postmortem: TanStack NPM supply-chain compromise"]]></title><description><![CDATA[
<p>Once you run your app with the updated dependencies, that code is executed anyway. And running as root or non-root doesn't matter; the important data is accessible to the user running the application anyway.</p>
]]></description><pubDate>Mon, 11 May 2026 22:21:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=48101483</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=48101483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48101483</guid></item><item><title><![CDATA[New comment by fabian2k in "Postmortem: TanStack NPM supply-chain compromise"]]></title><description><![CDATA[
<p>At least it was only online for one to two hours, and it didn't affect react-query. But it still hit a bunch of quite well-known packages.<p>This doesn't really feel sustainable; you're rolling the dice every time you update your dependencies.</p>
]]></description><pubDate>Mon, 11 May 2026 21:58:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=48101210</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=48101210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48101210</guid></item><item><title><![CDATA[New comment by fabian2k in "Doctors Just Staged the Quietest Coup in American History"]]></title><description><![CDATA[
<p>The blog post seems overly dramatic to me and vastly exaggerates the importance of that document. A bunch of doctors voiced their concern about the president, and that concern was added to the public record. That's it.<p>We don't know if anyone would actually refuse if Trump ever gave a catastrophic military order. The odds are against it, since none of the illegal orders so far have been resisted effectively.</p>
]]></description><pubDate>Sun, 10 May 2026 11:26:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=48083001</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=48083001</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48083001</guid></item><item><title><![CDATA[New comment by fabian2k in "AWS North Virginia data center outage – recovery to take hours"]]></title><description><![CDATA[
<p>I'd expect someone like AWS to throttle machines before overloading their cooling. They probably can do that, while a data center that just rents out space can't really throttle its customers nicely.</p>
]]></description><pubDate>Fri, 08 May 2026 22:28:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=48069544</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=48069544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48069544</guid></item><item><title><![CDATA[New comment by fabian2k in "AWS North Virginia data center outage – resolved"]]></title><description><![CDATA[
<p>I thought cooling was pretty much pre-planned in any data center, and you simply don't install more equipment than you can cool?<p>So did some cooling equipment fail here, or was there an external reason for the overheating? Or does Amazon overbook the cooling in their data centers?</p>
]]></description><pubDate>Fri, 08 May 2026 21:53:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=48069238</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=48069238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48069238</guid></item><item><title><![CDATA[New comment by fabian2k in "Grand Theft Oil Futures: Insider traders keep making a killing at our expense"]]></title><description><![CDATA[
<p>It was obvious that Trump is unstable and has extreme and volatile views on foreign policy. So yes, I think it is entirely fair to blame anyone that voted for Trump in the last election.</p>
]]></description><pubDate>Thu, 07 May 2026 13:52:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=48049475</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=48049475</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48049475</guid></item><item><title><![CDATA[New comment by fabian2k in "Grand Theft Oil Futures: Insider traders keep making a killing at our expense"]]></title><description><![CDATA[
<p>And what did the attack accomplish? It degraded the Iranian military somewhat. It killed the Iranian leadership, but the odds are the replacements will be even more radical and opposed to the US.<p>The nuclear material is probably still buried in the facilities attacked in the earlier strikes (not the war this year). That is a delay to any potential nuclear weapons development, but not more than that.<p>It showed Iran and the world just how much damage Iran can cause with its control over the strait. And it removed any incentive Iran previously had not to block the strait even when attacked. In the end, the odds are that this whole mess will cause death and suffering, damage the world economy, and leave us with an even more dangerous Iran in the future.</p>
]]></description><pubDate>Thu, 07 May 2026 13:49:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=48049433</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=48049433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48049433</guid></item><item><title><![CDATA[New comment by fabian2k in "Docker 29 has changed its default image store for new installs"]]></title><description><![CDATA[
<p>> This difference is particularly noticeable with multiple images sharing the same base layers. With legacy storage drivers, shared base layers were stored once locally, and reused images that depended on them. With containerd, each image stores its own compressed version of shared layers, even though the uncompressed layers are still de-duplicated through snapshotters.<p>This seems like a really weird decision. If base images are duplicated for every image you have, that will add up quickly.</p>
]]></description><pubDate>Tue, 05 May 2026 14:04:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=48022699</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=48022699</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48022699</guid></item><item><title><![CDATA[New comment by fabian2k in "Should I Run Plain Docker Compose in Production in 2026?"]]></title><description><![CDATA[
<p>My experience with docker-compose is a bit outdated, but my impression some years ago was that it was too sensitive and fragile. I encountered bugs or incompatibilities that broke the docker-compose setup often enough that I was forced to pin specific docker and docker-compose versions.<p>And the error handling was terrible. Most of these problems resulted in a Python stack trace from some docker-compose internals instead of a readable error message. Googling the stack trace usually led to a description of the actual problem, but that's really not something that inspires confidence.</p>
]]></description><pubDate>Tue, 05 May 2026 10:46:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=48020653</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=48020653</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48020653</guid></item><item><title><![CDATA[New comment by fabian2k in "Direct electrochemical black coffee quality appraisal using cyclic voltammetry"]]></title><description><![CDATA[
<p>I have a 200 EUR hand grinder and an Aeropress. That's about the limit for me; there isn't really anything else to improve for filter coffee. There are probably a lot more ways to make different coffee, but not that much room for making the same coffee better. I also don't want to mess with the water, so that puts a ceiling on coffee quality anyway.<p>There's some legitimate room to spend much more money when making espresso. But a lot of the more expensive options are more about the workflow than quality. If you need to make many espressos in quick succession, you'll hit the limits of cheaper equipment.</p>
]]></description><pubDate>Sat, 02 May 2026 09:03:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47984706</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=47984706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47984706</guid></item><item><title><![CDATA[New comment by fabian2k in "Linux 7.0 Broke PostgreSQL: The Preemption Regression Explained"]]></title><description><![CDATA[
<p>That regression is maybe most useful as a reminder to configure huge pages for PostgreSQL. That's the one recommended basic performance tuning step that is just annoying enough to set up that I suspect many people with smaller DBs skip it.<p>Though I actually don't know how large shared_buffers has to be for huge pages to make a noticeable difference.</p>
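For anyone who wants to set it up, the basic procedure is roughly this sketch; it assumes Linux and a reasonably recent PostgreSQL (15+ for the -C query), and the page count is a placeholder you need to size for your own shared_buffers:

```shell
# Sketch of the usual huge pages setup on Linux; numbers are
# placeholders and must be sized for your shared_buffers.

# 1. Ask PostgreSQL (15+) how many huge pages it would need:
postgres -D "$PGDATA" -C shared_memory_size_in_huge_pages

# 2. Reserve that many 2 MB huge pages (plus a little headroom):
sudo sysctl -w vm.nr_hugepages=4200

# 3. In postgresql.conf, make startup fail instead of silently
#    falling back if huge pages can't be allocated:
#      huge_pages = on    # default is 'try'

# 4. Restart PostgreSQL and verify the pages are in use:
grep HugePages_ /proc/meminfo
```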
]]></description><pubDate>Wed, 29 Apr 2026 19:03:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47952849</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=47952849</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47952849</guid></item><item><title><![CDATA[New comment by fabian2k in "Coffee with a splash of physics: how to make the most out of your brew"]]></title><description><![CDATA[
<p>There are certainly people who focus too much on minor details. And you really don't need to go into four-digit spending to get very good coffee (especially if you don't want espresso). But I was a bit surprised by how much small details can matter in brewing coffee.<p>I'm using an Aeropress and a 1ZPresso hand grinder. I found that it helped a lot to do things exactly the same way each time. It reduced variability and made it easier to adjust a single parameter like grind size. In particular, I found stirring to be a really finicky parameter with a potentially large effect. If I stirred vigorously without changing anything else, the coffee got noticeably bitter. I switched to not stirring at all; the grounds are mixed plenty just by pouring the water. That makes the workflow easier and reduces variability.<p>So while I think there's plenty of ritual around coffee that has no real effect, I suspect the value lies in sticking to exactly the same method and performing all steps the same way each time.</p>
]]></description><pubDate>Wed, 29 Apr 2026 15:58:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47950219</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=47950219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47950219</guid></item><item><title><![CDATA[New comment by fabian2k in "He asked AI to count carbs 27000 times. It couldn't give the same answer twice"]]></title><description><![CDATA[
<p>The paper itself is a lot clearer about the purpose. The blog post reads very clickbaity and doesn't really explain the context well.</p>
]]></description><pubDate>Wed, 29 Apr 2026 13:16:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47947952</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=47947952</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47947952</guid></item><item><title><![CDATA[New comment by fabian2k in "He asked AI to count carbs 27000 times. It couldn't give the same answer twice"]]></title><description><![CDATA[
<p>It does sound like a pretty terrible idea to try to count carbohydrates from an image. There just isn't enough information there to do that reliably. At best you could identify the object in the image and then show reference information on typical nutrition values. But if you need anything more accurate than that, you probably have to read the labels on the ingredients and calculate.</p>
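The label-based calculation itself is simple arithmetic; a minimal sketch with made-up ingredients and label values:

```python
# Sum up carbs from per-100 g label values. The ingredient names
# and numbers here are invented for illustration.

def total_carbs_g(ingredients):
    """ingredients: list of (grams_used, carbs_per_100g) tuples."""
    return sum(grams / 100 * carbs for grams, carbs in ingredients)

meal = [
    (150, 28.0),  # cooked rice: 150 g at 28 g carbs / 100 g
    (80, 7.0),    # peas: 80 g at 7 g carbs / 100 g
]
print(round(total_carbs_g(meal), 1))  # 47.6
```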
]]></description><pubDate>Wed, 29 Apr 2026 13:06:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47947842</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=47947842</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47947842</guid></item><item><title><![CDATA[New comment by fabian2k in "Germany Overtakes US in Ammunition Production Capacity"]]></title><description><![CDATA[
<p>> But the U.S. has made it clear that it wants to concentrate on the Indo-Pacific and the threat posed by China's powerful military, rather than propping up Europe.<p>If that were true they wouldn't have wasted enormous amounts of expensive ammunition in Iran.</p>
]]></description><pubDate>Wed, 29 Apr 2026 07:40:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47945242</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=47945242</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47945242</guid></item><item><title><![CDATA[New comment by fabian2k in "Pgbackrest is no longer being maintained"]]></title><description><![CDATA[
<p>Are you using WAL archiving? As far as I understand, pgbackrest and Barman can also use direct streaming from the DB (the same mechanism as replication); I didn't find any mention of this in the WAL-G documentation.<p>With WAL archiving you need to wait for a WAL segment to finish before it's backed up. With streaming, the dead time is minimized. At least that's how I understand it; I haven't gotten to try it out in practice yet.</p>
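The streaming side can be sketched with pg_receivewal, which pulls WAL over the replication protocol instead of waiting for archive_command to ship finished 16 MB segments; the host, user, slot and directory names here are placeholders:

```shell
# One-time: create a replication slot so the server retains WAL
# until it has been streamed.
pg_receivewal --host=db.example.com --username=replicator \
    --slot=backup_slot --create-slot

# Then stream WAL continuously into a local directory; data
# arrives as it is written, not only when a segment is finished.
pg_receivewal --host=db.example.com --username=replicator \
    --slot=backup_slot --directory=/backups/wal
```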
]]></description><pubDate>Mon, 27 Apr 2026 12:33:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47920769</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=47920769</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47920769</guid></item><item><title><![CDATA[New comment by fabian2k in "Pgbackrest is no longer being maintained"]]></title><description><![CDATA[
<p>I was about to set up Postgres backups with pgbackrest very soon. It looked like the most mature solution for my use case. What I was aiming for was continuous backups to an object storage provider, with the backup tool installed directly on the Postgres server rather than on a central backup server.<p>I'll have to look at the alternatives again; I think those were mostly WAL-G and Barman. It looks like Barman doesn't support direct backup to object storage, unfortunately. And I find the WAL-G documentation very confusing. What I'm looking for is WAL streaming and object storage support, to minimize the amount of data that can be lost and so I don't have to run my own backup server.</p>
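For what it's worth, my rough understanding of the WAL-G setup from the docs looks like this sketch; the bucket name and paths are placeholders, and credentials handling is left out:

```shell
# WAL-G pushes directly to object storage, no backup server needed.
export WALG_S3_PREFIX="s3://my-backup-bucket/pg"

# In postgresql.conf, ship each finished WAL segment:
#   archive_mode = on
#   archive_command = 'wal-g wal-push %p'

# Take a periodic base backup of the data directory:
wal-g backup-push "$PGDATA"

# Restore: fetch the latest base backup, then let recovery replay
# WAL via restore_command = 'wal-g wal-fetch %f %p'.
wal-g backup-fetch /var/lib/postgresql/data LATEST
```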
]]></description><pubDate>Mon, 27 Apr 2026 11:35:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47920256</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=47920256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47920256</guid></item><item><title><![CDATA[New comment by fabian2k in "An AI agent deleted our production database. The agent's confession is below"]]></title><description><![CDATA[
<p>Especially in combination with not having scoped API keys at all: if I understand the article correctly, any key for the dev/staging environment can access their prod systems. That's just insane.<p>I'd never feel comfortable without a second backup at a different provider anyway. A backup that isn't deletable with any role or key that is actually used on any server or in automation anywhere.</p>
]]></description><pubDate>Sun, 26 Apr 2026 19:48:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47913470</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=47913470</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47913470</guid></item><item><title><![CDATA[New comment by fabian2k in "PCR is a surprisingly near-optimal technology"]]></title><description><![CDATA[
<p>This is about modern PCR, which is already heavily optimized compared to early PCR. And if you're in a "normal" lab, everything around the PCR, all the handling and preparation, takes such a large chunk of time that improving the PCR time alone doesn't really matter that much.<p>In a very automated, high-throughput setting I'd imagine that parallelizing the PCR would be the best way to increase throughput. There probably isn't that much potential in speeding up the reaction time compared to just multiplying the number of reactions. Which is part of the point of the article.<p>Regarding the cheaper lab instruments, I'm not quite convinced by these ultra-cheap examples. Many lab instruments need quite a bit of precision and reliability, and I would be suspicious that the cheap examples here could compete in that regard. Even PCR needs pretty exact temperature control across many individual reaction vessels. Of course the margins on lab instruments are likely enormous, and there should be plenty of potential for cheaper ones. But I don't think the ultra-cheap DIY stuff will convince people, and it'll likely also fail the purchasing process at larger institutions anyway.</p>
]]></description><pubDate>Sat, 25 Apr 2026 09:40:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47900072</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=47900072</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47900072</guid></item><item><title><![CDATA[New comment by fabian2k in "My adventure in designing API keys"]]></title><description><![CDATA[
<p>You don't need any encryption or signing for API keys. Using JWTs is probably more dangerous here, and more annoying for people using the API, since you now have to handle refreshing tokens.<p>Plain old API keys are straightforward to implement. Create a long random string and save it in the DB. When someone calls the API, check whether the key is in your DB and use it to authenticate them. That's it.</p>
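A minimal sketch of that scheme (a dict stands in for the DB table; storing a hash of the key instead of the raw key is one extra precaution beyond the description above, so a leaked DB doesn't leak usable keys):

```python
import hashlib
import secrets

api_keys = {}  # sha256(key) -> user id; stands in for a DB table

def create_api_key(user_id):
    """Generate a long random key; store only its hash."""
    key = secrets.token_urlsafe(32)  # ~256 bits of randomness
    api_keys[hashlib.sha256(key.encode()).hexdigest()] = user_id
    return key  # shown to the user once, never stored in plain

def authenticate(key):
    """Return the user id for a presented key, or None."""
    return api_keys.get(hashlib.sha256(key.encode()).hexdigest())

key = create_api_key("user-42")
print(authenticate(key))      # user-42
print(authenticate("bogus"))  # None
```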
]]></description><pubDate>Wed, 15 Apr 2026 07:28:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47775780</link><dc:creator>fabian2k</dc:creator><comments>https://news.ycombinator.com/item?id=47775780</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47775780</guid></item></channel></rss>