<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: scottlamb</title><link>https://news.ycombinator.com/user?id=scottlamb</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 11:37:48 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=scottlamb" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by scottlamb in "Case study: recovery of a corrupted 12 TB multi-device pool"]]></title><description><![CDATA[
<p>Wow, yuck. (The "Why do we even have that lever?!" line comes to mind.)<p>...even so, without a disk failure, that probably wasn't the cause of this event.</p>
]]></description><pubDate>Mon, 06 Apr 2026 17:02:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47663612</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47663612</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47663612</guid></item><item><title><![CDATA[New comment by scottlamb in "Case study: recovery of a corrupted 12 TB multi-device pool"]]></title><description><![CDATA[
<p>Might be true, but I don't see any aspect of that which is relevant to this event:<p>* Data single obviously means losing a single drive will cause data loss, but no drive was actually lost, right?<p>* Metadata DUP (not sure if it's across 2 disks or all 3) should be robust, I'd expect?<p>* I certainly eye DM-SMR disks with suspicion in general, but it doesn't sound like they were responsible for the damage: "Both DUP copies of several metadata blocks were written with inconsistent parent and child generations."</p>
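<p>(For reference, `btrfs filesystem df` shows which profile each block-group type uses; illustrative output below with made-up sizes. My understanding is DUP keeps both copies on a single device, vs. RAID1 spreading them across devices:)<pre><code>$ btrfs filesystem df /mnt/pool
Data, single: total=10.00TiB, used=9.20TiB
System, DUP: total=40.00MiB, used=1.20MiB
Metadata, DUP: total=12.00GiB, used=8.40GiB
</code></pre>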
]]></description><pubDate>Mon, 06 Apr 2026 15:04:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47661870</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47661870</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47661870</guid></item><item><title><![CDATA[New comment by scottlamb in "AWS engineer reports PostgreSQL perf halved by Linux 7.0, fix may not be easy"]]></title><description><![CDATA[
<p>> The number of people at Amazon is pretty much irrelevant; the org is going to ensure that someone is keeping an eye on kernel performance, but also that the work isn’t duplicative.<p>I'd guess they have dozens of people across, say, a Linux kernel team, a Graviton hardware integration team, an EC2 team, and an Amazon RDS for PostgreSQL team who might at one point or another run a benchmark like this. They probably coordinate to an extent, but not so much that only one person would ever run this test. So yes, it is duplicative. And they're likely intending to test the configurations they use in production, yes, but people just make mistakes.</p>
]]></description><pubDate>Mon, 06 Apr 2026 04:25:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656983</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47656983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656983</guid></item><item><title><![CDATA[New comment by scottlamb in "AWS engineer reports PostgreSQL perf halved by Linux 7.0, fix may not be easy"]]></title><description><![CDATA[
<p>By "those applications" I'm talking about other applications affected by this regression. There are several apps in addition to Redis  that recommend limiting the transparent huge page configuration. (Some of them recommend using explicit huge pages instead.) But it's quite possible none of them are affected by this regression, as it may be particular to apps using spinlocks. (Certainly the new rseq API mentioned in the thread is targeted at spinlock users.) It seems equally possible to me that some spinlock-using app has a regression irrespective of huge pages.</p>
]]></description><pubDate>Mon, 06 Apr 2026 04:11:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656909</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47656909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656909</guid></item><item><title><![CDATA[New comment by scottlamb in "AWS engineer reports PostgreSQL perf halved by Linux 7.0, fix may not be easy"]]></title><description><![CDATA[
<p>I doubt they explicitly said "I'll run without huge pages, which is an important AWS configuration". They probably just forgot a step. And "someone at Amazon" describes a lot of people; multiply your mental probability tables accordingly.</p>
]]></description><pubDate>Sun, 05 Apr 2026 14:24:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649768</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47649768</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649768</guid></item><item><title><![CDATA[New comment by scottlamb in "AWS engineer reports PostgreSQL perf halved by Linux 7.0, fix may not be easy"]]></title><description><![CDATA[
<p>> While using huge pages whenever possible is the right solution and this should be enough for PostgreSQL, perhaps there are applications that cannot use huge pages and which are affected by the regression.<p>It will be more interesting to talk about those applications if and when they are found. And I wouldn't assume the solutions are limited to reverting this change, starting to use the new spinlock time-slice extension mechanism, and enabling huge pages.<p>It sounds like using 4K pages with 100G of buffer cache was just the thing that made this spinlock's critical section become longer than PostgreSQL's developers had seen before. So when trying to apply the solution to some hypothetical other software that is suddenly benchmarking poorly, I'd generalize from "enable huge pages" to "look for other differences between your benchmark configuration and what the software's authors tested on".</p>
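<p>(A sketch of the "enable huge pages" fix for PostgreSQL, assuming explicit 2MiB pages; the page count is a made-up illustration sized with some headroom over a 100GB shared_buffers:)<pre><code># reserve ~102GiB of 2MiB huge pages
$ sudo sysctl -w vm.nr_hugepages=52000

# postgresql.conf
shared_buffers = 100GB
huge_pages = on   # unlike the default "try", fails at startup rather than silently falling back
</code></pre>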
]]></description><pubDate>Sun, 05 Apr 2026 14:16:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649703</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47649703</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649703</guid></item><item><title><![CDATA[New comment by scottlamb in "The 'paperwork flood': How I drowned a bureaucrat before dinner"]]></title><description><![CDATA[
<p>I hear you...to an extent. I <i>just</i> got off the phone with Comcast Business Class, asking for a refund after I had 26 hours of downtime in the past week. Not a company with a great reputation for customer service, and the agent I spoke with was probably not exactly earning a six-figure salary. He was empathetic. The outcome was unsatisfactory [1], but he was polite, he said he understood how important availability is to my business, he put me on hold for a while, said he tried for more with his manager, and I believed him. That's all it takes; it doesn't require a master class in empathizing with your bitter enemy and de-escalating conflict. I'm mad at Comcast, but I'm not mad at him.<p>[1] A discount that was less than the delta between consumer-class and business-class prices, when the latter doesn't seem to actually be providing better availability lately.</p>
]]></description><pubDate>Fri, 27 Mar 2026 18:18:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47546323</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47546323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47546323</guid></item><item><title><![CDATA[New comment by scottlamb in "The 'paperwork flood': How I drowned a bureaucrat before dinner"]]></title><description><![CDATA[
<p>> Unfortunately, it might also just cause anyone who wants to do good to leave, leaving people who just need a job and don't care about doing good.<p>I don't think the author would have acted this way toward someone who said "sorry, I know it's a burden, I know it's stressful to be at risk of losing these benefits, and I've told that to everyone I can repeatedly." So how much danger is there really that the inconvenience of reloading the fax machine is pushing out someone who is trying to do good?<p>(For the sake of argument, I'm going with all the details of the story, including that this caused Karen any distress at all. I think it's more likely a real office like this has a setup for which getting a 500-page fax is no big deal at all. And if it really is a DoS on their processing, the consequence I'd be more worried about is causing acceptance to slow down enough that other disability claims are not processed before their deadline.)</p>
]]></description><pubDate>Fri, 27 Mar 2026 16:51:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47545157</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47545157</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47545157</guid></item><item><title><![CDATA[New comment by scottlamb in "The 'paperwork flood': How I drowned a bureaucrat before dinner"]]></title><description><![CDATA[
<p>> I know it's fiction<p>Or semi-fiction? The author is actually blind and tagged it nonfiction, but I suspect some embellishment.<p>> but in reality, Karen is likely just as annoyed by this as the author.<p>When I'm frustrated talking with an agent of a big organization, I try to remember they probably didn't set the policy. But I also expect them to express some empathy for how I'm negatively affected by that policy. The author/protagonist, accurately or not, felt the opposite from "Karen from compliance". In their shoes, I wouldn't feel much empathy for Karen in return.<p>> The spam should go to the person in charge<p>I also expect the agent to have a closer relationship with "the person in charge" than I do (none whatsoever). If I mention the policy is absurd, they could at least make some effort to pass that along to their manager.<p>Also, sending the information to the agent is necessary compliance, even if the volume is malicious.<p>> not the person who is forced to deal with this every day<p>Maybe feeling a bit of the pain themselves will make them more likely to speak up. If this becomes a miserable job that no one will stay in, that might provoke a change.</p>
]]></description><pubDate>Fri, 27 Mar 2026 15:54:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47544377</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47544377</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47544377</guid></item><item><title><![CDATA[New comment by scottlamb in "Apple randomly closes bug reports unless you "verify" the bug remains unfixed"]]></title><description><![CDATA[
<p>> Google had their own versions of things. IIRC bugs had both a priority and severity for some reason (they were the same 99% of the time) between 0 and 4. So a standard bug was p2/s2. p0/s0 was the most severe and meant a serious user-facing outage. People would often change a p2/s2 to p3/s3, which basically meant "I'm never going to do this and I will never look at it again".<p>Yeah, I've done that. I find it much more honest than automatically closing it as stale or asking the reporter to repeatedly verify it even if I'm not going to work on it. The record still exists that the bug is there. Maybe some day the world will change and I'll have time to work on it.<p>I'm sure the leadership who set SLAs on medium-priority bugs anticipated a lot of bugs would become low-priority. They forced triage; that's the point.<p>> People even wrote automated rules to see if their bugs filed got downgraded to alert them.<p>This part though is a sign people are using the "don't notify" box inappropriately, denying reporters/watchers the opportunity to speak up if they disagree about the downgrade.</p>
]]></description><pubDate>Wed, 25 Mar 2026 22:34:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47524182</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47524182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47524182</guid></item><item><title><![CDATA[New comment by scottlamb in "Debunking Zswap and Zram Myths"]]></title><description><![CDATA[
<p>> Many consumer SSDs, especially DRAMless ones (e.g., Apacer AS350 1TB, but also seen on Crucial SSDs), under synchronous writes, will regularly produce latency spikes of 10 seconds or more, due to the way they need to manage their cells.<p>Is there an experiment you'd recommend to reliably show this behavior on such an SSD (or ideally to become confident a given SSD is unaffected)? Is it as simple as writing flat-out for, say, 10 minutes, with O_DIRECT so you can easily measure the latency of individual writes? Do you need a certain level of concurrency, or a mixed read/write load? Repeated writes to a small region vs. a large region (or maybe, given remapping, that doesn't matter)? Is this like a one-liner with `fio`? (Rough sketch of what I mean below.) Does it depend on longer-term state, such as how much of the SSD's capacity has been written and not TRIMed?<p>Also, what could one do in advance to know if they're about to purchase such an SSD? You mentioned one affected model. You mentioned DRAMless too, but do consumer SSD spec sheets generally say how much DRAM (if any) the devices have? Are there known unaffected consumer models? It'd be a shame to jump to enterprise prices to avoid this if that's not necessary.<p>I have a few consumer SSDs around that I've never really pushed; it'd be interesting to see if they have this behavior.</p>
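<p>(Roughly what I have in mind, as a sketch only: untested, and /dev/sdX is a placeholder for a scratch disk, since raw-device writes destroy data. Single-threaded 4k synchronous random writes, with a per-write latency log so multi-second spikes stand out:)<pre><code># DANGER: overwrites the target device; point it at a disposable disk
$ sudo fio --name=syncwrite --filename=/dev/sdX \
    --rw=randwrite --bs=4k --iodepth=1 --direct=1 --sync=1 \
    --time_based --runtime=600 --write_lat_log=syncwrite
</code></pre>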
]]></description><pubDate>Tue, 24 Mar 2026 15:25:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47504062</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47504062</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47504062</guid></item><item><title><![CDATA[New comment by scottlamb in "No, Windows Start does not use React"]]></title><description><![CDATA[
<p>I see your edits:<p>> This was an oversimplification bordering on being misleading. It’s a lighter JS runtime that’s calling native code for rendering controls. The argument still has merit. Just because something in JS doesn’t make it slow or bloated. Interpreted languages will almost always be slower than their native compiled counterparts, but it’s negligble [sic] for these purposes.<p>Isn't it a full JS runtime? I think by "a lighter JS runtime that's calling native code" you mean it doesn't deal with HTML/CSS rendering, but that's not what JS runtime means. These are separate parts of the browser architecture.<p>I don't agree it's negligible for this purpose. Core OS functionality should run well on old/cheap machines, and throwing in unnecessary interpreters/JITs for trivial stuff is inconsistent with their recently announced commitment to "faster and more responsive Windows experiences" and "improved memory efficiency".</p>
]]></description><pubDate>Tue, 24 Mar 2026 03:13:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47498240</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47498240</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47498240</guid></item><item><title><![CDATA[New comment by scottlamb in "No, Windows Start does not use React"]]></title><description><![CDATA[
<p>I've got nothing then.</p>
]]></description><pubDate>Tue, 24 Mar 2026 01:56:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47497776</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47497776</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47497776</guid></item><item><title><![CDATA[New comment by scottlamb in "No, Windows Start does not use React"]]></title><description><![CDATA[
<p>Here's one: Microsoft management heavily incentivizes their developers to use LLMs for virtually everything (to the "do it or you're fired" level) and the LLM (due to its training data or whatever) is far more able to pump out code with React Native than with their own frameworks. This makes it the right choice for them. Not for the user, but you can't have everything.<p>I don't have any inside information; I'm running with the hypothetical.</p>
]]></description><pubDate>Tue, 24 Mar 2026 01:31:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47497578</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47497578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47497578</guid></item><item><title><![CDATA[New comment by scottlamb in "No, Windows Start does not use React"]]></title><description><![CDATA[
<p>> Shouldn't devs be allowed to select what they feel is the "best" choice for a given component?<p>To some extent, yes. But if they choose React Native, something's probably wrong, because (despite what the article says) that requires throwing in a Javascript engine, significantly bloating a core Windows component. If they only use it for a small section ("that can be disabled", or in other words is on by default), it seems like an even poorer trade-off, as most users suffer the pain but the devs are taking minimal advantage of whatever benefits it provides.<p>If the developers are correct that this is the best choice, that reflects poorly on the quality of Microsoft's core native development platforms, as madeofpalk said.<p>If the developers of a core Windows component are incorrect about the best choice, that reflects poorly on this team, and I might be inclined to say no, someone more senior should be making the choice.</p>
]]></description><pubDate>Tue, 24 Mar 2026 00:36:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47497211</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47497211</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47497211</guid></item><item><title><![CDATA[New comment by scottlamb in "On a Boat"]]></title><description><![CDATA[
<p>Thanks!<p>Any idea what Firefox is waiting for? To me those lines I quoted seem entirely arbitrary, and a skim through bugzilla didn't help.</p>
]]></description><pubDate>Wed, 18 Mar 2026 17:50:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47428930</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47428930</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47428930</guid></item><item><title><![CDATA[New comment by scottlamb in "On a Boat"]]></title><description><![CDATA[
<p>I wouldn't say I'm done evaluating it, and as a spare-time project, my NVR's needs are pretty simple at present.<p>But WebCodecs is just really straightforward. It's hard to find anything to complain about.<p>If you have an IP camera sitting around, you can run a quick WebSocket+WebCodecs example I threw together: <<a href="https://github.com/scottlamb/retina" rel="nofollow">https://github.com/scottlamb/retina</a>> (try `cargo run --package client webcodecs ...`). For one of my cameras, it gives me <160ms glass-to-glass latency, [1] with most of that being the IP camera's encoder. Because WebCodecs doesn't supply a particular jitter buffer implementation, you can just not have one at all if you want to prioritize liveness, and that's what my example does. A welcome change from using MSE.<p>Skipping the jitter buffer also made me realize that, with one of my cameras, I had a weird pattern where up to six frames would pile up in the decode queue until a key frame and then start over, which without a jitter buffer is hard to miss at 10 fps. It turns out that even though this camera's H.264 encoder never reorders frames, they hadn't bothered to say that in their VUI bitstream restrictions, so the decoder had to introduce additional latency just in case. I added some logic to "fix" the VUI and now its live stream is more responsive too. So the problem I had wasn't MSE's fault exactly, but MSE made it hard to understand because all the buffering was a black box.<p>[1] <a href="https://pasteboard.co/Jfda3nqOQtyV.png" rel="nofollow">https://pasteboard.co/Jfda3nqOQtyV.png</a></p>
]]></description><pubDate>Wed, 18 Mar 2026 16:14:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47427605</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47427605</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47427605</guid></item><item><title><![CDATA[New comment by scottlamb in "On a Boat"]]></title><description><![CDATA[
<p>> Never had to work with moq<p>Probably never had to work with (live) video at all? I think using moq is the <i>dream</i> for anyone who does. The alternatives—DASH, HLS, MSE, WebRTC, SRT, etc.—are all ridiculously fussy and limiting in one way or another, where QUIC/WebTransport and WebCodecs just give you the primitives you want to use as you choose, and moq appears focused on using them in a reasonable, CDN-friendly way.</p>
]]></description><pubDate>Wed, 18 Mar 2026 15:33:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47427064</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47427064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47427064</guid></item><item><title><![CDATA[New comment by scottlamb in "On a Boat"]]></title><description><![CDATA[
<p>Very cool result, but I'm struggling to understand the baseline: what does "TCP + application FEC" mean? If everything is one TCP stream, and thus the kernel delivers bytes to the application strictly in order, what does application FEC accomplish? Or is it distributed across several TCP streams?</p>
]]></description><pubDate>Wed, 18 Mar 2026 15:20:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47426888</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47426888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47426888</guid></item><item><title><![CDATA[New comment by scottlamb in "On a Boat"]]></title><description><![CDATA[
<p>I've also looked at switching my open source IP camera NVR to WebCodecs and WebTransport (maybe MoQ). Two things giving me pause:<p>* Firefox support for WebCodecs is poor—none at all on Android [1], and H.265 is behind a feature flag. [2]<p>* Mobile Safari doesn't support WebTransport. Or didn't...I just looked it up again and see it does in 26.4 TP. Progress! [3]<p>[1] <a href="https://searchfox.org/firefox-main/rev/da2bfb8bf7dc476186dfe3f8ecc23a3c7ff4e326/dom/media/webcodecs/VideoDecoder.cpp#195-198" rel="nofollow">https://searchfox.org/firefox-main/rev/da2bfb8bf7dc476186dfe...</a><p>[2] <a href="https://searchfox.org/firefox-main/rev/da2bfb8bf7dc476186dfe3f8ecc23a3c7ff4e326/dom/media/webcodecs/WebCodecsUtils.cpp#614-618" rel="nofollow">https://searchfox.org/firefox-main/rev/da2bfb8bf7dc476186dfe...</a><p>[3] <a href="https://caniuse.com/webtransport" rel="nofollow">https://caniuse.com/webtransport</a></p>
]]></description><pubDate>Wed, 18 Mar 2026 13:38:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47425678</link><dc:creator>scottlamb</dc:creator><comments>https://news.ycombinator.com/item?id=47425678</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47425678</guid></item></channel></rss>