<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: vitus</title><link>https://news.ycombinator.com/user?id=vitus</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 11:53:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=vitus" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by vitus in "Launching Cloudflare's Gen 13 servers"]]></title><description><![CDATA[
<p>Eh. It depends on what your bottleneck is. If the bottleneck is now, say, CPU cache contention because you've doubled your thread count, it's entirely possible that FL1 running on the new server generation is operating in a different regime than on the previous generation. You can see some hints of that happening, since doubling the thread count didn't result in a doubling of throughput.<p>In fact, based on the throughput doubling with FL2, I suspect we're back in the same regime as the baseline.<p>It would be useful to see the latency of FL2 on Gen12 compared to the baseline (FL1 on Gen12), just to confirm.</p>
]]></description><pubDate>Tue, 24 Mar 2026 11:58:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47501367</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=47501367</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47501367</guid></item><item><title><![CDATA[New comment by vitus in "25 Years of Eggs"]]></title><description><![CDATA[
<p>It depends on what dates you're looking at, but energy (gas prices and more) and food (including eggs) are generally recognized as way more volatile than the rest of the CPI.<p>Eggs were actually quite stable for the 20 years prior to 2001, so maybe don't put your life savings into egg futures...<p>Egg prices: <a href="https://fred.stlouisfed.org/series/APU0000708111" rel="nofollow">https://fred.stlouisfed.org/series/APU0000708111</a><p>CPI: <a href="https://fred.stlouisfed.org/series/CPIAUCSL" rel="nofollow">https://fred.stlouisfed.org/series/CPIAUCSL</a><p>Core CPI (without food + energy prices): <a href="https://fred.stlouisfed.org/series/CPILFESL" rel="nofollow">https://fred.stlouisfed.org/series/CPILFESL</a></p>
]]></description><pubDate>Sun, 22 Mar 2026 11:43:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47476506</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=47476506</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47476506</guid></item><item><title><![CDATA[New comment by vitus in "A Japanese glossary of chopsticks faux pas (2022)"]]></title><description><![CDATA[
<p>> Kaeshibashi<p>The preference is to use a separate pair of communal chopsticks that is not used directly for eating.<p>> Kosuribashi<p>I have heard that this one is because it's considered to be an insult implying that the chopsticks are low-quality. (That said, if your chopsticks are indeed low-quality, then avoiding splinters is probably preferable to then visibly plucking splinters out of your fingers.)</p>
]]></description><pubDate>Fri, 20 Mar 2026 21:37:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47460960</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=47460960</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47460960</guid></item><item><title><![CDATA[New comment by vitus in "Glassworm is back: A new wave of invisible Unicode attacks hits repositories"]]></title><description><![CDATA[
<p>Agreed on all those fronts. I'm just dismayed by all the comments suggesting that maintainers just merged PRs with this trojan, when the attack vector implies a more mundane form of credential compromise (and not, as the article implies, AI being used to sneak malicious changes past code review at scale).</p>
]]></description><pubDate>Sun, 15 Mar 2026 17:48:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47389842</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=47389842</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47389842</guid></item><item><title><![CDATA[New comment by vitus in "Glassworm is back: A new wave of invisible Unicode attacks hits repositories"]]></title><description><![CDATA[
<p>Looks like the repo owner force-pushed a bad commit to replace an existing one. But then, why not forge it to maintain the existing timestamp + author, e.g. via `git commit --amend -C df8c18`?<p>Innocuous PR (but do note the line about "pedronauck pushed a commit that referenced this pull request last week"): <a href="https://github.com/pedronauck/reworm/pull/28" rel="nofollow">https://github.com/pedronauck/reworm/pull/28</a><p>Original commit: <a href="https://github.com/pedronauck/reworm/commit/df8c18" rel="nofollow">https://github.com/pedronauck/reworm/commit/df8c18</a><p>Amended commit: <a href="https://github.com/pedronauck/reworm/commit/d50cd8" rel="nofollow">https://github.com/pedronauck/reworm/commit/d50cd8</a><p>Either way, pretty clear sign that the owner's creds (and possibly an entire machine) are compromised.</p>
]]></description><pubDate>Sun, 15 Mar 2026 17:00:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47389296</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=47389296</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47389296</guid></item><item><title><![CDATA[New comment by vitus in "Google just gave Sundar Pichai a $692M pay package"]]></title><description><![CDATA[
<p>> if indeed he went all-in on AI in 2015, that seems to me like a damn near prophetic vision.<p>Also note that 7 years later, when ChatGPT came out, built on top of Google Brain research (transformers), Google was caught flat-footed.<p>Even supposing that Pichai really had the right vision a decade ago, he completely failed in leading its execution until a serious threat to the company's core business model materialized.</p>
]]></description><pubDate>Sun, 08 Mar 2026 19:51:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47300638</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=47300638</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47300638</guid></item><item><title><![CDATA[New comment by vitus in "Latency numbers every programmer should know"]]></title><description><![CDATA[
<p>> That’s PCIe 3.0 x4 or PCIe 4.0 x2, which a decent commodity M.2 NVMe SSD can use and can possibly saturate, at least for reads.<p>Given that there's a separate item for sequential disk reads vs SSD reads, I think it's pretty clear that particular item meant hard drives specifically. Agreed that modern SSDs should be able to pull that off.<p>> That being said, all the connections over 100Gbps are currently multi-lane AFAIK, and the heroic efforts and multiplexing needed to exceed 100Gbps at any distance are a bit in excess of the very simple technology that got us to 100Mbps “fast Ethernet”.<p>Yeah. Terabit networking is not here yet, and it's certainly not "commodity network"-grade. We can LACP a bunch of 100G optics together, but we're probably 5-10 years out for 800G Ethernet to become widely adopted and for 1600G to even be developed.</p>
]]></description><pubDate>Sat, 28 Feb 2026 18:36:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47198707</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=47198707</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47198707</guid></item><item><title><![CDATA[New comment by vitus in "Latency numbers every programmer should know"]]></title><description><![CDATA[
<p>Well, it shouldn't be slower than "Read 1,000,000 bytes sequentially from memory" (741ns) which in turn shouldn't be slower than "Read 1,000,000 bytes sequentially from disk" (359 us).<p>That said, all those numbers feel a bit off by 1.5-2 orders of magnitude -- that disk read speed translates to about 3 GB/s which is well outside the range of what HDDs can achieve.<p><a href="https://brenocon.com/dean_perf.html" rel="nofollow">https://brenocon.com/dean_perf.html</a> indicates the original set of numbers were more like 10us, 250us, and 30ms.<p>And it links to <a href="https://github.com/colin-scott/interactive_latencies" rel="nofollow">https://github.com/colin-scott/interactive_latencies</a> which seems like it extrapolates progress from 14 years ago:<p><pre><code>        // NIC bandwidth doubles every 2 years
        // [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
        // TODO: should really be a step function
        // 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
</code></pre>
which means that in 2026 we'll have seen 11 doublings since gigabit speeds in 2003, so we'll all have > terabit speeds available to us.</p>
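<p>A quick sanity check of that extrapolation (a sketch of the repo's doubling-every-2-years model only, not a claim about real NICs; the function name is mine):</p>

```python
# interactive_latencies model: 1 Gb/s (= 125 MB/s) in 2003,
# NIC bandwidth doubling every 2 years.
BASE_YEAR = 2003
BASE_BITS_PER_S = 1e9

def extrapolated_nic_bandwidth(year):
    doublings = (year - BASE_YEAR) // 2
    return BASE_BITS_PER_S * 2 ** doublings

# 2026: 11 doublings since 2003 -> 2048 Gb/s, i.e. "commodity" NICs above a terabit.
print(f"{extrapolated_nic_bandwidth(2026) / 1e12:.3f} Tb/s")  # -> 2.048 Tb/s
```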
]]></description><pubDate>Sat, 28 Feb 2026 15:33:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47196505</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=47196505</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47196505</guid></item><item><title><![CDATA[New comment by vitus in "We have ipinfo at home or how to geolocate IPs in your CLI using latency"]]></title><description><![CDATA[
<p>You probably meant to say oversubscribing, not overprovisioning.<p>Oversubscription is expected to a certain degree (this is fundamentally the same concept as "statistical multiplexing"). But even oversubscription in itself is not guaranteed to result in bufferbloat -- appropriate traffic shaping (especially to "encourage" congestion control algorithms to back off sooner) can mitigate a lot of those issues. And, it can be hard to differentiate between bufferbloat at the last mile vs within the ISP's backbone.</p>
]]></description><pubDate>Sat, 31 Jan 2026 15:41:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46837588</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=46837588</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46837588</guid></item><item><title><![CDATA[New comment by vitus in "Rex is a safe kernel extension framework that allows Rust in the place of eBPF"]]></title><description><![CDATA[
<p>We do; most people don't just write eBPF by hand.<p><a href="https://github.com/llvm/llvm-project/tree/main/llvm/lib/Target/BPF" rel="nofollow">https://github.com/llvm/llvm-project/tree/main/llvm/lib/Targ...</a></p>
]]></description><pubDate>Sun, 28 Dec 2025 12:02:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46410456</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=46410456</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46410456</guid></item><item><title><![CDATA[New comment by vitus in "map::operator[] should be nodiscard"]]></title><description><![CDATA[
<p>std::ignore's behavior outside of use with std::tie is not specified in any finalized standard.<p><a href="https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2968r2.html" rel="nofollow">https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p29...</a> aims to address that, but that won't be included until C++26 (which also includes _ as a sibling commenter mentions).</p>
]]></description><pubDate>Wed, 24 Dec 2025 17:43:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46377546</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=46377546</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46377546</guid></item><item><title><![CDATA[New comment by vitus in "“Are you the one?” is free money"]]></title><description><![CDATA[
<p>> This is incorrect, the correct strategy is mostly to check the most probable match (the exception being if the people in that match has less possible pairings remaining than the next most probable match).<p>Do you have any hard evidence, or just basing this on vibes? Because your proposed strategy is emphatically not how you maximize information gain.<p>Scaling up the problem to larger sizes, is it worth explicitly spending an action to confirm a match that has 99% probability? Is it worth it to (most likely) eliminate 1% of the space of outcomes (by probability)? Or would you rather halve your space?<p>This isn't purely hypothetical, either. The match-ups skew your probabilities such that your individual outcomes cease to be equally probable, so just looking at raw cardinalities is insufficient.<p>If you have a single match out of 10 pairings, and you've ruled out 8 of them directly, then if you target one of the two remaining pairs, you nominally have a 50/50 chance of getting a match (or no match!).<p>Meanwhile, you could have another match-up where you got 6 out of 10 pairings, and you've ruled out 2 of them (thus you have 8 remaining pairs to check, 6 of which are definitely matches). Do you spend your truth booth on the 50/50 shot (which actually will always reveal a match), or the 75/25 shot?<p>(I can construct examples where you have a 50/50 shot but without the guarantee on whether you reveal a match. Your information gain will still be the same.)</p>
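<p>For concreteness, the information gain of each option can be computed with the binary entropy function (a sketch; the 50/50, 75/25, and 99% figures are the ones discussed above):</p>

```python
import math

def binary_entropy(p):
    """Expected information, in bits, from a yes/no question
    whose answer is 'yes' with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(binary_entropy(0.50))  # 1.0 bit: the ideal truth-booth question
print(binary_entropy(0.75))  # ~0.81 bits
print(binary_entropy(0.99))  # ~0.08 bits: confirming a near-certain match
```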
]]></description><pubDate>Wed, 17 Dec 2025 00:29:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46296706</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=46296706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46296706</guid></item><item><title><![CDATA[New comment by vitus in "“Are you the one?” is free money"]]></title><description><![CDATA[
<p>So, for 10 pairs, 45 guesses (9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1) in the worst case, and roughly half that on average?<p>It's interesting how close 22.5 is to the 21.8 bits of entropy for 10!, and that has me wondering how often you would win if you followed this strategy with 18 truth booths followed by one match up (to maintain the same total number of queries).<p>Simulation suggests about 24% chance of winning with that strategy, with 100k samples. (I simplified each run to "shuffle [0..n), find index of 0".)</p>
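<p>A minimal reconstruction of that simulation (my own sketch of the "shuffle [0..n), find index of 0" simplification, counting a run as a win if all pairs are resolved within the 18 truth booths):</p>

```python
import random

def booths_needed(n=10):
    """Truth-booth checks needed to pin down all n pairs when testing
    candidates one at a time: for each person, the true match's position
    among the remaining candidates is uniform, so shuffle and find it."""
    total = 0
    for remaining in range(n, 1, -1):
        candidates = list(range(remaining))
        random.shuffle(candidates)
        total += candidates.index(0)  # candidates checked before the match
    return total

runs = 100_000
win_rate = sum(booths_needed() <= 18 for _ in range(runs)) / runs
print(win_rate)  # ~0.24, matching the ~24% figure above
```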
]]></description><pubDate>Tue, 16 Dec 2025 12:44:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46287824</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=46287824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46287824</guid></item><item><title><![CDATA[New comment by vitus in "“Are you the one?” is free money"]]></title><description><![CDATA[
<p>It should be easier to understand the optimal truth booth strategy. Since this is a yes/no type of question, the maximum entropy is 1 bit, as noted by yourself and others. As such, you want to pick a pair where the odds are as close to 50/50 as possible.<p>> Employing that approach alone performed worse than the contestants did in real life, so didn't think it was worth mentioning!<p>Yeah, this alone should not be sufficient. At the extreme of getting a score of 0, you also need the constraint that you're not repeating known-bad pairs. The same applies for pairs ruled out (or in!) from truth booths.<p>Further, if your score goes down, you need to use that as a signal that one (or more) of the pairs you swapped out was actually correct, and you need to cycle those back in.<p>I don't know what a human approximation of the entropy-minimization approach looks like in full. Good luck!</p>
]]></description><pubDate>Tue, 16 Dec 2025 11:44:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46287409</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=46287409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46287409</guid></item><item><title><![CDATA[New comment by vitus in "Ecosia: The greenest AI is here"]]></title><description><![CDATA[
<p>It's way more lopsided than your example would suggest.<p>My understanding is that Netflix can stream 100 Gbps from a 100W server footprint (slide 17 of [0]). Even if you assume every stream is 4k and uses 25 Mbps, that's still thousands of streams. I would guess that the bulk of the power consumption from streaming video is probably from the end-user devices -- a backbone router might consume a couple of kilowatts of power, but it's also moving terabits of traffic.<p>[0] <a href="https://people.freebsd.org/~gallatin/talks/OpenFest2023.pdf" rel="nofollow">https://people.freebsd.org/~gallatin/talks/OpenFest2023.pdf</a></p>
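<p>The arithmetic behind "thousands of streams" (the 100 Gbps / 100 W figures are from the linked slides; the everything-is-25-Mbps-4K assumption is the pessimistic one above):</p>

```python
server_bps = 100e9    # ~100 Gbps served per server (slide 17 of the linked deck)
server_watts = 100    # ~100 W server footprint
stream_bps = 25e6     # assume every stream is 25 Mbps 4K (worst case)

streams = server_bps / stream_bps           # concurrent streams per server
watts_per_stream = server_watts / streams   # server-side power per stream
print(f"{streams:.0f} streams, {watts_per_stream * 1000:.0f} mW per stream")
# -> 4000 streams, 25 mW per stream
```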
]]></description><pubDate>Wed, 03 Dec 2025 05:05:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46130505</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=46130505</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46130505</guid></item><item><title><![CDATA[New comment by vitus in "NSA and IETF, part 3: Dodging the issues at hand"]]></title><description><![CDATA[
<p>To add to this: rough consensus is defined in BCP 25 / RFC 2418 (<a href="https://datatracker.ietf.org/doc/html/rfc2418#section-3.3" rel="nofollow">https://datatracker.ietf.org/doc/html/rfc2418#section-3.3</a>):<p><pre><code>   IETF consensus does not require that all participants agree although
   this is, of course, preferred.  In general, the dominant view of the
   working group shall prevail.  (However, it must be noted that
   "dominance" is not to be determined on the basis of volume or
   persistence, but rather a more general sense of agreement.) Consensus
   can be determined by a show of hands, humming, or any other means on
   which the WG agrees (by rough consensus, of course).  Note that 51%
   of the working group does not qualify as "rough consensus" and 99% is
   better than rough.  It is up to the Chair to determine if rough
   consensus has been reached.
</code></pre>
The goal has never been 100%, but it is not enough to merely have a majority opinion.</p>
]]></description><pubDate>Mon, 24 Nov 2025 20:19:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46038740</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=46038740</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46038740</guid></item><item><title><![CDATA[New comment by vitus in "IP blocking the UK is not enough to comply with the Online Safety Act"]]></title><description><![CDATA[
<p>I get that it's satisfying to tell them to go away because they're being unreasonable. But what's the legal strategy here? Piss off the regulators such that they really won't drop this case, and give them fodder to be able to paint the lawyer and his client as uncooperative?<p>Is the strategy really just "get new federal laws passed so the UK can't shove these regulations down our throats"? Is that going to happen on a timeline that makes sense for this specific case?</p>
]]></description><pubDate>Sun, 09 Nov 2025 00:04:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=45861468</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=45861468</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45861468</guid></item><item><title><![CDATA[New comment by vitus in "IP blocking the UK is not enough to comply with the Online Safety Act"]]></title><description><![CDATA[
<p>The combative stance that he's taking really doesn't do him any favors in resolving the issue.<p>Lawyer: "I've confirmed that at least one UK IP address is blocked."<p>Regulators: "We've confirmed that at least one UK IP address is not blocked."<p>In what world is the correct response "Dear regulators, you're incompetent. Pound sand." instead of "Can you share the IP address you used so my client can address this in their geoblock?"</p>
]]></description><pubDate>Sat, 08 Nov 2025 23:30:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=45861135</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=45861135</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45861135</guid></item><item><title><![CDATA[New comment by vitus in "Things I've Heard Boomers Say That I Agree with 100%"]]></title><description><![CDATA[
<p>China is much more smartphone-centric than the US. QR codes are universal, and WeChat and AliPay are the most common forms of payment (online or in person).</p>
]]></description><pubDate>Sat, 08 Nov 2025 15:20:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45857234</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=45857234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45857234</guid></item><item><title><![CDATA[New comment by vitus in "Why hasn't there been a new major sports league?"]]></title><description><![CDATA[
<p>Not by $$$, which is the main focus of the article.<p>In the second table, LoL esports is explicitly highlighted as a success by mindshare, but not profitability. And below that:<p>> LoL Esports: loses hundreds of millions of dollars annually, exists solely as a marketing mechanism to get people to play the actual game</p>
]]></description><pubDate>Sat, 08 Nov 2025 15:06:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45857106</link><dc:creator>vitus</dc:creator><comments>https://news.ycombinator.com/item?id=45857106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45857106</guid></item></channel></rss>