<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dist1ll</title><link>https://news.ycombinator.com/user?id=dist1ll</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 10:25:33 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dist1ll" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dist1ll in "Building a 25 Gbit/s workstation for the SCION Association"]]></title><description><![CDATA[
<p>Your MS-01 routes line-rate 25Gbps in software with VyOS w/o kernel bypass? That's very surprising to me. At what packet sizes?</p>
]]></description><pubDate>Tue, 13 Jan 2026 08:59:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46598708</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=46598708</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46598708</guid></item><item><title><![CDATA[New comment by dist1ll in "Logging sucks"]]></title><description><![CDATA[
<p>Sorry for the OT response, I was curious about this comment[0] you made a while back. How did you measure memory transfer speed?<p>[0] <a href="https://news.ycombinator.com/item?id=38820893">https://news.ycombinator.com/item?id=38820893</a></p>
]]></description><pubDate>Tue, 23 Dec 2025 03:04:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46361969</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=46361969</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46361969</guid></item><item><title><![CDATA[New comment by dist1ll in "What Does a Database for SSDs Look Like?"]]></title><description><![CDATA[
<p>The Aurora paper [0] goes into detail on correlated failures.<p>> <i>In Aurora, we have chosen a design point of tolerating (a) losing an entire AZ and one additional node (AZ+1) without losing data, and (b) losing an entire AZ without impacting the ability to write data. [..] With such a model, we can (a) lose a single AZ and one additional node (a failure of 3 nodes) without losing read availability, and (b) lose any two nodes, including a single AZ failure and maintain write availability.</i><p>As for why this can be considered durable enough, section 2.2 gives an argument based on their MTTR (mean time to repair) of storage segments.<p>> <i>We would need to see two such failures in the same 10 second window plus a failure of an AZ not containing either of these two independent failures to lose quorum. At our observed failure rates, that’s sufficiently unlikely, even for the number of databases we manage for our customers.</i><p>[0] <a href="https://pages.cs.wisc.edu/~yxy/cs764-f20/papers/aurora-sigmod-17.pdf" rel="nofollow">https://pages.cs.wisc.edu/~yxy/cs764-f20/papers/aurora-sigmo...</a></p>
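<p>A quick sanity check of the quorum arithmetic behind the quoted passage, using the paper's parameters (V = 6 copies across 3 AZs, write quorum Vw = 4, read quorum Vr = 3); this is an illustrative sketch, not code from the paper:

```rust
// Standard quorum conditions: every read quorum must intersect every write
// quorum, and any two write quorums must overlap.
fn quorums_valid(v: u32, vw: u32, vr: u32) -> bool {
    vr + vw > v && 2 * vw > v
}

fn main() {
    // Aurora's parameters per the quoted paper.
    let (v, vw, vr) = (6, 4, 3);
    assert!(quorums_valid(v, vw, vr));
    // (a) AZ+1 failure: 2 + 1 = 3 of 6 copies lost, 3 remain -> read quorum holds.
    assert!(v - 3 >= vr);
    // (b) one AZ lost: 2 copies gone, 4 remain -> write quorum holds.
    assert!(v - 2 >= vw);
    println!("quorum checks pass");
}
```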
]]></description><pubDate>Sat, 20 Dec 2025 13:58:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46336238</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=46336238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46336238</guid></item><item><title><![CDATA[New comment by dist1ll in "What Does a Database for SSDs Look Like?"]]></title><description><![CDATA[
<p>> "surely if I send request to 5 nodes some of that will land on disk in reasonably near future?"<p>That would be asynchronous replication. But IIUC the author is instead advocating for a distributed log with <i>synchronous</i> quorum writes.</p>
]]></description><pubDate>Sat, 20 Dec 2025 12:41:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46335751</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=46335751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46335751</guid></item><item><title><![CDATA[New comment by dist1ll in "What Does a Database for SSDs Look Like?"]]></title><description><![CDATA[
<p>Is there more detail on the design of the distributed multi-AZ journal? That feels like the meat of the architecture.</p>
]]></description><pubDate>Sat, 20 Dec 2025 12:37:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46335733</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=46335733</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46335733</guid></item><item><title><![CDATA[New comment by dist1ll in "Revisiting "Let's Build a Compiler""]]></title><description><![CDATA[
<p>As long as your target language has a strict define-before-use rule and no advanced inference is required you will know the types of expressions, and can perform type-based optimizations. You can also do constant folding and (very rudimentary) inlining. But the best optimizations are done on IRs, which you don't have access to in an old-school single-pass design. LICM, CSE, GVN, DCE, and all the countless loop opts are not available to you. You'll also spill to memory a lot, because you can't run a decent regalloc in a single pass.<p>I'm actually a big fan of function-by-function dual-pass compilation. You generate IR from the parser in one pass, and do codegen right after. Most intermediate state is thrown out (including the AST, for non-polymorphic functions) and you move on to the next function. This gives you an extremely fast data-oriented baseline compiler with reasonable codegen (much better than something like tcc).</p>
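<p>A minimal sketch of that dual-pass shape (all names and the canned IR are illustrative, not a real compiler): pass 1 turns source into IR, pass 2 runs codegen immediately after, and all intermediate state is dropped before the next function:

```rust
#[derive(Debug, Clone, Copy)]
enum Inst { Const(i64), Add(u32, u32), Ret(u32) }

// Pass 1: the "parser" emits IR directly; canned output for brevity.
fn lower_function(_src: &str) -> Vec<Inst> {
    vec![Inst::Const(2), Inst::Const(3), Inst::Add(0, 1), Inst::Ret(2)]
}

// Pass 2: codegen straight off the IR (stubbed as pretty-printing).
fn codegen(ir: &[Inst]) -> Vec<String> {
    ir.iter().map(|inst| format!("{inst:?}")).collect()
}

fn main() {
    for func in ["fn a()", "fn b()"] {
        let ir = lower_function(func); // pass 1: parse -> IR
        let asm = codegen(&ir);        // pass 2: IR -> code
        println!("{func}: {} lines", asm.len());
        // `ir` (and any AST-like state) is dropped here, before the
        // next function is compiled -- keeping the working set tiny.
    }
}
```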
]]></description><pubDate>Wed, 10 Dec 2025 13:04:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46217224</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=46217224</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46217224</guid></item><item><title><![CDATA[New comment by dist1ll in "StardustOS: Library operating system for building light-weight Unikernels"]]></title><description><![CDATA[
<p>I would argue that stateful services (databases, message queues, CDNs) all perfectly fit the unikernel model. The question is whether the additional engineering effort and system design is worth the performance gain.</p>
]]></description><pubDate>Fri, 05 Dec 2025 07:19:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46157645</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=46157645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46157645</guid></item><item><title><![CDATA[New comment by dist1ll in "Why xor eax, eax?"]]></title><description><![CDATA[
<p>Another one is "jalr x0, imm(x0)", which turns an indirect branch into a direct jump to address "imm" in a single instruction w/o clobbering a register. Pretty neat.</p>
]]></description><pubDate>Mon, 01 Dec 2025 14:04:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46107549</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=46107549</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46107549</guid></item><item><title><![CDATA[New comment by dist1ll in "Cloudflare outage on November 18, 2025 post mortem"]]></title><description><![CDATA[
<p>I do use a combination of newtyped indices + singleton arenas for data structures that only grow (like the AST). But for the IR, being able to remove nodes from the graph is very important. So phantom typing wouldn't work in that case.</p>
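<p>The newtyped-index + grow-only arena pattern mentioned above looks roughly like this (a sketch with illustrative names; in practice each arena gets its own index newtype so ids from different arenas can't be mixed up):

```rust
use std::ops::Index;

#[derive(Clone, Copy, PartialEq, Debug)]
struct AstId(u32); // newtype: only valid as an index into AstArena

struct AstArena<T> { items: Vec<T> }

impl<T> AstArena<T> {
    fn new() -> Self { AstArena { items: Vec::new() } }
    // Grow-only: alloc hands out a typed id, and since nothing is ever
    // removed, every AstId stays valid for the arena's lifetime.
    fn alloc(&mut self, item: T) -> AstId {
        let id = AstId(self.items.len() as u32);
        self.items.push(item);
        id
    }
}

impl<T> Index<AstId> for AstArena<T> {
    type Output = T;
    fn index(&self, id: AstId) -> &T { &self.items[id.0 as usize] }
}

fn main() {
    let mut ast: AstArena<&str> = AstArena::new();
    let lhs = ast.alloc("2");
    let op = ast.alloc("+");
    let rhs = ast.alloc("3");
    println!("{} {} {}", ast[lhs], ast[op], ast[rhs]); // prints "2 + 3"
}
```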
]]></description><pubDate>Wed, 19 Nov 2025 16:09:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45981272</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=45981272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45981272</guid></item><item><title><![CDATA[New comment by dist1ll in "Cloudflare outage on November 18, 2025 post mortem"]]></title><description><![CDATA[
<p>Sure, these days I'm mostly working on a few compilers. Let's say I want to make a fixed-size SSA IR. Each instruction has an opcode and two operands (which are essentially pointers to other instructions). The IR is populated in one phase, and then lowered in the next. During lowering I run a few peephole and code motion optimizations on the IR, and then do regalloc + asm codegen. During that pass the IR is mutated and indices are invalidated/updated. The important thing is that this phase is extremely performance-critical.</p>
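<p>For concreteness, a fixed-size IR in the shape described above might look like this (illustrative sketch, not the actual compiler): one opcode plus two operand slots, where operands are indices of earlier instructions, and a peephole pass mutates the flat buffer in place:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Op { Const(i64), Add }

// Fixed-size instruction: opcode + two operand slots (instruction indices).
#[derive(Clone, Copy)]
struct Inst { op: Op, a: u32, b: u32 }

// Tiny peephole pass: fold an Add whose operands are both Consts.
fn fold_constants(ir: &mut [Inst]) {
    for i in 0..ir.len() {
        let Inst { op, a, b } = ir[i];
        if op == Op::Add {
            if let (Op::Const(x), Op::Const(y)) = (ir[a as usize].op, ir[b as usize].op) {
                ir[i] = Inst { op: Op::Const(x + y), a: 0, b: 0 };
            }
        }
    }
}

fn main() {
    let c = |v| Inst { op: Op::Const(v), a: 0, b: 0 };
    let mut ir = vec![c(2), c(3), Inst { op: Op::Add, a: 0, b: 1 }];
    fold_constants(&mut ir);
    println!("{:?}", ir[2].op); // prints "Const(5)"
}
```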
]]></description><pubDate>Wed, 19 Nov 2025 03:06:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45975387</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=45975387</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45975387</guid></item><item><title><![CDATA[New comment by dist1ll in "Cloudflare outage on November 18, 2025 post mortem"]]></title><description><![CDATA[
<p>If that's the case then hats off. What you're describing is definitely not what I've seen in practice. In fact, I don't think I've ever seen a crate or production codebase that documents infallibility of every single slice access. Even security-critical cryptography crates that passed audits don't do that. Personally, I found it quite hard to avoid indexing for graph-heavy code, so I'm always on the lookout for interesting ways to enforce access safety. If you have some code to share that would be very interesting.</p>
]]></description><pubDate>Wed, 19 Nov 2025 02:04:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45975023</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=45975023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45975023</guid></item><item><title><![CDATA[New comment by dist1ll in "Cloudflare outage on November 18, 2025 post mortem"]]></title><description><![CDATA[
<p>For iteration, yes. But there are other cases, like any time you have to deal with lots of linked data structures. If you need high performance, chances are that you'll have to use an index+arena strategy. They're also common in mathematical codebases.</p>
]]></description><pubDate>Wed, 19 Nov 2025 01:41:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=45974872</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=45974872</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45974872</guid></item><item><title><![CDATA[New comment by dist1ll in "Cloudflare outage on November 18, 2025 post mortem"]]></title><description><![CDATA[
<p>> every unwrap in production code needs an INFALLIBILITY comment. clippy::unwrap_used can enforce this.<p>How about indexing into a slice/map/vec? Should every `foo[i]` have an infallibility comment? Because they're essentially `get(i).unwrap()`.</p>
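<p>The equivalence claimed above, made concrete (toy example): on a slice, `foo[i]` behaves like `foo.get(i).unwrap()`, and both panic on an out-of-bounds index:

```rust
fn main() {
    let foo = [10, 20, 30];
    let i = 1;
    // INFALLIBILITY: i < foo.len() by construction in this toy example.
    assert_eq!(foo[i], *foo.get(i).unwrap());

    // With get(), the failure surfaces as a recoverable None instead of
    // a panic; indexing with the same out-of-bounds value would panic
    // with "index out of bounds".
    assert!(foo.get(foo.len()).is_none());
    println!("ok");
}
```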
]]></description><pubDate>Wed, 19 Nov 2025 01:11:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45974627</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=45974627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45974627</guid></item><item><title><![CDATA[New comment by dist1ll in "WASM 3.0 Completed"]]></title><description><![CDATA[
<p>WASM traps on out-of-bounds accesses (including overflow). Masking addresses would hide that.</p>
]]></description><pubDate>Thu, 18 Sep 2025 03:32:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45284990</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=45284990</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45284990</guid></item><item><title><![CDATA[New comment by dist1ll in "How is Ultrassembler so fast?"]]></title><description><![CDATA[
<p>Ditto. Perfect hashing strings smaller than 8 bytes has been the fastest lookup method in my experience.</p>
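<p>The core of the "strings smaller than 8 bytes" trick is packing the bytes into a u64 and comparing/hashing on that integer. A sketch (keyword set and ids are made up; a real perfect hash would map these keys to a dense table with a single multiply/shift instead of a compare chain):

```rust
// Pack a short string into a u64, zero-padded (little-endian).
fn pack(s: &str) -> u64 {
    debug_assert!(s.len() <= 8);
    let mut buf = [0u8; 8];
    buf[..s.len()].copy_from_slice(s.as_bytes());
    u64::from_le_bytes(buf)
}

// Lookup by integer compare -- one u64 comparison per candidate keyword.
fn keyword_id(s: &str) -> Option<u32> {
    if s.len() > 8 { return None; }
    match pack(s) {
        k if k == pack("fn") => Some(0),
        k if k == pack("let") => Some(1),
        k if k == pack("return") => Some(2),
        _ => None,
    }
}

fn main() {
    assert_eq!(keyword_id("let"), Some(1));
    assert_eq!(keyword_id("notakw"), None);
    println!("ok");
}
```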
]]></description><pubDate>Sun, 31 Aug 2025 21:32:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45087306</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=45087306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45087306</guid></item><item><title><![CDATA[New comment by dist1ll in "I'm working on implementing a programming language all my own"]]></title><description><![CDATA[
<p>> because that is what it means in mathematics<p>Personally, I think this argument only holds water for languages that are rooted in mathematics (e.g. Haskell, Lean, Rocq, F*, ...). If your computational model comes from a place of physical hardware, instructions, registers, memory etc. you're going to end up with something very different than an abstract machine based on lambda calculus. Both are valid ways to design a PL.</p>
]]></description><pubDate>Sat, 30 Aug 2025 01:16:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45071099</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=45071099</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45071099</guid></item><item><title><![CDATA[New comment by dist1ll in "John Carmack's arguments against building a custom XR OS at Meta"]]></title><description><![CDATA[
<p>Intel still does it. As far as I can see they're the only player in town that provides open, detailed documentation for their high-speed NICs [0]. You can actually write a driver for their 100Gb cards from scratch using their datasheet. Most other vendors would either (1) ignore you, (2) make you sign an NDA or (3) refer you to their poorly documented Linux/BSD driver.<p>Not sure what the situation is for other hardware like NVMe SSDs.<p>[0] 2750 page datasheet for the e810 Ethernet controller <a href="https://www.intel.com/content/www/us/en/content-details/613875/intel-ethernet-controller-e810-datasheet.html" rel="nofollow">https://www.intel.com/content/www/us/en/content-details/6138...</a></p>
]]></description><pubDate>Fri, 29 Aug 2025 22:57:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45070345</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=45070345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45070345</guid></item><item><title><![CDATA[New comment by dist1ll in "Cloudflare Is Not a CDN"]]></title><description><![CDATA[
<p>> While latency from a conventional CDN is usually < 80ms, with Cloudflare, I have frequently seen it to be in 150-300ms<p>So since magecdn is built on top of Cloudflare, how do they guarantee low latency?</p>
]]></description><pubDate>Mon, 11 Aug 2025 18:11:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=44867457</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=44867457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44867457</guid></item><item><title><![CDATA[New comment by dist1ll in "Fast"]]></title><description><![CDATA[
<p>It's fascinating to me how the values and priorities of a project's leaders affect the community and its dominant narrative. I always wondered how it was possible for so many people in the Rust community to share such a strong view on soundness, undefined behavior, thread safety etc. I think it's because people driving the project were actively shaping the culture.<p>Meanwhile, compiler performance just didn't have a strong advocate with the right vision of what could be done. At least that's my read on the situation.</p>
]]></description><pubDate>Thu, 31 Jul 2025 09:38:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=44743946</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=44743946</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44743946</guid></item><item><title><![CDATA[New comment by dist1ll in "Strategies for Fast Lexers"]]></title><description><![CDATA[
<p>You can get pretty far with a branch per byte, as long as the bulk of the work is done w/ SIMD (like character classification). But yeah, LUT lookup per byte is not recommended.</p>
]]></description><pubDate>Mon, 14 Jul 2025 17:23:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44562796</link><dc:creator>dist1ll</dc:creator><comments>https://news.ycombinator.com/item?id=44562796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44562796</guid></item></channel></rss>