<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: HippoBaro</title><link>https://news.ycombinator.com/user?id=HippoBaro</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 16:30:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=HippoBaro" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by HippoBaro in "Rust is just a tool"]]></title><description><![CDATA[
<p>> Says frustrating things like "so I can just use unsafe", because no you don't and if you do I would reject your changes immediately.<p>This is the kind of hostility (which is frankly toxic) that’s become associated with parts of the Rust community and has, fairly or not, driven away many talented people over time.</p>
]]></description><pubDate>Sat, 28 Feb 2026 09:46:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47192959</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=47192959</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47192959</guid></item><item><title><![CDATA[New comment by HippoBaro in "Swift is a more convenient Rust (2023)"]]></title><description><![CDATA[
<p>> Rust invented the concept of ownership as a solution memory management issues without resorting to something slower like Garbage Collection or Reference Counting.<p>This is plain wrong, and it undermines the credibility of the author and the rest of the piece. Rust did not invent ownership in the abstract; it relies on plain RAII, a model that predates Rust by decades and was popularized by C++. What Rust adds is a compile-time borrow checker that enforces ownership and lifetime rules statically, not a fundamentally new memory-management paradigm.</p>
]]></description><pubDate>Sun, 01 Feb 2026 00:03:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46842196</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=46842196</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46842196</guid></item><item><title><![CDATA[New comment by HippoBaro in "Interview with RollerCoaster Tycoon's Creator, Chris Sawyer (2024)"]]></title><description><![CDATA[
<p>> It actually took a lot longer to re-write the game in C++ than it took me to write the original machine code version 20 years earlier.<p>That is the most interesting quote, IMO. I often feel like productivity has gone down significantly in recent years, despite tooling and computers being more numerous/sophisticated/fast.</p>
]]></description><pubDate>Wed, 03 Dec 2025 06:48:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46130965</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=46130965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46130965</guid></item><item><title><![CDATA[New comment by HippoBaro in "Systems Programming with Zig"]]></title><description><![CDATA[
<p>I’m very excited for Zig personally, but calling it “ultra reliable” feels very premature.<p>The language isn’t even stable, which is pretty much the opposite of something you can rely on.<p>We’ll know in many years if it was something worth relying on.</p>
]]></description><pubDate>Sat, 04 Oct 2025 23:44:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45477752</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=45477752</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45477752</guid></item><item><title><![CDATA[New comment by HippoBaro in "Microsoft has urged its employees on H-1B and H-4 visas to return immediately"]]></title><description><![CDATA[
<p>It was nighttime in Singapore when the ruling was announced. My husband and I scrambled to find a flight back. The best we could find, at any price, lands 25 minutes after the deadline.<p>We are on our way there.</p>
]]></description><pubDate>Sat, 20 Sep 2025 13:29:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45313214</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=45313214</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45313214</guid></item><item><title><![CDATA[New comment by HippoBaro in "Hitting Peak File IO Performance with Zig"]]></title><description><![CDATA[
<p>It’s really the hardware block size that matters in this case (direct I/O). That value is a property of the hardware and can’t be changed.<p>In some situations, the “logical” block size can differ. For example, buffered writes use the page cache, which operates in PAGE_SIZE blocks (usually 4K). Or your RAID stripe size might be misconfigured, stuff like that. Otherwise they should be equal for best outcomes.<p>In general, we want it to be as small as possible!</p>
]]></description><pubDate>Sun, 07 Sep 2025 06:52:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45155982</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=45155982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45155982</guid></item><item><title><![CDATA[New comment by HippoBaro in "(On | No) Syntactic Support for Error Handling"]]></title><description><![CDATA[
<p>Just to add my two cents—I’ve been writing Go professionally for about 10 years, and neither I nor any of my colleagues have had real issues with how Go handles errors.<p>Newcomers often push back on this aspect of the language (among other things), but in my experience, that usually fades as they get more familiar with Go’s philosophy and design choices.<p>As for the Go team’s decision process, I think it’s a good thing that the lack of consensus over a long period and many attempts can prompt them to formally define a position.</p>
]]></description><pubDate>Tue, 03 Jun 2025 17:29:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44172444</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=44172444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44172444</guid></item><item><title><![CDATA[New comment by HippoBaro in "Precision Clock Mk IV"]]></title><description><![CDATA[
<p>I’m amazed by the ambition, technical brilliance, and relentless dedication behind some personal projects on display here.<p>All of this for a clock! I don’t get it, but I’m in awe.</p>
]]></description><pubDate>Sat, 31 May 2025 18:38:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44146154</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=44146154</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44146154</guid></item><item><title><![CDATA[New comment by HippoBaro in "Cutting down Rust compile times from 30 to 2 minutes with one thousand crates"]]></title><description><![CDATA[
<p>Eminently pragmatic solution — I like it. In Rust, a crate is a compilation unit, and the compiler has limited parallelism opportunities, especially since rustc offloads much of the work to LLVM, which is largely single-threaded.<p>It’s not surprising they didn’t see a linear speedup from splitting into so many crates. The compiler now produces a large number of intermediate object files that must be read back and linked into the final binary. On top of that, rustc caches a significant amount of semantic information — lifetimes, trait resolutions, type inference — much of which now has to be recomputed for each crate, including dependencies. That introduces a lot of redundant work.<p>I also would expect this to hurt runtime performance as it likely reduces inlining opportunities (unless LTO is really good now?)</p>
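On the last point: cross-crate inlining can be partially recovered with link-time optimization. A Cargo profile along these lines (standard Cargo keys; whether the article's authors used it is not stated) trades a slower, mostly single-threaded link stage for whole-program optimization:

```toml
# "fat" LTO optimizes across all crates at link time, restoring many of
# the inlining opportunities lost to the crate split -- at the cost of
# a longer, mostly serial link stage.
[profile.release]
lto = "fat"
codegen-units = 1
```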
]]></description><pubDate>Thu, 17 Apr 2025 12:06:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=43715593</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=43715593</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43715593</guid></item><item><title><![CDATA[New comment by HippoBaro in "Nyxpsi – A Next-Gen Network Protocol for Extreme Packet Loss"]]></title><description><![CDATA[
<p>It would be great to know a bit more about the protocol itself in the readme. I’m left wondering whether it’s reliable, connection-oriented, stream- or message-based, etc.</p>
]]></description><pubDate>Tue, 17 Sep 2024 01:26:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=41563054</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=41563054</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41563054</guid></item><item><title><![CDATA[New comment by HippoBaro in "Asynchronous IO: the next billion-dollar mistake?"]]></title><description><![CDATA[
<p>I am not sure I buy the underlying idea behind this piece, that somehow a lot of money/time has been invested into asynchronous IO at the expense of thread performance (creation time, context switch time, scheduler efficiency, etc.).<p>First, significant work has been done in the kernel in that area simply because any gains there massively impact application performance and energy efficiency, two things the big kernel sponsors deeply care about.<p>Second, asynchronous IO in the kernel has actually been underinvested in for years. Async disk IO did not exist at all until AIO came to be. And even that was a half-baked, awful API no one wanted to use, except for some database people who needed it badly enough to be willing to put up with it. It's a somewhat recent development that really fast, genuinely async IO has taken center stage through io_uring and the likes of AF_XDP.</p>
]]></description><pubDate>Sat, 07 Sep 2024 06:58:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=41472056</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=41472056</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41472056</guid></item><item><title><![CDATA[New comment by HippoBaro in "Asynchronous IO: the next billion-dollar mistake?"]]></title><description><![CDATA[
<p>> Under the asynchronous model, both timeouts and cancellation simply compose. You take a future representing the work you're doing, and spawn a new future that completes after sleeping for some duration, or spawn a new future that waits on a cancel channel. Then you just race these futures. Take whichever completes first and cancel the other.<p>That only works when what you're trying to do has no side effect. Consider what happens when you need to cancel a write to a file or a stream. Did you write everything? Something? Nothing? What's the state of the file/stream at this point?<p>Unfortunately, this is intractable: you'll need the underlying system to let you know, which means you will have to wait for it to return. Therefore, if these operations should have a deadline, you'll need to be able to communicate that to the kernel.</p>
]]></description><pubDate>Sat, 07 Sep 2024 06:47:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=41472008</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=41472008</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41472008</guid></item><item><title><![CDATA[New comment by HippoBaro in "Clang vs. Clang"]]></title><description><![CDATA[
<p>I think the author knows very well what UB is and means. But he’s thinking critically about the whole system.<p>UB is meant to add value. It’s possible to write a language without it, so why do we have any UB at all? We do because of portability and because it gives flexibility to compiler writers.<p>The post is all about whether this flexibility is worth it when compared with the difficulty of writing programs without UB.<p>The author makes the case that (1) there seems to be more money lost on bugs than money saved on faster generated code and (2) there’s an unwillingness to do something about it because compiler writers have a lot of weight when it comes to what goes into language standards.</p>
]]></description><pubDate>Sat, 03 Aug 2024 17:40:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=41148010</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=41148010</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41148010</guid></item><item><title><![CDATA[New comment by HippoBaro in "Borgo is a statically typed language that compiles to Go"]]></title><description><![CDATA[
<p>Go has an amazing runtime and tool ecosystem, but I’ve always missed having a bit more type safety (especially Rust enums). Neat!</p>
]]></description><pubDate>Tue, 30 Apr 2024 16:14:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=40212665</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=40212665</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40212665</guid></item><item><title><![CDATA[New comment by HippoBaro in "Garbage collection for systems programmers (2023)"]]></title><description><![CDATA[
<p>For the kind of software I write there are two cases: (1) the hot path, for which I will always have custom allocators and avoid allocations, and (2) everything else.<p>For (1), GC or not makes no difference; I’ll opt out. For (2), GC is really convenient and correct.</p>
]]></description><pubDate>Sat, 30 Mar 2024 19:34:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=39877880</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=39877880</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39877880</guid></item><item><title><![CDATA[New comment by HippoBaro in "JetBrains IDE new Terminal Interface"]]></title><description><![CDATA[
<p>A few years ago my company moved to using Bazel as our build system. JetBrains IDEs have a plug-in for Bazel that’s incredibly slow and buggy. It’s a shame because I used to really enjoy using their product, but now it’s barely usable.</p>
]]></description><pubDate>Thu, 22 Feb 2024 14:17:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=39467376</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=39467376</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39467376</guid></item><item><title><![CDATA[New comment by HippoBaro in "SSDs have become fast, except in the cloud"]]></title><description><![CDATA[
<p>As a database engineer who worked extensively with both i3 and i4 instances, I want to add that although i4 has lower IOPS, the latency distribution of IO ops is an order of magnitude better.<p>IOPS indeed matters a lot, but so does latency! For our use case, it was much easier to saturate those disks than the old i3s, and we attribute that to the better latencies, which make IO scheduling a lot more accurate.</p>
]]></description><pubDate>Wed, 21 Feb 2024 05:49:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=39450577</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=39450577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39450577</guid></item><item><title><![CDATA[New comment by HippoBaro in "Goodbye, clean code (2020)"]]></title><description><![CDATA[
<p>Abstracting code is always a dangerous gamble because abstractions rely on invariants. In this case, “the shapes all resize the same way”.<p>When the invariants break, the abstraction collapses and the code can become much more convoluted than it was originally. I’ve seen it many times.<p>In my career I’ve seen many, many bad code abstractions and very few good ones, as measured by how long before they break.<p>I’ve asked the engineers who came up with those if there was a trick to it, and the answer has always been “dude, it’s the 10th time in my career I’m writing that stuff”.<p>Good abstractions come from domain experience. If you’re writing something new, don’t abstract it. If you feel smart about it, that’s a bad sign. You don’t feel smart when you’re writing something for the 10th time.</p>
]]></description><pubDate>Fri, 08 Dec 2023 15:15:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=38570000</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=38570000</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38570000</guid></item><item><title><![CDATA[New comment by HippoBaro in "Banging errors in Go"]]></title><description><![CDATA[
<p>There's absolutely nothing in Rust that mandates error handling. You can always ignore errors, just like in any other language.</p>
]]></description><pubDate>Fri, 20 Oct 2023 10:20:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=37954327</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=37954327</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37954327</guid></item><item><title><![CDATA[New comment by HippoBaro in "Banging errors in Go"]]></title><description><![CDATA[
<p>This! These fancy operators add a lot of complexity in practice. Trivial things in Go, like error wrapping (to add local context), become much more complicated with a `?` operator.<p>Go made this super simple. I write and review a lot of Go code daily, and I don't quite get how these error branches are such a big issue. The code is always very simple to follow.</p>
]]></description><pubDate>Fri, 20 Oct 2023 10:16:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=37954288</link><dc:creator>HippoBaro</dc:creator><comments>https://news.ycombinator.com/item?id=37954288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37954288</guid></item></channel></rss>