<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dotwaffle</title><link>https://news.ycombinator.com/user?id=dotwaffle</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 14 May 2026 15:15:29 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dotwaffle" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dotwaffle in "A History of IDEs at Google"]]></title><description><![CDATA[
<p>If you're the author of <a href="https://github.com/hanwen/go-fuse/" rel="nofollow">https://github.com/hanwen/go-fuse/</a> -- thank you :D</p>
]]></description><pubDate>Thu, 14 May 2026 11:20:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=48133855</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=48133855</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48133855</guid></item><item><title><![CDATA[New comment by dotwaffle in "A History of IDEs at Google"]]></title><description><![CDATA[
<p>As someone who predominantly writes in Go, I found cider-v a massive step backwards compared to cider. I eventually moved entirely over to vim (with the set of internal plugins for blaze etc), which was so much more useful, but I still missed the features of a proper IDE that cider just excelled at.<p>I imagine a lot of it came from that push to "use outside-world tools more rather than writing our own", which is great in theory but really felt like a huge leap backwards in terms of convergence.</p>
]]></description><pubDate>Thu, 14 May 2026 02:29:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=48130424</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=48130424</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48130424</guid></item><item><title><![CDATA[New comment by dotwaffle in "A History of IDEs at Google"]]></title><description><![CDATA[
<p>> flow-crushing remote desktop latency<p>Yeah, I was working out of the Sydney office. Almost everything was incredibly slow due to that latency -- not just chromoting, but also accessing most sites through beyondcorp.</p>
]]></description><pubDate>Thu, 14 May 2026 02:13:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=48130338</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=48130338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48130338</guid></item><item><title><![CDATA[New comment by dotwaffle in "A History of IDEs at Google"]]></title><description><![CDATA[
<p>/me shudders. cider-v...</p>
]]></description><pubDate>Thu, 14 May 2026 02:10:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=48130327</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=48130327</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48130327</guid></item><item><title><![CDATA[New comment by dotwaffle in "A History of IDEs at Google"]]></title><description><![CDATA[
<p>Cider (and p4/g4c etc) was amazing when I left back in 2020; I loved it so much, and truly miss it. I rejoined Google last year, and they'd replaced it with a VSCode clone that truly was just a glorified text editor, with most people all-in on mercurial as a piper/citc shim. I was only there for 5 months before I decided not to stay, and I never managed to get Go type-definition hints working.</p>
]]></description><pubDate>Thu, 14 May 2026 02:08:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=48130314</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=48130314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48130314</guid></item><item><title><![CDATA[New comment by dotwaffle in "I returned to AWS and was reminded why I left"]]></title><description><![CDATA[
<p>That's what Locally Recoverable Codes (including Pyramid Codes) are designed to address: you can repair a missing block using only a subset of the blocks, which you can make sure are placed in just one zone, eliminating the cross-az bandwidth requirement. Sure, if you lose multiple blocks you'll end up needing that extra bandwidth, but the chances of having two blocks offline at once are very low -- and if it's just a failure rather than extended maintenance, you've probably already repaired the first block by the time the second fails.<p>In fact, for object stores the RS configuration is often in the range of 50+ data blocks and 10+ parity blocks (albeit with an LRC/nested RS config), because that recoverability matters more than repair efficiency. One large provider I worked at did have a system whereby they effectively kept two copies of the RS-encoded data (so that 130% turned into 260%) across two AZs, but they were actively in the process of swapping to the blocks being evenly distributed across the AZs, near-halving the total required disk space.<p>As I said before, most object storage is not on SSD, it's on hard disk: it's 20% of the price per TB, and most objects are read very infrequently. I can promise you they're not paying $200 for 1TB of SSD either... I realise prices have been higher than sensible over the last 6 months, but only a year ago it was fairly easy to pick up SSDs for under $50/TB at retail pricing (and hard disks for under $10/TB).</p>
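<p>To make the repair-locality point concrete, here's a toy sketch in Go -- plain XOR parities standing in for a real LRC construction, with made-up block contents. Two local groups each get their own parity, so a single lost block is rebuilt entirely from blocks in its own group, i.e. from within one zone:<p><pre><code>package main

import "fmt"

// xorParity returns the byte-wise XOR of the given equal-length blocks.
func xorParity(blocks [][]byte) []byte {
    p := make([]byte, len(blocks[0]))
    for _, b := range blocks {
        for i, v := range b {
            p[i] ^= v
        }
    }
    return p
}

func main() {
    // Six data blocks in two local groups of three (toy contents).
    groupA := [][]byte{[]byte("aaaa"), []byte("bbbb"), []byte("cccc")}
    groupB := [][]byte{[]byte("dddd"), []byte("eeee"), []byte("ffff")}

    // One local parity per group, stored alongside its group's zone.
    parityA := xorParity(groupA)
    _ = xorParity(groupB) // parityB, kept in group B's zone

    // Lose groupA[1]: repair reads only groupA[0], groupA[2] and parityA,
    // all in one zone -- no cross-zone traffic, unlike a global RS repair.
    repaired := xorParity([][]byte{groupA[0], groupA[2], parityA})
    fmt.Printf("repaired block: %s\n", repaired) // prints "bbbb"
}</code></pre><p>A real LRC adds global parities on top so multi-block failures are still recoverable; the local parities just make the common single-failure case cheap.</p>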
]]></description><pubDate>Tue, 12 May 2026 12:04:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=48107076</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=48107076</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48107076</guid></item><item><title><![CDATA[New comment by dotwaffle in "I returned to AWS and was reminded why I left"]]></title><description><![CDATA[
<p>GCS / Azure can survive a total AZ failure too. Think of 3 x replicas as RAID-1, whereas erasure coding is more like RAID-6 -- only it's actually more resilient:<p>Let's say you have 10 data blocks and 4 parity blocks. You can now lose 4 servers containing a block and still repair the data, whereas with 3 x replicas you can only lose 2, and you have to store everything at 300% of its size instead of only 140%.<p>And yes, it <i>is</i> unreasonable how much they charge for both storage and inter-az bandwidth.</p>
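<p>If you want to play with this, here's a minimal sketch of that 10+4 layout using the klauspost/reedsolomon Go library (shard counts and data are just my example numbers from above):<p><pre><code>package main

import (
    "bytes"
    "fmt"
    "log"

    "github.com/klauspost/reedsolomon"
)

func main() {
    // 10 data shards + 4 parity shards: storage is 140% of the
    // original, and any 4 shards can be lost without data loss.
    enc, err := reedsolomon.New(10, 4)
    if err != nil {
        log.Fatal(err)
    }

    data := bytes.Repeat([]byte("hello object storage! "), 1000)

    // Split returns 14 shards: 10 filled with data, 4 empty for parity.
    shards, err := enc.Split(data)
    if err != nil {
        log.Fatal(err)
    }
    if err := enc.Encode(shards); err != nil { // fill the 4 parity shards
        log.Fatal(err)
    }

    // Simulate losing any 4 servers by nil-ing their shards.
    shards[0], shards[3], shards[7], shards[12] = nil, nil, nil, nil

    if err := enc.Reconstruct(shards); err != nil { // rebuild from the other 10
        log.Fatal(err)
    }
    ok, _ := enc.Verify(shards)
    fmt.Println("reconstructed and verified:", ok)
}</code></pre></p>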
]]></description><pubDate>Mon, 11 May 2026 21:14:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=48100778</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=48100778</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48100778</guid></item><item><title><![CDATA[New comment by dotwaffle in "I returned to AWS and was reminded why I left"]]></title><description><![CDATA[
<p>Is it 3x redundancy forever? I always just kind of assumed it was RS-encoded after a while, so only 30-50% larger than a single copy. Plus, almost all object storage is written to / read from hard disks, not SSDs -- unless it's in a caching layer, that is.<p>I know Azure has done a bunch of work around Pyramid Codes (essentially a locally repairable EC/RS variant), and Google obviously has the Colossus infrastructure that allows variable encodings; I'd be surprised if AWS is still triple-replicated everywhere.</p>
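<p>For the arithmetic behind that 30-50% figure: overhead is (data + parity) / data, so a 10+4 config stores 1.4x the original bytes (40% extra), 12+4 is about 1.33x, and a wide 50+10-style config is just 1.2x -- against 3.0x for triple replication.</p>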
]]></description><pubDate>Mon, 11 May 2026 20:06:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=48099972</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=48099972</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48099972</guid></item><item><title><![CDATA[New comment by dotwaffle in "I returned to AWS and was reminded why I left"]]></title><description><![CDATA[
<p>Insanely high S3 storage charges too. $23/TB/month? Even with the inflated HDD pricing we see today, that's paying off, in one month at retail prices, a drive that will last for 50-100 months. Sure, there's probably some encoding overhead, but it's still mad.</p>
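<p>To spell it out: $23/TB/month over a 5-year drive life is $1,380/TB, against retail disk at perhaps $10-20/TB -- so the hardware is paid off in the first month, and everything after that covers servers, power, encoding overhead, and margin.</p>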
]]></description><pubDate>Mon, 11 May 2026 17:10:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=48097703</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=48097703</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48097703</guid></item><item><title><![CDATA[New comment by dotwaffle in "I returned to AWS and was reminded why I left"]]></title><description><![CDATA[
<p>> Just create separate AWS accounts for separate services<p>My understanding is that different AWS accounts have different mappings of availability zone names to physical zones, so it's very easy to suddenly find yourself with an unexpected bandwidth bill from all the cross-az traffic.<p>It's always irritated me that AWS (and the other large cloud providers) charge $0.01/GB for cross-az traffic. That's $3.24/Mbps/month -- about what I was paying for internet transit (as in: from London to anywhere in the world) 20 years ago, and this is just between two datacenters in the same city controlled by the same organisation. The markup must be 10,000x or more, considering these places are cross-connected with massive bundles of fiber!</p>
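<p>The conversion, for anyone checking: 1 Mbps sustained is 0.125 MB/s, and a 30-day month is ~2,592,000 seconds, so 0.125 x 2,592,000 = 324,000 MB = 324 GB; at $0.01/GB that's $3.24 per Mbps per month.</p>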
]]></description><pubDate>Mon, 11 May 2026 10:11:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=48093091</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=48093091</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48093091</guid></item><item><title><![CDATA[New comment by dotwaffle in "Days without GitHub incidents"]]></title><description><![CDATA[
<p>Commits or pushes? Commits aren't really a worthwhile source of measurement in terms of load.</p>
]]></description><pubDate>Mon, 04 May 2026 17:59:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=48012390</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=48012390</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48012390</guid></item><item><title><![CDATA[New comment by dotwaffle in "US national level OS-level age verification bill proposed"]]></title><description><![CDATA[
<p>Someone came up with a good theory a while ago that I'm inclined to believe: the social media companies (esp. Meta, as I understand it) were looking at huge fines for showing adult content to under-18s, so they lobbied hard to ensure the burden of age verification fell on anyone but themselves -- which is why the OS vendors are being targeted now.<p>Ultimately, they seem to have realised they can't stop adult content from being shared, so the easiest path was to mark anything even plausibly adult as requiring age verification -- which comes with a lot of political cover vs. just deleting it.<p>Of course, if you stoke up the right people, you end up with lots of support from the puritanical brigades, and can label all naysayers as putting children in harm's way.</p>
]]></description><pubDate>Tue, 14 Apr 2026 23:38:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47772886</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=47772886</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47772886</guid></item><item><title><![CDATA[New comment by dotwaffle in "SSH has no Host header"]]></title><description><![CDATA[
<p>That's the point, though. An SSH key gives authentication, not authorization. Generally a certificate is a key signed by some other mutually trusted authority, which SSH explicitly tried to avoid.</p>
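<p>For the curious, OpenSSH does support certificates: a CA signs a user key with something like "ssh-keygen -s ca_key -I alice -n alice -V +52w id_ed25519.pub", and the server trusts the CA via TrustedUserCAKeys instead of listing individual keys in authorized_keys -- exactly the mutually-trusted-authority model described above.</p>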
]]></description><pubDate>Wed, 18 Mar 2026 06:18:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47422133</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=47422133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47422133</guid></item><item><title><![CDATA[New comment by dotwaffle in "Why Your Load Balancer Still Sends Traffic to Dead Backends"]]></title><description><![CDATA[
<p>I've never quite understood why there couldn't be a standardised "reverse" HTTP connection, from server to load balancer, over which connections are balanced -- standardised so that health signalling could be built in for easy/safe draining of connections.</p>
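<p>A toy sketch of the idea in Go (lb.internal:9000 is a hypothetical balancer address; real-world cousins of this pattern are reverse tunnels like inlets or Cloudflare Tunnel): the backend dials out, then serves HTTP over that single outbound connection, and draining becomes nothing more than flipping the health route to 503:<p><pre><code>package main

import (
    "net"
    "net/http"
)

// oneConnListener hands http.Serve a single, already-dialed
// connection as if it had just been accepted.
type oneConnListener struct{ conn net.Conn }

func (l *oneConnListener) Accept() (net.Conn, error) {
    if l.conn == nil {
        select {} // no further connections; block forever
    }
    c := l.conn
    l.conn = nil
    return c, nil
}

func (l *oneConnListener) Close() error   { return nil }
func (l *oneConnListener) Addr() net.Addr { return new(net.TCPAddr) }

func main() {
    // The backend dials OUT to the balancer (hypothetical address),
    // reversing the usual direction of connection establishment.
    conn, err := net.Dial("tcp", "lb.internal:9000")
    if err != nil {
        panic(err)
    }
    mux := http.NewServeMux()
    mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        // Flip this to 503 to tell the balancer "drain me".
        w.WriteHeader(http.StatusOK)
    })
    // Requests now flow balancer -> backend over the backend-initiated pipe.
    http.Serve(&oneConnListener{conn: conn}, mux)
}</code></pre></p>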
]]></description><pubDate>Tue, 24 Feb 2026 03:01:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47132346</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=47132346</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47132346</guid></item><item><title><![CDATA[New comment by dotwaffle in "The largest zip tie is nearly 4 feet long and $75"]]></title><description><![CDATA[
<p>... fair point.</p>
]]></description><pubDate>Wed, 04 Feb 2026 04:07:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46881339</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=46881339</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46881339</guid></item><item><title><![CDATA[New comment by dotwaffle in "The largest zip tie is nearly 4 feet long and $75"]]></title><description><![CDATA[
<p>They're 47 inches long. Amazon (UK) has 48-inch zip ties for $14.45 (pack of 12), and 60-inch ones for $18. Not quite as thick or wide, sure... but that's not what was in the headline :P</p>
]]></description><pubDate>Wed, 04 Feb 2026 03:54:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46881266</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=46881266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46881266</guid></item><item><title><![CDATA[New comment by dotwaffle in "Helldivers 2 devs slash install size from 154GB to 23GB"]]></title><description><![CDATA[
<p>> An extra 131 GB of bandwidth per download would have cost Steam several million dollars over the last two years<p>Nah, not even close. Let's guess and say about 15 million copies were sold. 15M * 131GB is about 2M TB (2,000 PB, or 2 EB). At 30% mean utilisation, a 100Gb/s port will do ~10 PB in a month, and at most IXPs that port costs $2,000-$3,000/month. That makes it about $400k in bandwidth charges (I imagine 90%+ is peered or hosted inside ISPs, not via transit), and you could quite easily build a server that pushes 100Gb/s of static objects for under $10k a pop.<p>It would surprise me if the total additional cost was over $1M, considering they already have their own CDN setup. One of the big cloud vendors would charge $100M for the bandwidth alone, let alone the infrastructure to serve it, based on some quick calculations I've done (probably incorrectly) -- though interestingly, HN's fave non-cloud vendor Hetzner would only charge $2M :P</p>
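<p>Showing the working: 15M x 131 GB is about 1.97 EB (1,970 PB). One port-month at 30% utilisation is 100 Gb/s x 0.3 = 3.75 GB/s x ~2.59M seconds, i.e. ~9.7 PB; 1,970 / 9.7 is roughly 200 port-months, and 200 x $2,000-3,000 lands in the $400-600k range.</p>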
]]></description><pubDate>Wed, 03 Dec 2025 15:32:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46135658</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=46135658</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46135658</guid></item><item><title><![CDATA[New comment by dotwaffle in "Building the largest known Kubernetes cluster"]]></title><description><![CDATA[
<p>I started rewriting gcsfuse using <a href="https://github.com/hanwen/go-fuse" rel="nofollow">https://github.com/hanwen/go-fuse</a> instead of <a href="https://github.com/jacobsa/fuse" rel="nofollow">https://github.com/jacobsa/fuse</a> and found it rock-solid. FUSE has come a long way in the last few years, including things like passthrough.<p>Honestly, I'd give FUSE a second chance; you'd be surprised at how useful it can be -- after all, it's literally running in userland, so you don't need to do anything funky with privileges. However, if I were starting afresh on a similar project, I'd probably be looking at using 9p2000.L instead.</p>
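<p>To show how little ceremony go-fuse needs these days, here's roughly the shape of its v2 hello-world (reconstructed from memory of the project's own example, so treat it as a sketch rather than gospel): a read-only filesystem with a single file, mounted at /tmp/hello.<p><pre><code>package main

import (
    "context"
    "log"
    "syscall"

    "github.com/hanwen/go-fuse/v2/fs"
    "github.com/hanwen/go-fuse/v2/fuse"
)

// helloRoot is the filesystem root; it exposes one read-only file.
type helloRoot struct {
    fs.Inode
}

// OnAdd populates the tree once the root is mounted.
func (r *helloRoot) OnAdd(ctx context.Context) {
    ch := r.NewPersistentInode(ctx, &fs.MemRegularFile{
        Data: []byte("hello from userland\n"),
        Attr: fuse.Attr{Mode: 0644},
    }, fs.StableAttr{Ino: 2})
    r.AddChild("hello.txt", ch, false)
}

func (r *helloRoot) Getattr(ctx context.Context, fh fs.FileHandle, out *fuse.AttrOut) syscall.Errno {
    out.Mode = 0755 // directory permissions for the root
    return 0
}

var _ = (fs.NodeOnAdder)((*helloRoot)(nil))
var _ = (fs.NodeGetattrer)((*helloRoot)(nil))

func main() {
    server, err := fs.Mount("/tmp/hello", &helloRoot{}, &fs.Options{})
    if err != nil {
        log.Fatal(err)
    }
    server.Wait() // unmount with: fusermount -u /tmp/hello
}</code></pre><p>No privileged daemon, no kernel module of your own -- which is exactly why it's worth that second chance.</p>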
]]></description><pubDate>Mon, 24 Nov 2025 16:10:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46035615</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=46035615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46035615</guid></item><item><title><![CDATA[New comment by dotwaffle in "NFS at 40 – Remembering the Sun Microsystems Network File System"]]></title><description><![CDATA[
<p>I know quite a few AFS systems that moved to AuriStor's YFS: <a href="https://www.auristor.com/openafs/migrate-to-auristor/auristor-comparison" rel="nofollow">https://www.auristor.com/openafs/migrate-to-auristor/auristo...</a><p>As I understand it, YFS mitigated many of those issues but is still very "90s" in operation.<p>I've been flirting with the idea of writing a replacement for years -- about time I had a go at it!</p>
]]></description><pubDate>Mon, 06 Oct 2025 13:36:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=45491287</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=45491287</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45491287</guid></item><item><title><![CDATA[New comment by dotwaffle in "Gmail will no longer support checking emails from third-party accounts via POP"]]></title><description><![CDATA[
<p>I know a lot of people who use it; in fact, I'm one of them.<p>I have an @gmail.com account with about 20 years of stuff associated with it, from purchases to YouTube subscriptions, from calendars to GCP accounts.<p>However, I use a vanity address (me@somedomain.example) that everyone I know uses to get hold of me. Until about 10 years ago I could just forward emails, but that slowly became unworkable as more and more stuff broke due to SPF etc. So I've been using POP pickup (and accepting the 5-30 minute delay) ever since.<p>As I understand it, I can't easily move all my gmail.com data into a GWork profile, and POP has worked for years. This is very frustrating.</p>
]]></description><pubDate>Thu, 02 Oct 2025 05:56:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45446710</link><dc:creator>dotwaffle</dc:creator><comments>https://news.ycombinator.com/item?id=45446710</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45446710</guid></item></channel></rss>