<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ericvolp12</title><link>https://news.ycombinator.com/user?id=ericvolp12</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 24 Apr 2026 08:27:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ericvolp12" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ericvolp12 in "4k NASA employees opt to leave agency through deferred resignation program"]]></title><description><![CDATA[
<p>The article doesn't really do this justice. It's not really "opting to leave"; entire divisions inside the Science side of NASA have had their projects defunded, so all the work the people in those labs were doing is now gone. They're being asked to leave voluntarily so they don't have to be "fired", but all their work and resources are gone and they couldn't stay if they wanted to.<p>A friend of mine had her division's headcount cut by >80%. It was entirely research-focused, building instruments for deep space observation. No one is hiring people to do that in the private sector. Dozens of astrophysics PhDs in that division alone are now without work and with no real prospects of doing anything related to what they've dedicated their entire lives to (and accepted modest salaries as civil servants to do).</p>
]]></description><pubDate>Mon, 28 Jul 2025 00:09:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44705909</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=44705909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44705909</guid></item><item><title><![CDATA[New comment by ericvolp12 in "Introducing Gemma 3n"]]></title><description><![CDATA[
<p>The Y-axis in that graph is fucking hilarious</p>
]]></description><pubDate>Thu, 26 Jun 2025 18:37:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44390064</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=44390064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44390064</guid></item><item><title><![CDATA[New comment by ericvolp12 in "In praise of “normal” engineers"]]></title><description><![CDATA[
<p>> Make it easy to do the right thing and hard to do the wrong thing.<p>This is basically the mantra of every platform team I've worked on. Your goal is to make the easy and obvious solution to engineers' problems the "right" one for the sustainability of the software and the reliability of the services.<p>Make it easy to ship things that are reliable, manage distributed state well, and scale well, and engineers will build better muscle memory for building software in that shape, and your whole org will benefit.<p>This will never stop being true.</p>
]]></description><pubDate>Thu, 19 Jun 2025 18:37:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44321295</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=44321295</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44321295</guid></item><item><title><![CDATA[Attacking My Landlord's Boiler]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.videah.net/attacking-my-landlords-boiler/">https://blog.videah.net/attacking-my-landlords-boiler/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43759073">https://news.ycombinator.com/item?id=43759073</a></p>
<p>Points: 388</p>
<p># Comments: 224</p>
]]></description><pubDate>Tue, 22 Apr 2025 04:27:40 +0000</pubDate><link>https://blog.videah.net/attacking-my-landlords-boiler/</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=43759073</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43759073</guid></item><item><title><![CDATA[New comment by ericvolp12 in "When imperfect systems are good: Bluesky's lossy timelines"]]></title><description><![CDATA[
<p>This is probably what we'll end up with in the long run. Things have been fast enough without it (aside from this issue), but there's a lot of low-hanging fruit for Timelines architecture updates. We're spread pretty thin from an engineering-hours standpoint at the moment, so there's a lot of intense prioritization going on.</p>
]]></description><pubDate>Wed, 19 Feb 2025 22:37:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43108664</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=43108664</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43108664</guid></item><item><title><![CDATA[New comment by ericvolp12 in "Visualizing 13M Bluesky users"]]></title><description><![CDATA[
<p>Give Jetstream a try instead; it's all JSON for you already:<p>websocat wss://jetstream2.us-west.bsky.network/subscribe</p>
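<p>For anyone who'd rather consume it from code than websocat, here's a minimal Python sketch of handling one Jetstream message. The event below is trimmed down for illustration (the field names are my assumption of a simplified shape; check the Jetstream docs for the full schema):

```python
import json

# A trimmed, illustrative Jetstream "commit" event (simplified shape).
raw = json.dumps({
    "did": "did:plc:example",
    "kind": "commit",
    "commit": {
        "operation": "create",
        "collection": "app.bsky.feed.post",
        "record": {"text": "hello from the firehose"},
    },
})

def handle_event(line):
    """Return post text for newly created posts; ignore everything else."""
    event = json.loads(line)
    if event.get("kind") != "commit":
        return None
    commit = event["commit"]
    if (commit.get("operation") == "create"
            and commit.get("collection") == "app.bsky.feed.post"):
        return commit["record"]["text"]
    return None

print(handle_event(raw))  # hello from the firehose
```

In a real consumer you'd feed each websocket frame (one JSON object per message) through a handler like this.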
]]></description><pubDate>Tue, 12 Nov 2024 20:42:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=42119519</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=42119519</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42119519</guid></item><item><title><![CDATA[New comment by ericvolp12 in "How Discord stores trillions of messages (2023)"]]></title><description><![CDATA[
<p>Oh, that's pretty neat. Did you just end up being okay with large partitions? I've been really afraid to let partition sizes grow beyond 100k rows, even if the rows themselves are tiny, but I'm not really sure how much real-world performance impact it has. It definitely complicates the data model to break the partitions up, though.</p>
]]></description><pubDate>Mon, 30 Sep 2024 00:02:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=41692039</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41692039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41692039</guid></item><item><title><![CDATA[New comment by ericvolp12 in "How Discord stores trillions of messages (2023)"]]></title><description><![CDATA[
<p>Interesting. We've gotten to 5M+ reads/sec in realistic simulated benchmarks and ~2M reads/sec of real-world throughput on our clusters of <10 nodes (though really high density). I don't think I've pushed writes beyond 1M QPS in real-world or simulated loads yet, though. Thankfully our partitioning schemes are super well distributed and our rows are very small (generally 1-5k), so I don't think we'd have a problem hitting some big numbers.</p>
]]></description><pubDate>Sun, 29 Sep 2024 23:52:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=41691992</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41691992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41691992</guid></item><item><title><![CDATA[New comment by ericvolp12 in "How Discord stores trillions of messages (2023)"]]></title><description><![CDATA[
<p>Did you guys end up redesigning the partitioning scheme to fit within Scylla's recommended partition sizes? I assume the tombstone issue didn't disappear with a move to Scylla but incremental compaction and/or SCTS might have helped a bunch?</p>
]]></description><pubDate>Sun, 29 Sep 2024 04:51:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=41685003</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41685003</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41685003</guid></item><item><title><![CDATA[New comment by ericvolp12 in "How Discord stores trillions of messages (2023)"]]></title><description><![CDATA[
<p>ScyllaDB scales horizontally on a shard-per-core architecture with a ballpark throughput of 12,500 reads and 12,500 writes per second per shard. If you're running Scylla across a total of 64 cores (say, 4 VMs with 16 vCPUs each), you can get up to 800k reads and 800k writes per second of throughput, with p99 writes of <500us and p99 reads of <2ms.<p>You will not be able to get that performance out of Postgres, and the write scaling will also be impossible on a non-sharded DB.<p>If you're a company like Discord and are running dozens (70-something?) of ScyllaDB nodes, likely each with 32 or 64 vCPUs, you've got capacity for 50M+ reads/writes per second across the cluster, assuming your read/write workloads are evenly balanced across shards.</p>
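<p>That arithmetic, spelled out (using the same ballpark per-shard figures as above):

```python
# Back-of-the-envelope Scylla capacity math: one shard per core,
# ~12,500 reads + 12,500 writes per second per shard.
READS_PER_SHARD = 12_500
WRITES_PER_SHARD = 12_500

def cluster_capacity(vms, vcpus_per_vm):
    """Peak reads/sec and writes/sec for a cluster, assuming even load."""
    shards = vms * vcpus_per_vm  # shard-per-core architecture
    return shards * READS_PER_SHARD, shards * WRITES_PER_SHARD

reads, writes = cluster_capacity(vms=4, vcpus_per_vm=16)
print(reads, writes)  # 800000 800000
```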
]]></description><pubDate>Sun, 29 Sep 2024 04:46:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=41684982</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41684982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41684982</guid></item><item><title><![CDATA[New comment by ericvolp12 in "How Discord stores trillions of messages (2023)"]]></title><description><![CDATA[
<p>The funny part is ScyllaDB still uses tombstones for deletions, though it does have configurable compaction strategies, and IIRC Discord uses Scylla's Incremental Compaction Strategy, which I suppose solves the specific issue they were dealing with. IIRC that compaction strategy triggers a compaction once a certain threshold of a partition is tombstones, and then the table is rebuilt without the tombstoned content (which effectively pauses writes on that specific node for that specific table and partition for the duration of the process). Compacting a massive partition is really expensive; Scylla defaults to warning you that a partition is too large once it has at least 100,000 rows. My guess is that when they moved to ScyllaDB they also adopted a new strategy for partitioning messages in a channel that keeps partition sizes reasonable, so compactions don't take a super long time.</p>
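<p>A toy sketch of the threshold-triggered idea (purely illustrative, with a made-up threshold; real incremental compaction operates on SSTables on disk, not in-memory rows):

```python
# Toy model of threshold-triggered compaction: once tombstones exceed a
# fraction of the partition, rewrite the partition without the deleted rows.
TOMBSTONE_THRESHOLD = 0.3  # hypothetical fraction, for illustration only

def maybe_compact(partition):
    """Drop tombstoned rows if they make up too much of the partition."""
    tombstones = sum(1 for row in partition if row.get("deleted"))
    if partition and tombstones / len(partition) >= TOMBSTONE_THRESHOLD:
        return [row for row in partition if not row.get("deleted")]
    return partition  # below threshold: leave the partition as-is

rows = [{"id": i, "deleted": i % 2 == 0} for i in range(10)]
compacted = maybe_compact(rows)
print(len(compacted))  # 5: the even-numbered rows were tombstoned
```

The expensive part in the real system is that this rewrite has to stream and rebuild the whole partition, which is why huge partitions make compactions painful.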
]]></description><pubDate>Sun, 29 Sep 2024 04:39:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=41684959</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41684959</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41684959</guid></item><item><title><![CDATA[New comment by ericvolp12 in "Jetstream: Shrinking the AT Protocol Firehose by >99%"]]></title><description><![CDATA[
<p>Yeah exactly! The longer we can make it without having to shard the firehose, the better. It's a lot less complex to consume as a single stream.</p>
]]></description><pubDate>Tue, 24 Sep 2024 20:08:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=41640369</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41640369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41640369</guid></item><item><title><![CDATA[New comment by ericvolp12 in "Jetstream: Shrinking the AT Protocol Firehose by >99%"]]></title><description><![CDATA[
<p>The current relay firehose has more than 250 subscribers. It's served more than 8.5Gbps of real-world peak traffic, sustained for ~12 hours a day. That said, Jetstream is a lot friendlier for devs to get started with than the full protocol firehose, and it helps grow the ecosystem of cool projects people build on the open network.<p>Also, this was a fun thing I built mostly in my free time :)</p>
]]></description><pubDate>Tue, 24 Sep 2024 19:50:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=41640209</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41640209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41640209</guid></item><item><title><![CDATA[New comment by ericvolp12 in "Jetstream: Shrinking the AT Protocol Firehose by >99%"]]></title><description><![CDATA[
<p>Yes, the actual record content on the network isn't huge at the moment, but the firehose doesn't include blobs (images and videos), which take up significantly more space. Either way, yeah, it's pretty lightweight. The total number of records on the network is around 2.5Bn in the ~1.5 years Bluesky has been around.<p>I aggregated some stats when we hit 10M users here - <a href="https://bsky.app/profile/did:plc:q6gjnaw2blty4crticxkmujt/post/3l47puwmjnl2x" rel="nofollow">https://bsky.app/profile/did:plc:q6gjnaw2blty4crticxkmujt/po...</a></p>
]]></description><pubDate>Tue, 24 Sep 2024 18:23:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=41639482</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41639482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41639482</guid></item><item><title><![CDATA[New comment by ericvolp12 in "Jetstream: Shrinking the AT Protocol Firehose by >99%"]]></title><description><![CDATA[
<p>The 10x surge in traffic was us gaining 3.5M new users over the course of a week (growing the entire userbase by >33%), and all these users have been incredibly active on a daily basis.<p>Lots of these numbers are public, and the impact of the surge can be seen here: <a href="https://bskycharts.edavis.dev/static/dynazoom.html?plugin_name=edavis.dev%2Fbskycharts.edavis.dev%2Fbsky_users_total&start_iso8601=2024-04-22T11%3A15%3A23-0700&stop_iso8601=2024-09-24T11%3A15%3A23-0700&start_epoch=1713809723&stop_epoch=1727201723&lower_limit=&upper_limit=&size_x=800&size_y=400&cgiurl_graph=%2Fmunin-cgi%2Fmunin-cgi-graph" rel="nofollow">https://bskycharts.edavis.dev/static/dynazoom.html?plugin_na...</a><p>Note the graphs in that link only show users that take a publicly visible action (i.e. post, like, follow, etc.) and won't show lurkers at all.</p>
]]></description><pubDate>Tue, 24 Sep 2024 18:20:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=41639451</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41639451</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41639451</guid></item><item><title><![CDATA[New comment by ericvolp12 in "Jetstream: Shrinking the AT Protocol Firehose by >99%"]]></title><description><![CDATA[
<p>It's impossible to use the compressed version of the stream without a client that has the baked-in ZSTD dictionary. This is a usability issue for folks using languages without a Jetstream client who just want to consume the websocket as JSON. It also makes things like using websocat and unix pipes to build some kind of automation a lot harder (though probably not impossible).<p>FWIW the default mode is uncompressed unless the client explicitly requests compression with a custom header. I tried using permessage-deflate, but support for it in the websocket libraries I was using was very poor, and it has the same problem as streaming compression in terms of CPU usage on the Jetstream server.</p>
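<p>To illustrate why a shared dictionary is mandatory on both ends: Python's stdlib zlib supports the same preset-dictionary idea, so here's the concept with zlib standing in for ZSTD (the dictionary bytes below are made up for the example):

```python
import zlib

# A preset dictionary shared out-of-band by compressor and decompressor.
# (Made-up bytes; Jetstream ships a real ZSTD dictionary with its clients.)
shared_dict = b'{"did":"did:plc:","kind":"commit","commit":{"operation":"create"'

message = b'{"did":"did:plc:abc123","kind":"commit","commit":{"operation":"create"}}'

co = zlib.compressobj(zdict=shared_dict)
compressed = co.compress(message) + co.flush()

# Decompression works only if the receiver has the same dictionary.
do = zlib.decompressobj(zdict=shared_dict)
assert do.decompress(compressed) == message

# A plain decompressor with no dictionary can't read the stream at all,
# which is exactly the usability problem for dictionary-less clients.
try:
    zlib.decompress(compressed)
except zlib.error:
    print("needs the shared dictionary")
```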
]]></description><pubDate>Tue, 24 Sep 2024 16:49:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=41638424</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41638424</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41638424</guid></item><item><title><![CDATA[New comment by ericvolp12 in "Jetstream: Shrinking the AT Protocol Firehose by >99%"]]></title><description><![CDATA[
<p>Jetstream isn't an official change to the protocol; it's an optimization I made for my own services that I realized a lot of other devs would appreciate. The major driving force behind it was both the bandwidth savings and making the Firehose a lot easier to use for devs who aren't familiar with AT Proto and MSTs. Jetstream is a much more approachable way for people to dip their toe into my favorite part of AT Proto: the public event stream.</p>
]]></description><pubDate>Tue, 24 Sep 2024 15:52:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=41637899</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41637899</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41637899</guid></item><item><title><![CDATA[New comment by ericvolp12 in "Jetstream: Shrinking the AT Protocol Firehose by >99%"]]></title><description><![CDATA[
<p>The full Firehose provides two major verification features. First, it includes a signature that can be validated, letting you know the updates are signed by the repo owner. Second, by providing the MST proof, it makes it hard or impossible for the repo owner to omit changes to the repo contents from the Firehose events. If some records are created or deleted without emitting events, the next event emitted will show that something's not right and that you should re-sync your copy of the repo to understand what changed.</p>
]]></description><pubDate>Tue, 24 Sep 2024 15:48:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=41637849</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=41637849</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41637849</guid></item><item><title><![CDATA[An Entire Social Network in 1.6GB (GraphD Part 2)]]></title><description><![CDATA[
<p>Article URL: <a href="https://jazco.dev/2024/04/20/roaring-bitmaps/">https://jazco.dev/2024/04/20/roaring-bitmaps/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=40115836">https://news.ycombinator.com/item?id=40115836</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 22 Apr 2024 16:23:20 +0000</pubDate><link>https://jazco.dev/2024/04/20/roaring-bitmaps/</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=40115836</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40115836</guid></item><item><title><![CDATA[SREs Solve Problems at Home (Making My Dumb Fridge Smart)]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.ericv.me/2021/03/07/smartfridge/">https://blog.ericv.me/2021/03/07/smartfridge/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=26388466">https://news.ycombinator.com/item?id=26388466</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 08 Mar 2021 17:23:53 +0000</pubDate><link>https://blog.ericv.me/2021/03/07/smartfridge/</link><dc:creator>ericvolp12</dc:creator><comments>https://news.ycombinator.com/item?id=26388466</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26388466</guid></item></channel></rss>