<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tristan9</title><link>https://news.ycombinator.com/user?id=tristan9</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 16:55:59 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tristan9" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tristan9 in "How to waste bandwidth, battery power, and annoy sysadmins"]]></title><description><![CDATA[
<p>Bad take.<p>I opened the linked GitHub issue. For us it represented, at times, thousands of requests per second across multiple users. And that was with affected users getting temporarily IP-banned.<p>Some of those were 404s, which you typically do not want cached at all. Or 405s (on HEAD /favicon.ico, for example). Or 429s. Or 403s.<p>Browsers are expected to:
1. Use the favicon specified in the page markup, if any (we do have one, /favicon.svg)
2. Respect cache headers (immutable + multi-months max-age)
3. Not make completely random requests to things they should ignore (such as OpenGraph tags)<p>Yes, CDNs do help with these kinds of issues, but they absolutely do not fix them all, which is why we were still being annoyed by the issue despite a pretty damn elaborate setup in that regard.<p>But also, Firefox on iOS should be not-completely-broken.</p>
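<p>For reference, point 2 concretely means response headers along these lines (an illustrative sketch, not our actual configuration; the 180-day max-age is an example value):
<pre>
```http
HTTP/1.1 200 OK
Content-Type: image/svg+xml
Cache-Control: public, max-age=15552000, immutable
```
</pre>
<p>With "immutable", a conforming browser should not even revalidate the favicon for the lifetime of the max-age.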
]]></description><pubDate>Sat, 29 Jun 2024 15:47:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=40831404</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=40831404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40831404</guid></item><item><title><![CDATA[New comment by tristan9 in "Cloudflare took down our website"]]></title><description><![CDATA[
<p>It's not even 100Mbps sustained. That is nowhere near 30k/year, or you're getting ripped off.<p>For 3k/month you can get a good-quality 10Gbps link. That's 3.2PB of monthly transfer, with a P.</p>
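<p>The arithmetic behind that figure, as a back-of-the-envelope sketch:
<pre>
```python
# Monthly transfer of a fully saturated 10 Gbps link.
GBPS = 10
SECONDS_PER_MONTH = 30 * 24 * 3600            # ~2.59M seconds in a 30-day month
bytes_per_month = GBPS * 1e9 / 8 * SECONDS_PER_MONTH  # bits/s -> bytes, then x seconds
petabytes = bytes_per_month / 1e15
print(f"{petabytes:.2f} PB")                  # 3.24 PB
```
</pre>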
]]></description><pubDate>Sun, 26 May 2024 14:51:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=40482605</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=40482605</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40482605</guid></item><item><title><![CDATA[New comment by tristan9 in "The largest DDoS attack to date, peaking above 398M rps"]]></title><description><![CDATA[
<p>Well, lucky you. Or unlucky me and everyone I know running a large website. Guess we’ll never know.</p>
]]></description><pubDate>Wed, 11 Oct 2023 00:27:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=37839542</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=37839542</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37839542</guid></item><item><title><![CDATA[New comment by tristan9 in "The largest DDoS attack to date, peaking above 398M rps"]]></title><description><![CDATA[
<p>I’m aware. Doesn’t make it sting less being on the receiving end of attacks all the time and seeing everyone collectively shrug.</p>
]]></description><pubDate>Wed, 11 Oct 2023 00:23:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=37839514</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=37839514</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37839514</guid></item><item><title><![CDATA[New comment by tristan9 in "The largest DDoS attack to date, peaking above 398M rps"]]></title><description><![CDATA[
<p>> Trust me when I say that you don't want the ISP's to inspect web traffic.<p>They already do. DPI on port 53 for DNS blocks and SNI inspection are commonplace. So are IP blocks.<p>> If you want traffic, you need to be equipped to handle traffic. You are the one with the internet facing infrastructure.<p>Slightly misleading wording here. More accurately, your point is: « you want to run a website? Better have the infra to support traffic spikes comparable to that of a tech giant ».
Handling 400M rps would cost an unfathomable amount of money even if all you did was drop the packets.<p>> And maybe Facebook and Google are big enough to push around the ISP's, but they are the only ones. Nobody will bat an eyelash if 15,000 Comcast users in Phoenix AZ can access your hokey-pokey website.<p>Obviously yes. Too bad it’s better business for everyone to say nothing and just recommend you use their product.</p>
]]></description><pubDate>Wed, 11 Oct 2023 00:20:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=37839486</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=37839486</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37839486</guid></item><item><title><![CDATA[New comment by tristan9 in "The largest DDoS attack to date, peaking above 398M rps"]]></title><description><![CDATA[
<p>> Traditionally, a botnet can be compromised (at least largely) of actual consumer devices unknowingly making requests on their owners' behalf.<p>And I do count that in.<p>Just because a user is unknowingly the source of an attack doesn’t make it right.<p>What would make it right is a more generalized remote blackholing system.<p>i.e. my site runs on an IP, I am able to tell my ISP to reject traffic to it from $sources, and my ISP can forward that request to the source ISP.<p>And if that makes my site unavailable to that other ISP because of CGNAT and zero oversight, tough luck. Guess their support is getting calls, so maybe they start monitoring obviously abusive per-destination egress spikes.</p>
]]></description><pubDate>Wed, 11 Oct 2023 00:13:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=37839448</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=37839448</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37839448</guid></item><item><title><![CDATA[New comment by tristan9 in "The largest DDoS attack to date, peaking above 398M rps"]]></title><description><![CDATA[
<p>IP reputation is already a thing. And plenty of ASNs are well known for willfully hosting C2 servers and sources of spam, DoS attacks, etc…</p>
]]></description><pubDate>Wed, 11 Oct 2023 00:10:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=37839420</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=37839420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37839420</guid></item><item><title><![CDATA[New comment by tristan9 in "The largest DDoS attack to date, peaking above 398M rps"]]></title><description><![CDATA[
<p>These blog posts document the attack. Documenting it and acting on it are different things.<p>There’s no practical action being taken here besides « use our products because we can tank it for you ».<p>The mitigations listed are better than nothing, but the fact remains that every skid out there can rent a botnet of a few thousand compromised machines (like here) and send you a few million rps (say this protocol attack allowed a 100x higher-than-average impact), which is more than enough to kill the infra of 99.99% of websites. No questions asked.</p>
]]></description><pubDate>Wed, 11 Oct 2023 00:08:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=37839403</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=37839403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37839403</guid></item><item><title><![CDATA[New comment by tristan9 in "The largest DDoS attack to date, peaking above 398M rps"]]></title><description><![CDATA[
<p>> What happens when you log an attack from a device that is attacking you from a school or business WiFi network? Block the whole IP forever?<p>No, but for a day perhaps.<p>> What if the user is on a CGNAT. Are you going to block the edge proxy for that entire ISP?<p>Maybe. If the ISP doesn’t bother doing anything about it (which is THEIR job, not mine as a website operator).<p>If the ISP can’t be arsed to do their job, why am I supposed to care about them at all?<p>> What if you're getting hit from a residential connection that gets a new rotated IP every couple of weeks? Block whoever gets that IP from now on?<p>Same as the CGNAT one. It’s the ISP’s job to handle their misbehaving customers.<p>If they refuse to do it and get complaints from their other customers that they’re getting blocked, maybe they’ll actually get to it.<p>> Your solution doesn't stop attacks. It just stops regular users.<p>No. It puts pressure on the ISPs to finally stop whining loudly when they receive an attack while closing their eyes to any attack originating from their own network.<p>This is not sustainable.</p>
]]></description><pubDate>Tue, 10 Oct 2023 14:26:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=37832548</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=37832548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37832548</guid></item><item><title><![CDATA[New comment by tristan9 in "The largest DDoS attack to date, peaking above 398M rps"]]></title><description><![CDATA[
<p>> just with everything production-grade, the average enterprise just isn't ready to deal with all the upfront cost to run your entire computing solution<p>That’s not a fair point.<p>We’re not even trying to make the internet safe. Zero (0) actions are being taken to stop this madness.
If you run a large website, you still regularly see attacks from routers compromised 3, 4, 5 years ago. Or see how, to this day, a few days of smart poking around is enough to find enough open DNS resolvers to launch >500Gbps attacks from one or two computers.<p>Why are these threats allowed to still exist?<p>The only ones attempting something are governments shutting down booters (DDoS-as-a-service platforms). But that’s treating symptoms, not causes.<p>We will eventually need to do something, or it will become impossible to run a website that can’t be kicked offline for free by the next bored skid.<p>Just like paying protection fees to the mafia was once a status quo, this also is just that: a status quo, not an inevitability.<p>The solution is to finally hold attack origins (mostly ISPs) accountable, so that monitoring their egress becomes something they have an incentive to do.</p>
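<p>To make the reflection point concrete, a back-of-the-envelope sketch (the amplification factor is an assumed ballpark for large DNS responses, not a measurement):
<pre>
```python
# DNS reflection/amplification: the attacker spoofs the victim's IP in
# small queries, and open resolvers reply to the victim with much larger
# responses. Illustrative numbers only.
AMPLIFICATION = 50            # assumed ballpark for large DNS (e.g. ANY) responses
target_gbps = 500             # desired attack size at the victim
needed_upstream_gbps = target_gbps / AMPLIFICATION
print(needed_upstream_gbps)   # 10.0 Gbps of spoofed queries -- "one or two computers"
```
</pre>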
]]></description><pubDate>Tue, 10 Oct 2023 14:17:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=37832434</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=37832434</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37832434</guid></item><item><title><![CDATA[New comment by tristan9 in "The largest DDoS attack to date, peaking above 398M rps"]]></title><description><![CDATA[
<p>You’re completely wrong.<p>All large sites get attacked regularly.<p>The average skiddie’s motivation is boredom. So they DoS a site they use regularly, just to see.<p>Heck, they generally don’t even mean to cause damage per se; they just think it’s a funny use of their evening.<p>You have to stop thinking DoS attacks are always particularly personal. They really often just aren’t, and it’s a monumental pain in the ass to be on the receiving end.</p>
]]></description><pubDate>Tue, 10 Oct 2023 14:10:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=37832340</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=37832340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37832340</guid></item><item><title><![CDATA[New comment by tristan9 in "The largest DDoS attack to date, peaking above 398M rps"]]></title><description><![CDATA[
<p>Thing is, I don’t care.<p>The problem is that the ISPs whose customers originate the attacks don’t give a shit.<p>If we have to give up 1% of legitimate traffic to thwart 90% of attacks, that is a good deal.<p>If you and other customers complain to your ISP (or switch), eventually they’ll do something about it.<p>We can’t seriously keep accepting that « thousands of compromised devices » is a fine reality for a « small botnet ».<p>These devices should be quarantined.</p>
]]></description><pubDate>Tue, 10 Oct 2023 14:07:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=37832311</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=37832311</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37832311</guid></item><item><title><![CDATA[New comment by tristan9 in "The largest DDoS attack to date, peaking above 398M rps"]]></title><description><![CDATA[
<p>You can’t.
If your webserver receives 400M rps, it dies, end of story.<p>Mitigations are just that, mitigations. They are about as effective as buying a better door lock to protect your apartment from a nuke.</p>
]]></description><pubDate>Tue, 10 Oct 2023 14:04:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=37832261</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=37832261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37832261</guid></item><item><title><![CDATA[New comment by tristan9 in "MangaDex infrastructure overview"]]></title><description><![CDATA[
<p>> k8s cluster exists solely to handle their monitoring and logging<p>It also does image processing, runs our analytics, runs our Sentry, runs our gitlab-ci runners, and handles quite a few other things not expressly mentioned.<p>> which would be extreme overkill<p>That's an interesting argument against k8s; if anything, I find it much easier to work with -- once accustomed to its idioms, of course -- than alternatives like dedicated VMs, Docker Swarm, etc.<p>Getting HA and auto-healing for free is possible without it, of course, but requires much more work, especially if you aim for a somewhat minimised amount of statefulness (as in, deviation from the template of your system).<p>Also, S3-compatible storage backends are really plentiful, from commercial offerings to simpler ones like MinIO. Ceph just happens to be a somewhat higher deployment investment, with the benefit of fantastic performance, flexibility and resiliency. A bit like k8s itself, it's daunting at first but actually makes things simpler in the long run (imo).<p>> 18k metrics/samples [...] are nothing<p>Well, yes and no: the number of metrics isn't relevant per se, but their cardinality is very relevant, and managing that in a single Prometheus instance will quickly require some serious vertical scaling, especially if you want to look at data over longer ranges (which, unlike with logs, we are interested in).<p>> 7k logs per second [...] are nothing<p>That's an interesting take. Surely this isn't a world-record-shattering amount, but no one seems to have a great solution that is neither SaaS nor expensive for storing, sorting and querying this volume of logs either (at the resource efficiency of Loki, anyway), so maybe we just have a different set of expectations for log management.<p>> If you don't like a pull-based architecture [...] why use one at all!? There are many more push-based setups out there that are simpler to set up and less complex.<p>Are there really?
Ones that are non-SaaS and have third-party software support as widespread as Prometheus's? i.e. great integration with essentially any database, webserver, runtime, OS, etc.?<p>Because if we're talking only about node metrics like CPU, then sure, there are plenty of options. But (maybe not so) obviously, the diagram showing only node exporter doesn't mean it's the only integration we use -- we collect Prometheus metrics for MySQL, PHP-FPM, Varnish, Nginx, HAProxy, Elasticsearch, Redis, RabbitMQ, etc. (essentially every single piece of software we use).<p>Fwiw, I found very little in the way of open-source solutions to that problem that ticked as many boxes as Prometheus.<p>As for "simpler to set up and less complex": both Cortex and Loki would be really annoying to manage outside of Kubernetes, I'll happily give you that.
But... being able to easily deploy and manage such systems once you have Kubernetes is precisely one of the reasons to use it. You can't call it complex to deploy and then ignore that it pays this cost back by making reliable operation of complex-but-powerful software on top of it tractable; that is precisely one of the upsides of using it in the first place :)</p>
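<p>A quick illustration of why cardinality, not the metric count, is what hurts Prometheus: the series count is the product of distinct values per label (all numbers below are invented for illustration):
<pre>
```python
# Time series per metric name = product of distinct values of each label.
# One labeled histogram across a modest fleet dominates hundreds of
# "plain" unlabeled metrics.
instances = 100        # scrape targets
status_codes = 8       # label: HTTP status class/code
endpoints = 50         # label: request path (assumed bounded!)
buckets = 12           # histogram le= buckets
series = instances * status_codes * endpoints * buckets
print(series)          # 480000 series from a single metric name
```
</pre>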
]]></description><pubDate>Wed, 08 Sep 2021 09:33:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=28454926</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=28454926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28454926</guid></item><item><title><![CDATA[New comment by tristan9 in "MangaDex infrastructure overview"]]></title><description><![CDATA[
<p>As I said, it's not so much that we ask for that data to be fetched -- it is there in the first place, and pulled from Elasticsearch, not from a SQL database.<p>Because of this model, we also make sure Elasticsearch works merely as a search cache, not as an authoritative content database (hence everything we add in there is considered public, on purpose, and what isn't meant to be public is simply not indexed in ES).<p>However, the gzip efficiency improvements would be really neat for sure.<p>Fwiw, I also don't work on the backend, and there might be good reasons not to expressly filter out that data (yet, anyway; perhaps it will end up as a separate entity behind an include parameter).</p>
]]></description><pubDate>Tue, 07 Sep 2021 11:36:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=28443455</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=28443455</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28443455</guid></item><item><title><![CDATA[New comment by tristan9 in "MangaDex infrastructure overview"]]></title><description><![CDATA[
<p>It definitely needs optimising for user experience!<p>However, serving this JS has nearly no cost to us (it is cached at the edge by DDoS-Guard, and the frontend is otherwise entirely static on our end).</p>
]]></description><pubDate>Tue, 07 Sep 2021 10:12:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=28442992</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=28442992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28442992</guid></item><item><title><![CDATA[New comment by tristan9 in "MangaDex infrastructure overview"]]></title><description><![CDATA[
<p>That's very close to how MD@H works, but it also has a time component, and tokens are not generated by our main backends, so it'd require a separate internal HTTP call per chapter.</p>
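<p>MD@H's actual token scheme isn't described here; as a generic sketch of what a "time component" usually means, a time-limited token is typically an HMAC over (path, expiry), which is why only whatever holds the key can mint one (all names and the TTL below are hypothetical):
<pre>
```python
import hashlib
import hmac
import time

SECRET = b"not-the-real-key"  # hypothetical shared secret held by the token service

def mint_token(path: str, ttl: int = 300) -> str:
    """Return 'expiry.signature' binding the path to a deadline."""
    expiry = int(time.time()) + ttl
    msg = f"{path}:{expiry}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{expiry}.{sig}"

def check_token(path: str, token: str) -> bool:
    expiry_s, sig = token.split(".")
    if int(expiry_s) < time.time():
        return False  # the time component: token has expired
    msg = f"{path}:{expiry_s}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = mint_token("/data/abcd/1.png")
print(check_token("/data/abcd/1.png", token))   # True: valid path, not expired
print(check_token("/data/evil/1.png", token))   # False: signature doesn't match
```
</pre>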
]]></description><pubDate>Tue, 07 Sep 2021 10:10:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=28442977</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=28442977</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28442977</guid></item><item><title><![CDATA[New comment by tristan9 in "MangaDex infrastructure overview"]]></title><description><![CDATA[
<p>> The real problem is that generating that much JSON is very "heavy" on servers. Lots and lots of small object allocations, which gives the garbage collector a ton of work to do. It's also expensive to decode on the browser for similar reasons.<p>For what it's worth, this isn't generated live but assembled from existing entity documents.<p>Most of it is page filenames, which indeed could be made optional and fetched only by the reader, but that would mean actively nulling them out in the returned entity, since they are present in the ES documents for the chapters (a manga feed like this being a list of chapters).</p>
]]></description><pubDate>Tue, 07 Sep 2021 10:08:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=28442962</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=28442962</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28442962</guid></item><item><title><![CDATA[New comment by tristan9 in "MangaDex infrastructure overview"]]></title><description><![CDATA[
<p>> Would definitely like to hear more about their dev environment, how it is different from prod, and how they handle the differences.<p>It's honestly quite boringly similar (hence why it's only vaguely alluded to in the article).<p>Take out DDoS-Guard/external LBs (no need for it to be public), pick a cheap-o cloud provider to get niceties like quick rebuilding with Terraform, slap on a VPC-like thing to make it a similarly private network (using a different subnet so that copy-pasting typos across dev and prod is impossible), and scale everything down (the ES node has 8 CPUs and 24GB of RAM in prod? It will have to make do with 2 vCPUs and 2GB of RAM in dev).<p>One annoying thing is that you do want to test the replicated/distributed nature of things, so you can't just throw everything on a single instance on a single host just because it's dev -- otherwise you miss out on a lot of the configuration being properly tested -- which ends up a bit costlier than necessary.</p>
]]></description><pubDate>Tue, 07 Sep 2021 08:51:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=28442509</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=28442509</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28442509</guid></item><item><title><![CDATA[New comment by tristan9 in "MangaDex infrastructure overview"]]></title><description><![CDATA[
<p>Not correct; we generate 2 thumbnail sizes for every cover -- if the site loads the full size anywhere by default (rather than when you expand it), that's definitely a bug!</p>
]]></description><pubDate>Tue, 07 Sep 2021 08:42:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=28442469</link><dc:creator>tristan9</dc:creator><comments>https://news.ycombinator.com/item?id=28442469</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28442469</guid></item></channel></rss>