<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jordanthoms</title><link>https://news.ycombinator.com/user?id=jordanthoms</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 05:39:23 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jordanthoms" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jordanthoms in "Humans have caused 1.5 °C of long-term global warming according to new estimates"]]></title><description><![CDATA[
<p>Oh, I agree on this. People were never going to accept, nor IMO should they have, a massive reduction in their living standards. New technology is the way to make people's lives better while also reducing global warming.<p>I just got back from an off-grid island here in New Zealand - 20 years ago, generators were everywhere, and as soon as it got dark you'd hear nothing but their buzzing all around you. Now there is solar everywhere and it's completely silent.</p>
]]></description><pubDate>Sun, 17 Nov 2024 21:03:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=42167225</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=42167225</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42167225</guid></item><item><title><![CDATA[New comment by jordanthoms in "Humans have caused 1.5 °C of long-term global warming according to new estimates"]]></title><description><![CDATA[
<p>Humans can be trusted to do the right thing, once all other possibilities have been exhausted.</p>
]]></description><pubDate>Sun, 17 Nov 2024 20:38:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42167009</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=42167009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42167009</guid></item><item><title><![CDATA[New comment by jordanthoms in "Wait Until 8th"]]></title><description><![CDATA[
<p>Also interested in this - the Apple Watch for Kids setup seems like a possibility, but it's only available in certain countries.</p>
]]></description><pubDate>Thu, 31 Oct 2024 21:21:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=42011657</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=42011657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42011657</guid></item><item><title><![CDATA[New comment by jordanthoms in "You'll regret using natural keys"]]></title><description><![CDATA[
<p>This is dependent on the database you are using - if it's a key-sharded distributed database, you want insertions spread evenly across the key space, so that all the inserts don't go to a single shard (which could overload it).</p>
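As a toy illustration of the hot-shard problem (a hypothetical 4-shard range layout over string keys, not any particular database's real topology), here is a minimal Python sketch showing how sequential keys pile onto one shard while random keys spread out:

```python
import uuid
from collections import Counter

def shard_for(key, ranges):
    # Range-sharded: each shard owns a contiguous slice of the key space.
    for lo, hi, shard in ranges:
        if lo <= key < hi:
            return shard

# Hypothetical 4-shard split over lowercase-hex string keys.
ranges = [("0", "4", "s1"), ("4", "8", "s2"), ("8", "c", "s3"), ("c", "\uffff", "s4")]

# Sequential keys (e.g. an auto-increment counter rendered as a string):
# every insert lands on the same shard, which takes all the write load.
seq_counts = Counter(shard_for(f"c{n:07d}", ranges) for n in range(1000))

# Random UUID keys: inserts spread roughly evenly across all four shards.
rand_counts = Counter(shard_for(uuid.uuid4().hex, ranges) for n in range(1000))

print(seq_counts)   # a single hot shard
print(rand_counts)  # roughly even spread
```

The same logic is why UUID (or hash-prefixed) primary keys are often recommended on range-sharded systems, at the cost of losing cheap ordered scans by insertion time.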
]]></description><pubDate>Wed, 05 Jun 2024 15:46:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=40586360</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=40586360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40586360</guid></item><item><title><![CDATA[New comment by jordanthoms in "How not to change PostgreSQL column type"]]></title><description><![CDATA[
<p>Not quite <i>any</i> - in CockroachDB, all schema changes are non-blocking:  <a href="https://www.cockroachlabs.com/docs/stable/online-schema-changes" rel="nofollow">https://www.cockroachlabs.com/docs/stable/online-schema-chan...</a> . Yugabyte seems to be getting there with them also.<p>There are still risks involved in migrations (mostly from the migration executing too quickly and creating high load in the cluster - the admission control system should reduce this), and we have extra review steps for them, but it's been very useful to be able to migrate large tables without any extra application-level work.</p>
]]></description><pubDate>Wed, 08 May 2024 06:37:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=40295014</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=40295014</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40295014</guid></item><item><title><![CDATA[New comment by jordanthoms in "Meta outage"]]></title><description><![CDATA[
<p>Gmail was also experiencing issues: <a href="https://www.google.com/appsstatus/dashboard/incidents/shD5VvSGTxETw1YLbsCt" rel="nofollow">https://www.google.com/appsstatus/dashboard/incidents/shD5Vv...</a></p>
]]></description><pubDate>Tue, 05 Mar 2024 17:30:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=39606571</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=39606571</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39606571</guid></item><item><title><![CDATA[New comment by jordanthoms in "Meta outage"]]></title><description><![CDATA[
<p>That's kinda the point though, isn't it? DownDetector is showing an early indication of a major outage in both of your examples. The issue may not be caused by the indicated service, but it's still a useful information source, especially when we can correlate reports on there with what we are seeing in our internal monitoring.</p>
]]></description><pubDate>Tue, 05 Mar 2024 17:28:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=39606539</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=39606539</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39606539</guid></item><item><title><![CDATA[New comment by jordanthoms in "Meta outage"]]></title><description><![CDATA[
<p>We saw a big spike in latency and failures on the Google OAuth APIs starting at the same time (15:21 UTC).</p>
]]></description><pubDate>Tue, 05 Mar 2024 17:18:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=39606416</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=39606416</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39606416</guid></item><item><title><![CDATA[New comment by jordanthoms in "Meta outage"]]></title><description><![CDATA[
<p>A single report on there is useless. A sudden flood of reports is a good sign that something interesting is happening.</p>
]]></description><pubDate>Tue, 05 Mar 2024 17:15:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=39606381</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=39606381</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39606381</guid></item><item><title><![CDATA[New comment by jordanthoms in "Meta outage"]]></title><description><![CDATA[
<p>It has false positives and noise for sure, but it's also very sensitive and shows issues very quickly.<p>I wouldn't trust it as a single source, but in a case like this - where our internal monitoring shows a spike of issues with the Google APIs and we can see a huge spike in reported issues for Google on Downdetector starting at the same time - it's useful for confirming that the issues have an external source.</p>
]]></description><pubDate>Tue, 05 Mar 2024 17:13:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=39606365</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=39606365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39606365</guid></item><item><title><![CDATA[New comment by jordanthoms in "We don't have official RSS feed support for now, but we're working on a solution"]]></title><description><![CDATA[
<p>Your general point definitely stands - there is a pretty nice third-party solution for Google Workspace though: <a href="https://github.com/GAM-team/GAM">https://github.com/GAM-team/GAM</a></p>
]]></description><pubDate>Mon, 11 Dec 2023 12:55:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=38600223</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=38600223</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38600223</guid></item><item><title><![CDATA[New comment by jordanthoms in "Northlight technology in Alan Wake 2"]]></title><description><![CDATA[
<p>I guess the problem there is pretty fundamental - in reality you'd be tensing muscles and shifting your weight etc. before the snappy movement, but the game only knows that you want to move when you move the stick or push a button - so it either has to show that realistic motion after your button press and introduce latency, or sacrifice the realism in the animations.</p>
]]></description><pubDate>Tue, 07 Nov 2023 22:52:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=38184216</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=38184216</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38184216</guid></item><item><title><![CDATA[New comment by jordanthoms in "DoorDash manages high-availability CockroachDB clusters at scale"]]></title><description><![CDATA[
<p>It does seem like there would be a clean boundary between each school district, but actually there's plenty of sharing and collaboration on Kami that happens with users between districts - teachers and students move schools, parents can have children in different school districts, etc. Even a single classroom assignment can cross those, e.g. when someone external comes in to do a training session.<p>We could have modified our application layer to handle those cases, but it's a lot of extra complexity and room for error, and we'd have had to consider and solve for all of these cross-tenant situations as we add new functionality, so I was really keen to avoid that.<p>Also, there are some really big districts - NYCDOE has >1.1 million students and 1,800 schools. Even with them on a dedicated shard, it's quite possible that it'd get overloaded and we'd be spending more dev effort figuring out how to safely split them onto multiple shards.<p>When we looked at using a distributed SQL database instead, it was a clear win - from the application's perspective, it just looks like a really, really big PostgreSQL box, so we didn't need to change much. (The SQL support is very close to PG - the most annoying thing for us was the lack of trigram indexes, and Cockroach has now added those.) And in terms of the operational side, upgrading and maintaining CRDB has actually been easier than PG - version upgrades are easier to do without downtime, and schema migrations don't lock tables.</p>
]]></description><pubDate>Fri, 03 Nov 2023 04:22:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=38124497</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=38124497</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38124497</guid></item><item><title><![CDATA[New comment by jordanthoms in "DoorDash manages high-availability CockroachDB clusters at scale"]]></title><description><![CDATA[
<p>We took a large working PostgreSQL app and switched it over to CRDB, and that doesn't match my experience. Our existing schemas and query patterns moved over nicely - latency for small indexed reads and writes did increase from ~1ms to ~3ms, but the max throughput is now effectively unlimited, since we can add capacity by adding new nodes into the cluster and letting CRDB automatically rebalance the workload. There was an increase in cost, as it needs more cores, disk, etc. compared to a single-primary PostgreSQL, but that makes sense when you consider that every bit of data is getting stored on 5 different nodes and there are overheads to maintain the consistency.<p>For the highest-throughput endpoints we did make some changes to be more optimal on CRDB so we could run a smaller cluster, but it didn't require anything close to a rewrite.</p>
]]></description><pubDate>Fri, 03 Nov 2023 03:24:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=38124092</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=38124092</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38124092</guid></item><item><title><![CDATA[New comment by jordanthoms in "DoorDash manages high-availability CockroachDB clusters at scale"]]></title><description><![CDATA[
<p>They would have to have many shards per city to keep up with the level of write traffic though. And what happens when a user from SFBA goes down to LA?</p>
]]></description><pubDate>Thu, 02 Nov 2023 22:59:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=38121591</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=38121591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38121591</guid></item><item><title><![CDATA[New comment by jordanthoms in "DoorDash manages high-availability CockroachDB clusters at scale"]]></title><description><![CDATA[
<p>Clickhouse is a totally different use case - Cockroach is OLTP, Clickhouse is OLAP. We use both Cockroach and Clickhouse at scale and they are both great but not competing products - Cockroach is great for the types of reads and writes you do when serving user requests, processing transactions, etc., but isn't optimal for analytics queries where you are going to do things like read and aggregate data on a 50TB table. Clickhouse eats those kinds of aggregate queries for breakfast, and is fast for some types of small read queries too, but it's not built to handle random writes or frequently updating rows of data.</p>
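The row-store vs column-store trade-off behind this can be sketched in a few lines of Python (purely illustrative - this is not either database's actual storage engine): aggregating one field in a columnar layout scans a single compact array, while the row layout drags every record's full payload through the scan; conversely, a point update in the columnar layout has to touch every column.

```python
N = 10_000

# Row layout (OLTP-style): one record per row; aggregating "amount"
# still walks every row's full record, payload and all.
rows = [{"id": i, "amount": i % 100, "payload": "x" * 64} for i in range(N)]
row_total = sum(r["amount"] for r in rows)

# Columnar layout (OLAP-style): one array per field; the same aggregate
# scans just the "amount" array and never touches the payloads.
cols = {
    "id": list(range(N)),
    "amount": [i % 100 for i in range(N)],
    "payload": ["x" * 64] * N,
}
col_total = sum(cols["amount"])

assert row_total == col_total
# But updating a single logical row in `cols` means writing into every
# column array - one reason columnar engines handle frequent random
# writes and row updates poorly, while a row store updates one record in place.
```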
]]></description><pubDate>Thu, 02 Nov 2023 22:56:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=38121563</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=38121563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38121563</guid></item><item><title><![CDATA[New comment by jordanthoms in "DoorDash manages high-availability CockroachDB clusters at scale"]]></title><description><![CDATA[
<p>Sharding is anything but simple. A single shard per region wouldn't have enough write capacity, so they'd likely have to be managing 100+ shards in each region - you'd have to build a lot of infrastructure to automate setting those up, rebalancing traffic to avoid hot spots and underutilized shards, keeping it all in sync with schema migrations, etc.<p>Even after that, your applications using the DB have to be aware of the sharding - interactions between users who are housed on different shards could require a lot of work at the application layer. If your customers can easily be split into tenants which never interact with each other this isn't so bad, but for a consumer app like DoorDash there aren't clear tenant boundaries.<p>We looked at all this for Kami and realised that it would be much easier for us to move from PostgreSQL to CockroachDB (we had exceeded the write capacity of a single PostgreSQL primary) than to shard Postgres, and it'd make future development much faster. We could have made sharding work if we had to... but it's not 2013 any more and we have distributed SQL databases - why not use them?</p>
]]></description><pubDate>Thu, 02 Nov 2023 22:47:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=38121457</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=38121457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38121457</guid></item><item><title><![CDATA[New comment by jordanthoms in "Omnigres: Postgres as a Platform"]]></title><description><![CDATA[
<p>These are also good points you've raised.<p>On scaling, yeah, a single postgres server can handle a lot. We were well past the million-user mark before running into serious issues. However, a lot of how we were able to keep postgres working for us as we grew was by shifting work from postgres to our stateless services like I alluded to before - e.g. making our SQL queries as simple to execute as possible even if it means more work for the client to piece the parts back together.<p>If everything had been running inside the database we wouldn't have had that option, and we'd probably have hit scaling limits much earlier - I guess we could have split off the traffic to the highest-traffic endpoints and had those handled by a separate service calling the PG db, but then you get into issues with keeping the authentication etc. consistent.<p>Re security - yep, PG is already using C to parse untrusted inputs from the network, which is also scary, but it's (hopefully) well-reviewed and mature code - and even so, I wouldn't want to expose PG's usual wire protocol port to the internet, so it's hard to imagine exposing HTTP from postgres to the wild west.<p>Ultimately it probably is just a question of the sort of project it's being used for - if it's for something that's not going to need to get to larger scales, handle a lot of complexity over time, or pass security reviews, and your main goal is simplicity, then maybe an approach like this is a good option. I've just found that things tend to start off looking small and simple and then turn out to be anything but, so I'd rather run `rails new` and point it at a standard PG server - which would be just as simple and productive when you are starting out, and can keep scaling as your customer base and team size grow up to the size of Shopify, Github, or Kami (shameless plug).</p>
]]></description><pubDate>Mon, 16 Oct 2023 08:57:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=37897266</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=37897266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37897266</guid></item><item><title><![CDATA[New comment by jordanthoms in "Omnigres: Postgres as a Platform"]]></title><description><![CDATA[
<p>This is an interesting project, but I'd be unlikely to use it, for a couple of reasons:<p>- In my experience the performance of the stateful DB server has been the biggest bottleneck when scaling - it's much easier to scale the stateless application servers which sit between end-user requests and your DB in a traditional architecture. So usually I'm wanting to move as much work as possible away from the DB in order to squeeze the most out of it before needing to shard the DB or move to a different solution, rather than moving more responsibilities into the DB.<p>- It's frankly pretty scary to load a C extension into postgres which is opening ports and parsing requests - bugs in it could crash the server or open security holes, and if you were able to exploit a vulnerability you'd be able to grab any of the data in the DB and easily exfiltrate it. This would be less of an issue if using this for an internal service which isn't directly exposed to the internet, but it still could make it easier for an attacker to escalate their access. (This isn't really an 'it should be Rust' comment - even if this was in Rust it would still be pretty worrying.)<p>- Even if you think you only need simple CRUD actions, over time you tend to need more and more logic around those actions: authentication, verification, triggering processes in other systems, maybe you make schema changes and need to adapt requests from old clients, etc. It's really nice to have a more heavyweight application server where you can implement that logic - I'm pretty skeptical that row-level permissions, triggers, etc. will be able to cleanly handle all of those as you add new requirements over time. This applies also to other tools for more directly exposing your DB (e.g. PostgREST). IMO starting off using a tool like this is really just setting you up for a pretty painful rebuild later on.<p>Am I missing something here - maybe I have misunderstood the intended use case?</p>
]]></description><pubDate>Mon, 16 Oct 2023 06:12:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=37896392</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=37896392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37896392</guid></item><item><title><![CDATA[New comment by jordanthoms in "Gmail will no longer allow users to revert back to its old design"]]></title><description><![CDATA[
<p>Interesting - after the initial load it's substantially slower for me</p>
]]></description><pubDate>Thu, 10 Nov 2022 01:20:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=33540889</link><dc:creator>jordanthoms</dc:creator><comments>https://news.ycombinator.com/item?id=33540889</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33540889</guid></item></channel></rss>