<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: brentjanderson</title><link>https://news.ycombinator.com/user?id=brentjanderson</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 18 Apr 2026 00:55:53 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=brentjanderson" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by brentjanderson in "Do things that don't scale, and then don't scale"]]></title><description><![CDATA[
<p>There's actually a term for this, Context Collapse [1], which describes how social media forces everyone to have a single online persona instead of presenting in the way that makes sense for a given social context (e.g. the "you" at work vs. the "you" at school vs. the "you" with family).<p>[1]: <a href="https://en.wikipedia.org/wiki/Context_collapse" rel="nofollow">https://en.wikipedia.org/wiki/Context_collapse</a></p>
]]></description><pubDate>Sat, 16 Aug 2025 21:22:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44927024</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=44927024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44927024</guid></item><item><title><![CDATA[New comment by brentjanderson in "Bill Atkinson has died"]]></title><description><![CDATA[
<p>Sadly, most of them are lost to time. There's one that I'm aware of at <a href="https://archive.org/details/hypercard_voyager-engineer-new" rel="nofollow">https://archive.org/details/hypercard_voyager-engineer-new</a>, which is just one station of about 15 from one of the ships.<p><a href="https://thoriumsim.com" rel="nofollow">https://thoriumsim.com</a> is a modern incarnation of the same software.</p>
]]></description><pubDate>Sun, 08 Jun 2025 14:45:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44217318</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=44217318</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44217318</guid></item><item><title><![CDATA[New comment by brentjanderson in "Bill Atkinson has died"]]></title><description><![CDATA[
<p>Bill's contribution with HyperCard is of course legendary. Apart from the experience of classrooms and computer labs in elementary schools, it was also the primary software powering a fusion of bridge-simulator-meets-live-action-drama field trips (among many other things) for over 20 years at the Space Center in central Utah.[0] I was one of many beneficiaries of this program as a participant, volunteer, and staff member. It was among the best things I've ever done.<p>That seed crystal of software shaped hundreds of thousands of students who to this day continue to rave about this program (although the last bits of HyperCard retired permanently about 12 years ago, and nowadays it's primarily web-based tech).<p>HyperCard's impact on teaching students to program starship simulators, and then telling compelling, interactive, immersive, multi-player dramatic stories in those ships, is something enabled by Atkinson's dream in 1985.<p>May your consciousness journey between infinite pools of light, Bill.<p>Also, if you've read this far, go donate to pancreatic cancer research.[1]<p>[0]: <a href="https://spacecenter.alpineschools.org" rel="nofollow">https://spacecenter.alpineschools.org</a>
[1]: <a href="https://pancan.org" rel="nofollow">https://pancan.org</a></p>
]]></description><pubDate>Sun, 08 Jun 2025 00:49:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=44213762</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=44213762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44213762</guid></item><item><title><![CDATA[MCP Server for multi-channel notifications]]></title><description><![CDATA[
<p>Article URL: <a href="https://knock.app/blog/announcing-agent-toolkit-and-mcp-server">https://knock.app/blog/announcing-agent-toolkit-and-mcp-server</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43824152">https://news.ycombinator.com/item?id=43824152</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 28 Apr 2025 17:58:26 +0000</pubDate><link>https://knock.app/blog/announcing-agent-toolkit-and-mcp-server</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=43824152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43824152</guid></item><item><title><![CDATA[New comment by brentjanderson in "Show HN: Kate's App"]]></title><description><![CDATA[
<p>IANAL either, but if I were you, I'd start here: <a href="https://www.vanta.com/products/hipaa" rel="nofollow">https://www.vanta.com/products/hipaa</a> or look for competitors.<p>And perhaps look at Stripe Atlas for getting my corporate ducks in a row to start with. <a href="https://stripe.com/atlas" rel="nofollow">https://stripe.com/atlas</a><p>After wading into those to get oriented, you'd be better equipped with at least a baseline. A corporate attorney would be the next step to verify what you're doing.<p>Minnestar.org hosts networking events that may be useful for finding people at the intersection of tech, healthcare, and law. Attend and get some face time to find people who may want to help. There are lots of corporate centers in Minneapolis (assuming you're in or near the Twin Cities), including healthcare. Depending on financial considerations, you may be able to find on-ramps to grants, investors, or donors to fund compliance. I'm not sure about that, but it's possible.<p>Good luck!</p>
]]></description><pubDate>Fri, 10 Jan 2025 12:46:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=42655145</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=42655145</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42655145</guid></item><item><title><![CDATA[New comment by brentjanderson in "Why does everyone run ancient Postgres versions?"]]></title><description><![CDATA[
<p>One of the linked pieces in the Neon blog post is from Knock, where we pulled off a practically zero downtime migration: <a href="https://knock.app/blog/zero-downtime-postgres-upgrades" rel="nofollow">https://knock.app/blog/zero-downtime-postgres-upgrades</a><p>In that post we walk through all the steps we took to go from Postgres 11.9 to 15.3.</p>
]]></description><pubDate>Fri, 18 Oct 2024 15:18:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=41880215</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=41880215</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41880215</guid></item><item><title><![CDATA[Mozilla becoming active in online advertising]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.mozilla.org/en/mozilla/improving-online-advertising/">https://blog.mozilla.org/en/mozilla/improving-online-advertising/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41743201">https://news.ycombinator.com/item?id=41743201</a></p>
<p>Points: 35</p>
<p># Comments: 55</p>
]]></description><pubDate>Fri, 04 Oct 2024 16:46:37 +0000</pubDate><link>https://blog.mozilla.org/en/mozilla/improving-online-advertising/</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=41743201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41743201</guid></item><item><title><![CDATA[Living Worlds – 8-bit color cycling demo]]></title><description><![CDATA[
<p>Article URL: <a href="http://www.effectgames.com/demos/canvascycle/">http://www.effectgames.com/demos/canvascycle/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41649317">https://news.ycombinator.com/item?id=41649317</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 25 Sep 2024 16:47:45 +0000</pubDate><link>http://www.effectgames.com/demos/canvascycle/</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=41649317</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41649317</guid></item><item><title><![CDATA[New comment by brentjanderson in "Ask HN: Platform for 11 year old to create video games?"]]></title><description><![CDATA[
<p><a href="https://rpginabox.com/" rel="nofollow">https://rpginabox.com/</a> is fantastic. Includes a lot of great assets, constrained enough to ensure you can actually build something, open enough that you can build a lot of different things with it.</p>
]]></description><pubDate>Wed, 25 Sep 2024 16:46:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=41649294</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=41649294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41649294</guid></item><item><title><![CDATA[Ink – Inkle's narrative scripting language]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.inklestudios.com/ink/">https://www.inklestudios.com/ink/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41320474">https://news.ycombinator.com/item?id=41320474</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 22 Aug 2024 14:10:14 +0000</pubDate><link>https://www.inklestudios.com/ink/</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=41320474</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41320474</guid></item><item><title><![CDATA[New comment by brentjanderson in "Zero downtime Postgres upgrades"]]></title><description><![CDATA[
<p>OP here -<p>1. There was zero downtime - no dropped requests, no 5xx errors. There _was_ a latency spike that was carefully tuned to be within timeout limits for our customers, but we dropped zero requests from the cutover.<p>2. Yes, it's very tedious, and in its own way painful. We also did a MongoDB upgrade recently and, while we still took the time to verify our workloads on the more recent versions, because Mongo is an AP system it's trivial to fail over to the new version and move on.<p>That said, the application-level logic changes were not particularly complicated. The script to orchestrate the cutover was application-specific, and I think for migrations like this you have to do the work to get it done right.<p>I'd also add that the tedium of doing it right, while ideally avoidable, is precisely why customers pay us to handle this complexity on their behalf. Sometimes you've just got to do the work. They want a service that's up all the time. While no one can guarantee that, we strive for it within reason, and even then going to "unreasonable" lengths to have a better customer experience is exactly what makes many products unreasonably good.<p>Stretching the work out and taking each step carefully did avoid critical mistakes. We had a few missteps along the way, and we were able to roll back without critically affecting the service. Doing an in-place upgrade, trying to minimize the time spent on this problem, would have been far riskier than spreading that risk out over the whole process we took. Of course, each team needs to figure out what's going to work for their situation & constraints.<p>3. We do use Aurora, but our instance was old enough not to be supported for zero-downtime patch upgrades (ZDP), which in any case do not handle major version upgrades. They also recently released blue/green deployments for Aurora Postgres clusters, which may be a way to do what we did without having to resort to as many changes.</p>
]]></description><pubDate>Thu, 14 Dec 2023 16:30:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=38643156</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=38643156</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38643156</guid></item><item><title><![CDATA[New comment by brentjanderson in "Zero downtime Postgres upgrades"]]></title><description><![CDATA[
<p>OP here:<p>> Can't initialize a replica from a backup<p>You could, but you're not going to get any of the constant writes happening during the backup. You will have missing writes on the restored system without some kind of replication involved, unless you move up to the application layer.<p>For example, you could update your app to apply dual writes. I'm aware of teams that have replatformed entire applications onto completely different DBs that way (e.g. going from an RDBMS to something completely different like Apache Cassandra).<p>For our situation, dual writes seemed more risky than just doing the dirty work of setting up streaming replication using out-of-the-box Postgres features. But for some teams it could be a better move.<p>> This isn't "zero downtime"<p>and<p>> The article omits details on how consistency was preserved<p>In the post we go into detail about how we preserved consistency & avoided API downtime, but the gist is that the app was connected to both databases, but not using the new one by default. We then sent a signal to all instances of our app to cut over using LaunchDarkly, which maintains a low-latency connection to all instances of our app.<p>For the first second after that signal, the servers queued up database requests to allow for replication to catch up. This caused a brief spike in latency that was within intentionally calculated tolerances. After that pause, requests flowed as usual but against the new database, and the cutover was complete.<p>We included a force-disconnect against any pending traffic against the old database as well, with a 500 ms timeout. This timeout was much higher than our p99 query times, so no running queries were force-terminated. This ensured that the old database's traffic had ceased, and gave replication plenty of time to catch up.<p>> No mention of a rollback option<p>Although it didn't make the cut for the blog post, we considered setting up a fallback database on PG 11.9 and replicating the 15.3 database into that third database. If we needed to abort, we could roll forward to this database on the same version.<p>We opted not to do this after practicing our upgrade procedure multiple times in staging to ensure we could do it successfully, which gave us confidence when it came to performing the cutover. We also used canary deployments in production to verify certain read-only workloads against the database, treating the 15.3 instance as a read replica.<p>To your point about it being late at night, we intentionally did this in the early evening on a weekend to avoid "fat finger" type mistakes. The cutover was carefully scripted and rehearsed to reduce the risk of human error as well.<p>In the event that we needed to roll back, the system was also prepared to flip back to the old database in the event of a catastrophic failure. This would have led to some data loss against the new database, and we were prepared to reconcile any key pieces of the system in that scenario. To minimize the risk of data loss, we briefly paused certain background tasks in the system during the cutover to reduce the number of writes applied against the system. These details didn't make the blog post, as we were focusing on the Postgres specifics rather than Knock-specific considerations. Teams trying to apply this playbook will always need to build their own inventory of risks and seek to mitigate them in a context-dependent way.<p>Edit: More detail about rollback procedure</p>
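The flag-driven gate described above - route to the old database until the signal arrives, hold requests briefly so replication catches up, then release against the new database - can be sketched in a few lines. This is a hypothetical single-process illustration, not Knock's actual implementation; the names (`CutoverRouter`, `old_db`, `new_db`) and the pause mechanics are assumptions:

```python
import time

class CutoverRouter:
    """Illustrative sketch of a flag-driven database cutover gate.

    `old_db` and `new_db` stand in for connection pools; in a real
    system the cutover signal would fan out to every app instance via
    a feature-flag service rather than a method call.
    """

    def __init__(self, old_db, new_db, pause_seconds=1.0):
        self.old_db = old_db
        self.new_db = new_db
        self.pause_seconds = pause_seconds
        self.cut_over = False
        self._cutover_at = None

    def signal_cutover(self):
        # Arrives (roughly) simultaneously on all instances.
        self.cut_over = True
        self._cutover_at = time.monotonic()

    def route(self, query):
        if not self.cut_over:
            return self.old_db(query)
        # Briefly hold requests after the signal so logical replication
        # can catch up; the pause must stay inside client timeouts.
        remaining = self.pause_seconds - (time.monotonic() - self._cutover_at)
        if remaining > 0:
            time.sleep(remaining)
        return self.new_db(query)
```

The key design point is that the pause is bounded and calculated up front: it shows up to callers as a one-time latency spike rather than as dropped requests.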
]]></description><pubDate>Wed, 13 Dec 2023 15:34:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=38628874</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=38628874</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38628874</guid></item><item><title><![CDATA[New comment by brentjanderson in "Zero downtime Postgres upgrades"]]></title><description><![CDATA[
<p>OP here - we avoid sequences in all but one part of our application due to a dependency. We use [KSUIDs][1] and UUID v4 in various places. This one "gotcha" applies to any sequence, so it's worth calling out as general advice when running a migration like this.<p>[1]: <a href="https://segment.com/blog/a-brief-history-of-the-uuid/" rel="nofollow noreferrer">https://segment.com/blog/a-brief-history-of-the-uuid/</a></p>
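The gotcha exists because Postgres logical replication does not carry sequence state to the subscriber, so any sequence must be advanced on the new primary after cutover or the first inserts will collide with existing rows. A minimal sketch of generating those fixup statements - the table/sequence names and the headroom margin are illustrative assumptions, not anything from the post:

```python
def sequence_fixup_sql(tables, margin=1000):
    """Generate per-sequence fixup statements to run on the new primary
    after cutover.

    `tables` maps table name -> (sequence name, id column). The margin
    leaves headroom for rows replicated after the max was sampled.
    """
    statements = []
    for table, (seq, col) in sorted(tables.items()):
        statements.append(
            f"SELECT setval('{seq}', "
            f"(SELECT COALESCE(MAX({col}), 0) + {margin} FROM {table}));"
        )
    return statements
```

Running the generated `setval` calls once, just before accepting writes on the new primary, is one way to close the gap; gapless IDs are sacrificed for safety.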
]]></description><pubDate>Wed, 13 Dec 2023 03:50:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=38622411</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=38622411</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38622411</guid></item><item><title><![CDATA[New comment by brentjanderson in "Zero downtime Postgres upgrades"]]></title><description><![CDATA[
<p>OP here: we looked at this and were not confident in manually advancing the LSN as proposed, and detecting any inconsistency if we missed any replication as a result. Table by table seemed more reliable, despite being more painstaking.</p>
]]></description><pubDate>Wed, 13 Dec 2023 02:00:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=38621730</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=38621730</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38621730</guid></item><item><title><![CDATA[New comment by brentjanderson in "Zero downtime Postgres upgrades"]]></title><description><![CDATA[
<p>OP here. We don’t specify, but it’s big enough that it’s not reasonable to do a dump-and-restore-style upgrade.<p>The strategies in the post should work for any size of database. The limit becomes more a matter of individual table sizes, since we propose using an incremental approach to synchronizing one table at a time.</p>
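In practice, synchronizing one table at a time usually means copying it in bounded primary-key batches so each chunk stays cheap and resumable. A hypothetical sketch of the batching arithmetic (not Knock's actual tooling):

```python
def key_ranges(min_id, max_id, batch_size):
    """Yield inclusive (lo, hi) primary-key ranges for copying one
    large table in fixed-size batches, e.g. to drive repeated
    'INSERT ... SELECT ... WHERE id BETWEEN lo AND hi' statements.
    """
    lo = min_id
    while lo <= max_id:
        hi = min(lo + batch_size - 1, max_id)
        yield (lo, hi)
        lo = hi + 1
```

Because each range is independent, a failed or slow batch can be retried without restarting the whole table, which is what makes the table-size limit manageable.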
]]></description><pubDate>Wed, 13 Dec 2023 01:57:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=38621706</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=38621706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38621706</guid></item><item><title><![CDATA[New comment by brentjanderson in "Zero downtime Postgres upgrades"]]></title><description><![CDATA[
<p>Neat tool! Some of our findings for large tables could be interesting for a tool like this, making it easier to apply the right strategy to the right tables. Having something like this with those strategies could be indispensable to teams running a migration like this in the future.</p>
]]></description><pubDate>Wed, 13 Dec 2023 01:26:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=38621463</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=38621463</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38621463</guid></item><item><title><![CDATA[New comment by brentjanderson in "Zero downtime Postgres upgrades"]]></title><description><![CDATA[
<p>OP here - we have more coming about the role that the BEAM VM played in this migration too.<p>(The BEAM is the virtual machine for the Erlang ecosystem, analogous to the JVM for Java. Knock runs on Elixir, which is built on Erlang & the BEAM.)</p>
]]></description><pubDate>Wed, 13 Dec 2023 01:24:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=38621450</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=38621450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38621450</guid></item><item><title><![CDATA[New comment by brentjanderson in "Zero downtime Postgres upgrades"]]></title><description><![CDATA[
<p>That might be possible. We were within days of performing our upgrade when the blue/green feature became available for Postgres, so we didn't consider it for our work.<p>You may be able to boot up an 11.21 replica in an existing Aurora cluster as a read replica, and then failover to that replica as your primary, which would be a minimally disruptive process if your application is designed to tolerate replica failover.<p>From there, you could upgrade the rest of your cluster to 11.21, and then use the blue/green upgrade process for AWS. If you do, I'd love to hear about how it goes as we will definitely consider the blue/green feature next time.</p>
]]></description><pubDate>Wed, 13 Dec 2023 01:01:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=38621250</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=38621250</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38621250</guid></item><item><title><![CDATA[New comment by brentjanderson in "Zero downtime Postgres upgrades"]]></title><description><![CDATA[
<p>I looked at getting LSNs after an upgrade/cluster replacement, and IIRC restoring from a snapshot emits LSN information into the logs, but it's a bit of a mixed bag as to whether or not you get the __right__ LSN out the other side. Because the LSN is more a measure of how many bytes have been written within a cluster, it's not something that meaningfully translates to other clusters, unfortunately.</p>
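Concretely, an LSN like `0/16B3740` is just a 64-bit byte position in one cluster's WAL stream, which is why it can't be compared across clusters. A small sketch mirroring the arithmetic behind Postgres's `pg_wal_lsn_diff`:

```python
def lsn_to_bytes(lsn):
    """Parse a Postgres LSN of the form 'X/Y' (two hex halves) into an
    absolute byte position within that cluster's WAL stream."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def lsn_diff(a, b):
    """Byte distance between two LSNs, like pg_wal_lsn_diff(a, b).
    Only meaningful when both LSNs come from the same cluster."""
    return lsn_to_bytes(a) - lsn_to_bytes(b)
```

Since a restored or replaced cluster starts its own WAL byte count, an LSN carried over from the old cluster points at a different (or nonexistent) position in the new one.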
]]></description><pubDate>Wed, 13 Dec 2023 00:58:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=38621222</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=38621222</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38621222</guid></item><item><title><![CDATA[New comment by brentjanderson in "Zero downtime Postgres upgrades"]]></title><description><![CDATA[
<p>OP here - that's great feedback! Our hope is to build confidence in both the reliability of our product _and_ the consistency of the workloads. Of course, presenting the illusion of consistency while being flaky is far worse than managing customer expectations and taking intentional downtime that, in the long run, yields better uptime.<p>Indeed, having periodic maintenance windows expected up front probably leads to more robust architectures overall: customers building in the failsafes they need to tolerate downtime leads to more resilience. Teams that can trust their customers in that way can, in turn, take the time they need to make the investments that build a better product.<p>Perhaps this will be the blog post we write after our next major version upgrade: expectation setting around downtime _is_ the way to very high uptime.</p>
]]></description><pubDate>Wed, 13 Dec 2023 00:45:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=38621118</link><dc:creator>brentjanderson</dc:creator><comments>https://news.ycombinator.com/item?id=38621118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38621118</guid></item></channel></rss>