<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: JoelJacobson</title><link>https://news.ycombinator.com/user?id=JoelJacobson</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 17:13:24 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=JoelJacobson" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by JoelJacobson in "Eight years of wanting, three months of building with AI"]]></title><description><![CDATA[
<p>I agree! It should be very stable, IMO. If not, then please send a bug report and we'll look into it. Also, now it scales well with the number of listening connections (given clients listen on unique channel names): <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=282b1cde9dedf456ecf02eb27caf086023a7bb71" rel="nofollow">https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit...</a></p>
]]></description><pubDate>Mon, 06 Apr 2026 06:55:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47657724</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=47657724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47657724</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Rust--: Rust without the borrow checker"]]></title><description><![CDATA[
<p>Rust without async maybe?</p>
]]></description><pubDate>Thu, 01 Jan 2026 19:14:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46457044</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=46457044</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46457044</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Zig's new plan for asynchronous programs"]]></title><description><![CDATA[
<p>What I really like about concurrent() is that it improves readability and expressiveness, making it clear when writing and reading that "this code MUST run in parallel".</p>
]]></description><pubDate>Wed, 03 Dec 2025 04:40:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46130375</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=46130375</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46130375</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Listen to Database Changes Through the Postgres WAL"]]></title><description><![CDATA[
<p>It's a common misconception that the single queue is a poor design choice. The user reports of notifications/second severely degrading with lots of backends cannot be explained by the single-queue design. An efficient single-queue implementation should flatten out as parallelism increases, not degrade towards zero.</p>
]]></description><pubDate>Mon, 17 Nov 2025 10:47:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45952456</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=45952456</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45952456</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Listen to Database Changes Through the Postgres WAL"]]></title><description><![CDATA[
<p>In the linked "Optimize LISTEN/NOTIFY" pgsql-hackers thread, I've shared a lot of benchmark results for different workloads, including results for how PostgreSQL currently works (this is "master" in the benchmark results), which can help you better understand the expectations for different workloads.<p>The work-around we used at Trustly (a company I co-founded) is a component named `allas`, which a colleague of mine at the time, Marko Tikkaja, created to solve our problems; it massively reduced the load on our servers. Marko has open sourced and published this work here: <a href="https://github.com/johto/allas" rel="nofollow">https://github.com/johto/allas</a><p>Basically, `allas` opens a single connection to PostgreSQL, on which it LISTENs on all the channels it needs. Clients connect to `allas` over the PostgreSQL protocol, so it's basically faking a PostgreSQL server: when a client issues LISTEN on a channel via allas, allas issues LISTEN on that channel on the real PostgreSQL server over its single connection. Thanks to `allas` being implemented in Go, using Go's efficient goroutines for concurrency, it scales efficiently to lots and lots of connections. I'm not a Go expert myself, but I've understood Go is well suited for this type of application.<p>This component is still being used at Trustly, and is battle-tested and production grade.<p>That said, it would of course be much better to avoid the need for a separate component and fix the scalability issues in core PostgreSQL, so that's what I'm currently working on.</p>
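<p>The multiplexing idea behind `allas` can be sketched in a few lines (a hypothetical Python model for illustration, not the actual Go implementation): client LISTEN requests are deduplicated onto a single upstream connection.

```python
class ListenMultiplexer:
    """Fans client LISTEN requests into one upstream connection,
    issuing LISTEN upstream only the first time a channel is needed."""

    def __init__(self, upstream_listen):
        self.upstream_listen = upstream_listen  # callable that sends LISTEN upstream
        self.subscribers = {}                   # channel -> set of client ids

    def listen(self, client_id, channel):
        subs = self.subscribers.setdefault(channel, set())
        if not subs:
            # First subscriber for this channel: LISTEN once upstream.
            self.upstream_listen(channel)
        subs.add(client_id)

    def dispatch(self, channel, payload):
        # A notification arriving on the single upstream connection is
        # fanned out to every client subscribed to that channel.
        return [(c, payload) for c in sorted(self.subscribers.get(channel, set()))]
```

With this shape, 1,000 clients listening on the same channel cost PostgreSQL a single listening backend instead of 1,000.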
]]></description><pubDate>Mon, 17 Nov 2025 10:21:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45952340</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=45952340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45952340</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Listen to Database Changes Through the Postgres WAL"]]></title><description><![CDATA[
<p>Thanks for the report. For that use-case (a single application using a single connection with a LISTEN), it's expected that it should perform well, since there is only a single backend to context-switch to when each NOTIFY signals it.</p>
]]></description><pubDate>Mon, 17 Nov 2025 08:51:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45951919</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=45951919</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45951919</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Listen to Database Changes Through the Postgres WAL"]]></title><description><![CDATA[
<p>Here is the Commitfest entry if you want to help with reviewing/development/testing of the patch: <a href="https://commitfest.postgresql.org/patch/6078/" rel="nofollow">https://commitfest.postgresql.org/patch/6078/</a></p>
]]></description><pubDate>Mon, 17 Nov 2025 08:40:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45951864</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=45951864</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45951864</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Listen to Database Changes Through the Postgres WAL"]]></title><description><![CDATA[
<p>> It works, but suddenly your query times explode! Instead of doing 1 million transactions per second* you can now do only 3 (*These numbers were exaggerated for dramatic effect)<p>In general, a single-queue design doesn’t make throughput collapse when you add more parallelism; it just gives you a fixed ceiling. With a well-designed queue, throughput goes up with concurrency, then flattens when the serialized section (the queue) saturates, maybe sagging a bit from context switching.<p>If instead you see performance severely degrade as you add workers, that typically means there’s an additional problem beyond “we have one queue” — things like broadcast wakeups (“every event wakes every listener”), global scans on each event, or other O(N) work per operation. That’s a very different, and more serious, scalability bug than simply relying on a single queue.</p>
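<p>The shape of these two curves can be sketched with a toy analytic model (Python with made-up constants; this illustrates the argument above, it is not a measurement of PostgreSQL):

```python
def ceiling_tps(workers, parallel_s=1e-3, serial_s=1e-4):
    # Serialized queue: each operation spends parallel_s outside the
    # queue and serial_s inside it. Throughput rises with workers and
    # flattens at the ceiling 1/serial_s (here 10,000 tps).
    return workers / (parallel_s + workers * serial_s)

def broadcast_tps(workers, parallel_s=1e-3, per_listener_s=1e-4):
    # O(N) work per operation (e.g. waking every listener): the
    # serialized cost itself grows with workers, so throughput peaks
    # and then degrades towards zero.
    return workers / (parallel_s + workers * workers * per_listener_s)
```

The first curve flattens; the second collapses. Observing the second shape is the sign of a per-operation O(N) cost, not of the queue itself.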
]]></description><pubDate>Mon, 17 Nov 2025 08:01:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45951685</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=45951685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45951685</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Listen to Database Changes Through the Postgres WAL"]]></title><description><![CDATA[
<p>> The problem with Postgres' NOTIFY is that all notifications go through a single queue!<p>> Even if you have 20 database connections making 20 transactions in parallel, all of them need to wait for their turn to lock the notification queue, add their notification, and unlock the queue again. This creates a bottleneck especially in high-throughput databases.<p>We're currently working hard on optimizing LISTEN/NOTIFY: <a href="https://www.postgresql.org/message-id/flat/6899c044-4a82-49be-8117-e6f669765f7e%40app.fastmail.com" rel="nofollow">https://www.postgresql.org/message-id/flat/6899c044-4a82-49b...</a><p>If you have experience with an actual workload where you are currently seeing performance/scalability problems, I would be interested in hearing from you, to better understand that workload. In some workloads, you might only listen to a single channel. For such single-channel workloads, the current implementation seems hard to tweak further, given the semantics and in-commit-order guarantees. However, for multi-channel workloads, we could do a lot better, which is what the linked patch is about. The main problem with the current implementation for multi-channel workloads is that we currently signal and wake <i>all</i> listening backends (a backend is the PostgreSQL process your client is connected to), even those not interested in the specific channels being notified in the current commit. This means that if you have 100 open connections, each of whose clients has issued a LISTEN on a different channel, then when someone does a NOTIFY on one of those channels, all 100 backends will be signaled instead of just the one backend that listens on that channel. For multi-channel workloads, this can mean an enormous extra cost from the context switching caused by the signaling.<p>I would greatly appreciate it if you could reply to this comment and share the workloads where you've had problems with LISTEN/NOTIFY: approximately how many listening backends you had, how many channels, and the mix of volume on those channels. Anything that helps us run realistic simulations of such workloads will improve the benchmark tests we're working on. Thank you.</p>
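<p>A tiny counting model (hypothetical Python, purely illustrative of the signaling behavior described above) makes the cost difference concrete:

```python
def signals_current(listeners_by_channel, notified_channel):
    # Current behavior: every listening backend in the database is
    # signaled, regardless of which channel was notified.
    return sum(len(backends) for backends in listeners_by_channel.values())

def signals_targeted(listeners_by_channel, notified_channel):
    # Targeted behavior: only backends listening on the notified
    # channel are signaled.
    return len(listeners_by_channel.get(notified_channel, ()))
```

With 100 backends each on its own channel, one NOTIFY currently triggers 100 signals (and 100 context switches) where 1 would suffice.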
]]></description><pubDate>Mon, 17 Nov 2025 06:55:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45951379</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=45951379</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45951379</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Listen to Database Changes Through the Postgres WAL"]]></title><description><![CDATA[
<p>> If you call pg_notify or NOTIFY inside a trigger, it will get called 100,000 times and send out 100,000 notifications if you change 100,000 rows in a single transaction which from a performance perspective is ... not ideal.<p>This is only true if those notifications are different; if they are identical, such as when the notification merely alerts listeners that some table has new data (for cache invalidation), they are sent out as one notification only. See the source code comment in async.c:<p><pre><code>     *   Duplicate notifications from the same transaction are sent out as one
     *   notification only. This is done to save work when for example a trigger
     *   on a 2 million row table fires a notification for each row that has been
     *   changed. If the application needs to receive every single notification
     *   that has been sent, it can easily add some unique string into the extra
     *   payload parameter.</code></pre></p>
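<p>The coalescing described in that comment can be modeled like this (a simplified Python sketch of the commit-time queueing logic, not the actual async.c implementation):

```python
def queue_notifications(notifications):
    """Collapse duplicate (channel, payload) pairs within one
    transaction, preserving first-seen order."""
    seen, queued = set(), []
    for channel, payload in notifications:
        if (channel, payload) not in seen:
            seen.add((channel, payload))
            queued.append((channel, payload))
    return queued
```

A per-row trigger that sends the same ('inval', 'mytable') pair 100,000 times thus queues it once; putting something unique (e.g. the row's primary key) in the payload defeats the coalescing when every notification must be delivered.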
]]></description><pubDate>Mon, 17 Nov 2025 06:39:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=45951321</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=45951321</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45951321</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Control structures in programming languages: from goto to algebraic effects"]]></title><description><![CDATA[
<p>I think the sweet spot is to use exceptions for bugs. If the error is expected, make it data.</p>
]]></description><pubDate>Sun, 09 Nov 2025 06:59:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=45863513</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=45863513</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45863513</guid></item><item><title><![CDATA[New comment by JoelJacobson in "What’s New in PostgreSQL 18 – a Developer’s Perspective"]]></title><description><![CDATA[
<p>It made me happy to see that the pg_get_acl() function, which I was involved in adding, is appreciated by users. I think there is still much room for improvement in the space of querying privileges. I think most users would struggle to come up with the query from the article:<p><pre><code>    postgres=# SELECT
        (pg_identify_object(s.classid,s.objid,s.objsubid)).*,
        pg_catalog.pg_get_acl(s.classid,s.objid,s.objsubid) AS acl
    FROM pg_catalog.pg_shdepend AS s
    JOIN pg_catalog.pg_database AS d
        ON d.datname = current_database() AND
        d.oid = s.dbid
    JOIN pg_catalog.pg_authid AS a
        ON a.oid = s.refobjid AND
        s.refclassid = 'pg_authid'::regclass
    WHERE s.deptype = 'a';
    -[ RECORD 1 ]-----------------------------------------
    type     | table
    schema   | public
    name     | testtab
    identity | public.testtab
    acl      | {postgres=arwdDxtm/postgres,foo=r/postgres}

</code></pre>
What I really wanted to add was two new system views, pg_ownerships and pg_privileges [1]. The pg_get_acl() function was a dependency that we needed to get in place first. In the end, I withdrew the patch trying to add these views. If there is enough interest from users, I might consider picking up the task of working out the remaining obstacles.<p>Do people here need pg_ownerships and/or pg_privileges?<p>[1] <a href="https://www.postgresql.org/message-id/flat/bbe7d1cb-0435-4ee6-a9f5-7dbc79ab84b2%40app.fastmail.com" rel="nofollow">https://www.postgresql.org/message-id/flat/bbe7d1cb-0435-4ee...</a></p>
]]></description><pubDate>Sun, 28 Sep 2025 17:15:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45406039</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=45406039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45406039</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Ask HN: How do/did you use PostgreSQL's LISTEN/NOTIFY in production?"]]></title><description><![CDATA[
<p>I'll add comments to this thread with GitHub projects that I can find by searching for "postgres listen/notify" on GitHub.<p><a href="https://github.com/graphile/worker" rel="nofollow">https://github.com/graphile/worker</a><p><pre><code>    // Line 535 in src/main.ts
    client.query('LISTEN "jobs:insert"; LISTEN "worker:migrate";')
</code></pre>
Every worker pool seems to listen on the same shared channels:
jobs:insert - all workers get notified when new jobs are added
worker:migrate - all workers get notified about database migrations</p>
]]></description><pubDate>Thu, 07 Aug 2025 07:48:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44821729</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=44821729</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44821729</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Ask HN: How do/did you use PostgreSQL's LISTEN/NOTIFY in production?"]]></title><description><![CDATA[
<p>Thanks! I found a link to a GitHub Issue where they described their experienced problems: <a href="https://github.com/pulp/pulpcore/issues/6805" rel="nofollow">https://github.com/pulp/pulpcore/issues/6805</a></p>
]]></description><pubDate>Thu, 07 Aug 2025 07:16:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44821519</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=44821519</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44821519</guid></item><item><title><![CDATA[Ask HN: How do/did you use PostgreSQL's LISTEN/NOTIFY in production?]]></title><description><![CDATA[
<p>Recently there was an HN thread:  
"Postgres LISTEN/NOTIFY does not scale",
https://news.ycombinator.com/item?id=44490510<p>We're now working on improving the scalability of LISTEN/NOTIFY in PostgreSQL, and
to guide that work, I'd like to better understand how it's used (or was used) in
real-world systems. What works well? What doesn't?<p>The current implementation has some known scalability bottlenecks:<p>1. Thundering Herd Problem:  
   A NOTIFY wakes up <i>all</i> listening backends in the current database, even those
   not listening on the notified channel. This is inefficient when many
   listeners are each listening to their own channels (e.g. in job queues).<p>2. Commit Lock Contention:  
   NOTIFY operations are serialized behind a heavyweight lock at transaction commit.
   This can become a bottleneck when many transactions send notifications in parallel.<p>If you've used LISTEN/NOTIFY in production, I'd love to hear:<p>- What is/was your use case?<p>- Does each client listen on its own channel, or do they share channels?<p>- How many listening backend processes?<p>- How many NOTIFYs in parallel?<p>- Are you sending payloads? If so, how large?<p>- What worked well for you? What didn't?<p>- Did you hit any scalability limits?<p>Feedback much appreciated, thanks!<p>/Joel</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44821373">https://news.ycombinator.com/item?id=44821373</a></p>
<p>Points: 2</p>
<p># Comments: 4</p>
]]></description><pubDate>Thu, 07 Aug 2025 06:52:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=44821373</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=44821373</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44821373</guid></item><item><title><![CDATA[New comment by JoelJacobson in "From XML to JSON to CBOR"]]></title><description><![CDATA[
<p>Fun fact: CBOR is used within the WebAuthn (Passkey) protocol.<p>To do Passkey-verification server-side, I had to implement a pure-SQL/PLpgSQL CBOR parser, out of fear that a C-implementation could crash the PostgreSQL server: <a href="https://github.com/truthly/pg-cbor">https://github.com/truthly/pg-cbor</a></p>
]]></description><pubDate>Wed, 30 Jul 2025 21:09:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44739513</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=44739513</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44739513</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Postgres LISTEN/NOTIFY does not scale"]]></title><description><![CDATA[
<p>Hey folks, I ran into similar scalability issues and ended up building a benchmark tool to analyze exactly how LISTEN/NOTIFY behaves as you scale up the number of listeners.<p>Turns out that all Postgres versions from 9.6 through current master scale linearly with the number of idle listeners — about 13 μs extra latency per connection. That adds up fast: with 1,000 idle listeners, a NOTIFY round-trip goes from ~0.4 ms to ~14 ms.<p>To better understand the bottlenecks, I wrote both a benchmark tool and a proof-of-concept patch that replaces the O(N) backend scan with a shared hash table for the single-listener case — and it brings latency down to near-O(1), even with thousands of listeners.<p>Full benchmark, source, and analysis here:
 <a href="https://github.com/joelonsql/pg-bench-listen-notify">https://github.com/joelonsql/pg-bench-listen-notify</a><p>No proposals yet on what to do upstream, just trying to gather interest and surface the performance cliff. Feedback welcome.</p>
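<p>The core idea of that proof-of-concept can be illustrated with a minimal model (hypothetical Python; the real patch operates on shared-memory structures in C): replace a scan over all backends with a channel-keyed hash lookup.

```python
def wake_scan(backends, channel):
    # O(N) in total backends: visit every backend and check whether it
    # subscribes to the notified channel (the pre-patch behavior).
    return [b for b, chans in backends.items() if channel in chans]

def build_index(backends):
    # Maintain an inverted index: channel -> set of listening backends.
    index = {}
    for b, chans in backends.items():
        for ch in chans:
            index.setdefault(ch, set()).add(b)
    return index

def wake_hashed(index, channel):
    # O(1) expected in total backends: look the channel up directly,
    # paying only for the backends that actually listen on it.
    return sorted(index.get(channel, set()))
```

Both return the same set of backends to wake; only the cost per NOTIFY differs, which is where the ~13 μs-per-idle-listener latency goes away.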
]]></description><pubDate>Fri, 11 Jul 2025 10:52:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44530687</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=44530687</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44530687</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Auntie PDF – an open source app built using Mistral OCR"]]></title><description><![CDATA[
<p>Thanks for creating this, really useful!<p>It would be nice to have a [Download Combined Rendered] button to download a self-contained .html web page of the combined rendered output.</p>
]]></description><pubDate>Sat, 08 Mar 2025 06:06:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43297892</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=43297892</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43297892</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Proposal: Add "No Screen Sharing" Flag for Secure Messaging Apps"]]></title><description><![CDATA[
<p>Thanks for sharing your perspective—I really appreciate it. In hindsight, I should have given more weight to the "Any better approaches?" part of the original post. I fully acknowledge that my proposal involves trade-offs and isn't a one-size-fits-all solution.<p>That said, I’d like to explore how we might achieve security by design without sacrificing user experience. First, let’s agree on one core principle: if a user decides to share their screen, the OS should treat that choice uniformly across all apps—meaning it must always share the entire screen.<p>Given that, let's think about other ideas to address the risk scenario: a user might unwittingly share their screen with an adversary and then start a top-secret chat, accidentally leaking sensitive information. Ideally, users handling top-secret data would be exceptionally cautious, but in practice, mistakes happen.<p>Here's an alternative approach: a "Secret Chat Room" feature, that would rely upon OS checks, explicitly authorized by the user. Think of it as akin to physical secret meeting rooms with soundproof walls and Faraday cages—places where sensitive conversations are truly isolated. When a user enters such a room, they'd see a prompt like:<p><pre><code>    You are now entering a Secret Chat Room.
    This room is designed to ensure that no eavesdropping (such as keystroke logging, microphone tapping, or unauthorized screen sharing) is occurring.
    To proceed, please authorize the OS to perform an integrity check. You’ll be allowed in only if this check is successful.
</code></pre>
To preserve privacy and avoid penalizing users with poor security practices, the OS would return only one bit of information:<p><pre><code>    1: The user authorized the check AND it succeeded.
    0: Either the user did not authorize the check OR the check failed.
</code></pre>
This binary signal prevents the app from knowing whether a failure was due to a deliberate user choice or a technical issue, thus providing plausible deniability.<p>What do you think about this approach? I'd love to hear your thoughts on refining it further to balance robust security with a seamless user experience.</p>
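<p>The one-bit result reduces to a single conjunction (a trivial sketch with hypothetical names, just to make the plausible-deniability property explicit):

```python
def integrity_bit(user_authorized: bool, check_passed: bool) -> int:
    # 1 only if the user authorized the check AND it succeeded.
    # A 0 does not reveal which condition failed, so declining the
    # check is indistinguishable from failing it.
    return 1 if (user_authorized and check_passed) else 0
```

Because both "declined" and "failed" map to 0, the app cannot penalize a user specifically for refusing the check.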
]]></description><pubDate>Sun, 23 Feb 2025 11:29:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=43148558</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=43148558</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43148558</guid></item><item><title><![CDATA[New comment by JoelJacobson in "Proposal: Add "No Screen Sharing" Flag for Secure Messaging Apps"]]></title><description><![CDATA[
<p>I agree that having options is crucial, but we also need to consider how these options fit together to ensure system-wide safety by design. To actually mitigate screen-sharing risks, you need a combined hardware/OS and secure communication app where the “No Screen Sharing” feature can’t be turned off—and that setup has to be widely used for it to matter at scale.<p>Hardware/OS choice is the first step. Some platforms, like iOS, are already more sandboxed than, say, Android. I personally use iOS and feel comfortable trusting Apple’s approach, even though zero-day exploits remain a possibility. For my server needs, I use Linux and appreciate full control and root access—but that’s a separate use case.<p>App choice is the second step. Secure messengers like Signal prioritize privacy as a core feature, while many other messaging apps don’t. If a few high-profile apps enforced “No Screen Sharing,” the people who genuinely need or want to share their screen could always switch to a different app. So in practice, this feature wouldn’t prevent screen sharing entirely; it would just block it in contexts where security is paramount.<p>All of which is to say: optionality still exists—you can choose a less-restrictive OS or a different communication app. But for those who opt into a more locked-down environment, having a secure messenger that outright prevents screen sharing can make all the difference in avoiding accidental leaks or social engineering attacks.</p>
]]></description><pubDate>Fri, 21 Feb 2025 02:51:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=43123442</link><dc:creator>JoelJacobson</dc:creator><comments>https://news.ycombinator.com/item?id=43123442</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43123442</guid></item></channel></rss>