<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: feike</title><link>https://news.ycombinator.com/user?id=feike</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 15:23:23 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=feike" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by feike in "Pgbackrest is no longer being maintained"]]></title><description><![CDATA[
<p>pgbackrest is the most versatile piece of backup technology for PostgreSQL and in my experience the other products do not come close.<p>I am therefore quite sad to see this happen. It won't be easy to get feature parity with this great product.<p>I sincerely hope this is a reversible decision, or perhaps the postgres project could even absorb it into contrib.</p>
]]></description><pubDate>Mon, 27 Apr 2026 12:15:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47920587</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=47920587</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47920587</guid></item><item><title><![CDATA[New comment by feike in "Advent of Code 2024"]]></title><description><![CDATA[
<p>As every year, I try to solve every challenge with a single SQL statement:
<a href="https://gitlab.com/feike/adventofcode/-/tree/master/2024" rel="nofollow">https://gitlab.com/feike/adventofcode/-/tree/master/2024</a>. I'll likely get stuck again around day 12 or 13!</p>
]]></description><pubDate>Tue, 03 Dec 2024 06:06:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=42303405</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=42303405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42303405</guid></item><item><title><![CDATA[New comment by feike in "Loading a trillion rows of weather data into TimescaleDB"]]></title><description><![CDATA[
<p>Many libraries for Python, Rust, and Go support COPY BINARY.<p>The times I've tested it, the improvement over plain COPY (or COPY with CSV) was very small, whereas the binary format requires more work and thought upfront to ensure it actually works correctly.<p><a href="https://www.postgresql.org/docs/current/sql-copy.html" rel="nofollow">https://www.postgresql.org/docs/current/sql-copy.html</a></p>
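<p>For comparison, the three variants as psql one-liners (a sketch; the table and file names are made up for illustration):<p><pre><code>    -- plain text format (the default)
    \copy measurements FROM 'data.txt'
    -- CSV format
    \copy measurements FROM 'data.csv' WITH (FORMAT csv, HEADER)
    -- binary format: cheaper to parse, but type-sensitive and not human-readable
    \copy measurements FROM 'data.bin' WITH (FORMAT binary)</code></pre></p>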
]]></description><pubDate>Tue, 16 Apr 2024 15:19:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=40053124</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=40053124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40053124</guid></item><item><title><![CDATA[New comment by feike in "Loading a trillion rows of weather data into TimescaleDB"]]></title><description><![CDATA[
<p>Timescaler here. If you configure timescaledb.compress_segmentby well and the data suits the compression, you can achieve 20x or more compression.<p>(On some internal metrics data, I've seen a 98% reduction in size.)<p>One of the reasons this works is that the per-tuple overhead is paid only once per grouped row, which can cover as many as 1,000 rows.<p>The other is the compression algorithm itself, which can be TimescaleDB's own or plain PostgreSQL TOAST:<p><a href="https://www.timescale.com/blog/time-series-compression-algorithms-explained/" rel="nofollow">https://www.timescale.com/blog/time-series-compression-algor...</a>
<a href="https://www.postgresql.org/docs/current/storage-toast.html" rel="nofollow">https://www.postgresql.org/docs/current/storage-toast.html</a></p>
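<p>A minimal sketch of enabling this (assuming a hypertable named metrics with a device_id column; the names are illustrative):<p><pre><code>    ALTER TABLE metrics SET (
        timescaledb.compress,
        timescaledb.compress_segmentby = 'device_id',
        timescaledb.compress_orderby = 'time DESC'
    );</code></pre></p>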
]]></description><pubDate>Tue, 16 Apr 2024 15:09:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=40052968</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=40052968</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40052968</guid></item><item><title><![CDATA[New comment by feike in "Pg_hint_plan: Force PostgreSQL to execute query plans the way you want"]]></title><description><![CDATA[
<p>You should be able to do this nowadays with PG16:<p><pre><code>    INSERT INTO tbl1 VALUES ($1, $2) \bind 'first value' 'second value' \g
</code></pre>
<a href="https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-META-COMMAND-BIND" rel="nofollow">https://www.postgresql.org/docs/current/app-psql.html#APP-PS...</a><p>For older versions, you can do:<p><pre><code>    \set v_x 'first value'
    \set v_y 'second value'
    INSERT INTO tbl1 VALUES (:'v_x', :'v_y');
    \set v_x 'next value'
    INSERT INTO tbl1 VALUES (:'v_x', :'v_y');</code></pre></p>
]]></description><pubDate>Sat, 16 Mar 2024 14:13:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=39726119</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=39726119</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39726119</guid></item><item><title><![CDATA[New comment by feike in "Lenovo ThinkPad X1 Carbon G12 laptop review: First major refresh in three years"]]></title><description><![CDATA[
<p>You can put it upright on a piano and put sheet music at eye level.
I personally use flowkey + an electric piano connected through USB. Beats a tablet for me!</p>
]]></description><pubDate>Tue, 05 Mar 2024 19:29:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=39608133</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=39608133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39608133</guid></item><item><title><![CDATA[New comment by feike in "PostgreSQL: No More Vacuum, No More Bloat"]]></title><description><![CDATA[
<p>PostgreSQL has used this term for decades!<p>The oldest reference I can find is from 1998 (PostgreSQL 6.3), but it was probably in use even before that:<p>> Postgres offers substantial additional power by incorporating the following four additional basic concepts in such a way that users can easily extend the system:<p>classes
inheritance
types
functions<p>Other features provide additional power and flexibility:<p>constraints
triggers
rules
transaction integrity<p>These features put Postgres into the category of databases referred to as object-relational<p><a href="https://www.postgresql.org/docs/6.3/c0101.htm" rel="nofollow noreferrer">https://www.postgresql.org/docs/6.3/c0101.htm</a></p>
]]></description><pubDate>Sat, 15 Jul 2023 22:18:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=36741450</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=36741450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36741450</guid></item><item><title><![CDATA[New comment by feike in "PL/Rust 1.0: now a trusted language for Postgres"]]></title><description><![CDATA[
<p>The trust is not just banning unsafe, it is using a limited std:<p>> The "trusted" version of PL/Rust uses a unique fork of Rust's std entitled postgrestd when compiling LANGUAGE plrust user functions.<p><a href="https://github.com/tcdi/postgrestd">https://github.com/tcdi/postgrestd</a></p>
]]></description><pubDate>Sun, 09 Apr 2023 20:06:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=35505976</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=35505976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35505976</guid></item><item><title><![CDATA[New comment by feike in "Ask HN: How do you test SQL?"]]></title><description><![CDATA[
<p>Just did a sequential run (to get some better measurements), and this is an excerpt of the things happening in the PostgreSQL instance inside the Docker container, for creating and dropping the databases:<p><pre><code>    08:25:37.114 UTC [1456] LOG:  statement: CREATE DATABASE "test_1675239937111796557" WITH template = test_template
    [noise]
    08:25:48.002 UTC [1486] LOG:  statement: DROP DATABASE "test_1675239947937354435"

</code></pre>
Start time of first test:
2023-02-01 08:25:03.633 UTC<p>Finish time of last test:
2023-02-01 08:26:13.861 UTC<p>That is 82 tests in 70.2 seconds, or 0.856 seconds per test (sequentially).<p>In parallel, the same 82 tests take 6.941 seconds, or 0.085 seconds per test.</p>
]]></description><pubDate>Wed, 01 Feb 2023 08:34:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=34607975</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=34607975</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34607975</guid></item><item><title><![CDATA[New comment by feike in "Ask HN: How do you test SQL?"]]></title><description><![CDATA[
<p>We do this too for PostgreSQL, to ensure the tests are <i>really</i> fast:<p><pre><code>    - we create a template database using the migrations
    - for *every* integration test we do `CREATE DATABASE test123 TEMPLATE test_template;`
    - we tune the PostgreSQL instance inside Docker to speed things up, for example disabling synchronous_commit
</code></pre>
On a successful test, we drop the test123 database. On a failed test, we keep the database around so we can inspect it.<p>The really great thing about this approach (IMHO) is that you can validate specific constraint violations.<p>For example, exclusion constraints are great for modelling use cases where overlapping ranges must be avoided. In our (Go) code, the test cases can use the SQLSTATE code or the constraint name to verify we hit the error we expect to hit.<p>This approach is pretty much as fast as our unit tests (your mileage may vary), but it prevents way more bugs from being merged into our codebase.</p>
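<p>The steps above boil down to very little SQL (a sketch; the database names are illustrative):<p><pre><code>    -- once: create the template and run all migrations against it
    CREATE DATABASE test_template;
    -- per test: cloning a template is a cheap file-level copy
    CREATE DATABASE test123 TEMPLATE test_template;
    -- ... run the test against test123 ...
    DROP DATABASE test123;  -- only when the test passed</code></pre></p>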
]]></description><pubDate>Wed, 01 Feb 2023 08:06:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=34607793</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=34607793</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34607793</guid></item><item><title><![CDATA[New comment by feike in "Apache AGE, a PostgreSQL extension with graph database functionality"]]></title><description><![CDATA[
<p>If you keep reading, the words following your quote make it clear that PG15 should be supported:<p>"and will support PostgreSQL 13 and all the future releases of PostgreSQL."</p>
]]></description><pubDate>Thu, 03 Nov 2022 08:40:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=33448465</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=33448465</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33448465</guid></item><item><title><![CDATA[New comment by feike in "Parsing SQL"]]></title><description><![CDATA[
<p>A single query can write to multiple tables, using CTEs in PostgreSQL for example.<p>You could compose a SQL query that maps multiple result sets onto a single result set, although that feels a bit awkward:<p><pre><code>    WITH a AS (
        insert into a (k, v) values ('a', 1.0) returning *
    ), b AS (
        insert into b (k, v) values ('b', 2.0) returning *
    )
    SELECT
        row_to_json(a)
    FROM
        a
    UNION ALL
    SELECT
        row_to_json(b)
    FROM
        b;
</code></pre>
Returns:<p><pre><code>        row_to_json        
    --------------------------
    {"a_id":1,"k":"a","v":1}
    {"b_id":1,"k":"b","v":2}
    (2 rows)</code></pre></p>
]]></description><pubDate>Tue, 23 Aug 2022 08:46:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=32562128</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=32562128</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32562128</guid></item><item><title><![CDATA[New comment by feike in "We switched to cursor-based pagination"]]></title><description><![CDATA[
<p>Reminds me of Markus Winand, who hands out stickers at database conferences banning OFFSET.<p>His site is a great resource for anyone wanting to take a deeper dive into SQL performance:<p><a href="https://use-the-index-luke.com/sql/partial-results/fetch-next-page" rel="nofollow">https://use-the-index-luke.com/sql/partial-results/fetch-nex...</a></p>
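<p>The core idea is to replace OFFSET with a seek predicate on the last row already seen, so an index can jump straight to the next page (a sketch, assuming a hypothetical articles table):<p><pre><code>    -- offset pagination: scans and throws away all skipped rows
    SELECT * FROM articles ORDER BY id LIMIT 10 OFFSET 10000;

    -- keyset pagination: $1 is the id of the last row on the previous page
    SELECT * FROM articles WHERE id > $1 ORDER BY id LIMIT 10;</code></pre></p>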
]]></description><pubDate>Sun, 21 Aug 2022 18:04:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=32542747</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=32542747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32542747</guid></item><item><title><![CDATA[New comment by feike in "Advent of Code 2021 with PostgreSQL"]]></title><description><![CDATA[
<p>You can use regexes to help you split the file into pieces (this is PostgreSQL; I expect other DBMSs to have similar functions available):<p><pre><code>    SELECT
        lineno::int              AS line,
        ((lineno-3)/6)::smallint AS card,
        ((lineno-3)%6)::smallint AS y,
        (col - 1)::smallint      AS x,
        value::smallint          AS value
    FROM
        regexp_split_to_table($1, '\n') WITH ORDINALITY AS sub(line, lineno)
    CROSS JOIN
        regexp_split_to_table(ltrim(line, ' '), '(\s+|,)') WITH ORDINALITY AS sub2(value, col)
    WHERE
        line != ''
        AND value != ''</code></pre></p>
]]></description><pubDate>Tue, 07 Dec 2021 10:30:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=29470963</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=29470963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29470963</guid></item><item><title><![CDATA[New comment by feike in "Advent of Code 2021 with PostgreSQL"]]></title><description><![CDATA[
<p>Shameless plug: I've been doing this (without much success after day 8 or so) for a couple of years now. My rules are:<p>- 1 statement per part
- No schema needed<p><a href="https://gitlab.com/feike/adventofcode/-/tree/master/2021" rel="nofollow">https://gitlab.com/feike/adventofcode/-/tree/master/2021</a><p>Some others doing sql based solutions:<p><a href="https://github.com/xocolatl/advent-of-code" rel="nofollow">https://github.com/xocolatl/advent-of-code</a>
<a href="https://github.com/zr40/adventofcode" rel="nofollow">https://github.com/zr40/adventofcode</a></p>
]]></description><pubDate>Tue, 07 Dec 2021 02:13:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=29468425</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=29468425</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29468425</guid></item><item><title><![CDATA[New comment by feike in "Jepsen: PostgreSQL 12.3"]]></title><description><![CDATA[
<p>Patroni does have a synchronous_mode_strict setting, which may be what you're looking for:<p>This parameter prevents Patroni from switching off synchronous replication on the primary when no synchronous standby candidates are available. As a downside, the primary is not available for writes (unless the Postgres transaction explicitly turns off synchronous_mode), blocking all client write requests until at least one synchronous replica comes up.<p><a href="https://patroni.readthedocs.io/en/latest/replication_modes.html#replication-modes" rel="nofollow">https://patroni.readthedocs.io/en/latest/replication_modes.h...</a><p>edit: seems I missed this discussion on twitter:
<a href="https://twitter.com/jepsen_io/status/1265626035380346881" rel="nofollow">https://twitter.com/jepsen_io/status/1265626035380346881</a></p>
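<p>In Patroni's configuration this is just a pair of flags (a sketch; consult the docs for your Patroni version):<p><pre><code>    synchronous_mode: true
    synchronous_mode_strict: true</code></pre></p>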
]]></description><pubDate>Fri, 12 Jun 2020 14:43:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=23499702</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=23499702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23499702</guid></item><item><title><![CDATA[New comment by feike in "Jepsen: PostgreSQL 12.3"]]></title><description><![CDATA[
<p>This postgresql mailing list thread allows you to read along with the PostgreSQL developers and Jepsen, seems like a very useful discussion:
<a href="https://www.postgresql.org/message-id/flat/db7b729d-0226-d162-a126-8a8ab2dc4443%40jepsen.io" rel="nofollow">https://www.postgresql.org/message-id/flat/db7b729d-0226-d16...</a></p>
]]></description><pubDate>Fri, 12 Jun 2020 14:24:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=23499516</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=23499516</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23499516</guid></item><item><title><![CDATA[New comment by feike in "Jepsen: MongoDB 4.2.6"]]></title><description><![CDATA[
<p>> Can Patroni tell if master node is not responsive because it is busy vs dead<p>No. But the contract Patroni has is this:<p>I only serve as a master (primary) if I hold the lock.
If I do not hold the lock, I will demote.<p>This ensures that only one primary can be active at any given point in time, even if the network is partitioned.<p>This in and of itself does not guarantee no split-brain situations: one can occur if writes were made on the former primary but not yet on the future primary.
This, however, can be mitigated with synchronous replication.</p>
]]></description><pubDate>Mon, 25 May 2020 09:10:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=23299395</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=23299395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23299395</guid></item><item><title><![CDATA[New comment by feike in "Jepsen: MongoDB 4.2.6"]]></title><description><![CDATA[
<p>One of the authors of Patroni here.<p>Automatic failover for PostgreSQL works great and can be done safely if combined with synchronous replication.<p>Multiple tools implement this correctly:<p><a href="https://patroni.readthedocs.io/en/latest/replication_modes.html#postgresql-synchronous-replication" rel="nofollow">https://patroni.readthedocs.io/en/latest/replication_modes.h...</a>
<a href="https://github.com/sorintlab/stolon/blob/master/doc/syncrepl.md#synchronous-replication" rel="nofollow">https://github.com/sorintlab/stolon/blob/master/doc/syncrepl...</a><p>Quoting a former colleague here: "if it hurts, do it more often". That is what you should do with your PostgreSQL failovers.<p>I have clusters running on timelines in the hundreds without a byte of data loss, thanks to synchronous replication, tools that help with leader election, and just doing it often.</p>
]]></description><pubDate>Sun, 24 May 2020 19:15:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=23293863</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=23293863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23293863</guid></item><item><title><![CDATA[New comment by feike in "Advent of Code 2019"]]></title><description><![CDATA[
<p>I've been doing this by having a single SQL statement per puzzle. Didn't get beyond day 7 last year, but so far so good:<p><a href="https://gitlab.com/feike/adventofcode/tree/master/2019" rel="nofollow">https://gitlab.com/feike/adventofcode/tree/master/2019</a></p>
]]></description><pubDate>Tue, 03 Dec 2019 19:21:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=21695645</link><dc:creator>feike</dc:creator><comments>https://news.ycombinator.com/item?id=21695645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=21695645</guid></item></channel></rss>