<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sweatybridge</title><link>https://news.ycombinator.com/user?id=sweatybridge</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 06:08:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sweatybridge" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sweatybridge in "Declarative Schemas for simpler database management"]]></title><description><![CDATA[
<p>> it doesn’t tell you the consequences of changing an attribute (database restart or other downtime-inducing stuff)<p>Modern diff tools are designed to provide better guardrails in these situations. For example, pg-schema-diff [0] tries to generate zero-downtime migrations using lock-free operations, and it warns you about potentially hazardous migrations.<p>I think it's a good direction to bake these best practices into the tooling itself, rather than relying purely on the experience of engineers.<p>[0] <a href="https://github.com/stripe/pg-schema-diff">https://github.com/stripe/pg-schema-diff</a></p>
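As a rough sketch of how such a tool fits into a workflow (the subcommand and flag names below are my recollection of pg-schema-diff's CLI and should be treated as assumptions; verify them against the project README):

```shell
# Preview the generated migration plan, including hazard warnings,
# without applying anything. --dsn and --schema-dir are assumed flag names.
pg-schema-diff plan \
  --dsn "postgresql://postgres:password@localhost:5432/postgres" \
  --schema-dir ./schema

# Apply the plan; statements flagged as hazardous must be
# explicitly acknowledged before they run.
pg-schema-diff apply \
  --dsn "postgresql://postgres:password@localhost:5432/postgres" \
  --schema-dir ./schema
```

The two-step plan/apply split is what lets a reviewer see the downtime implications before anything touches the database.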
]]></description><pubDate>Thu, 03 Apr 2025 20:52:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=43575235</link><dc:creator>sweatybridge</dc:creator><comments>https://news.ycombinator.com/item?id=43575235</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43575235</guid></item><item><title><![CDATA[New comment by sweatybridge in "Declarative Schemas for simpler database management"]]></title><description><![CDATA[
<p>> When they started integrating this tool on some trial candidates they found SO many inconsistencies between environments: server settings differences, extraneous or missing indexes, vestigial "temp" tables created during previous migrations, enum tables that should be static with extra or missing rows, etc, etc. All the environment differences meant that deployments had to be manual in the past. Once they got through the initial pain of syncing up the environments the whole department got way more efficient.<p>That was exactly our experience too.<p>Perhaps we didn't highlight enough in the blog post that schema diff is not meant to replace manual review. It simply provides a good starting point to iterate on a migration, which often boosts efficiency.</p>
]]></description><pubDate>Thu, 03 Apr 2025 19:54:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43574580</link><dc:creator>sweatybridge</dc:creator><comments>https://news.ycombinator.com/item?id=43574580</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43574580</guid></item><item><title><![CDATA[New comment by sweatybridge in "Supabase Local Dev: migrations, branching, and observability"]]></title><description><![CDATA[
<p>Thank you for the helpful feedback.<p>> Using supabase migration up would mean moving the latest migration out of the migrations folder, running supabase db reset, moving the file back in and then calling supabase migration up.<p>We can definitely do a better job here. I'm adding support for a db reset --version flag [0]. This should allow you to run migration up without moving files between directories.<p>> I wasn't really sure what the actual outcome would look like If I have something like this in a migration script<p>Agree that we can do a better job with the documentation for the squash command. I will add more examples.<p>The current implementation performs a schema-only dump of the local database, which is created by running the local migration files. Any insert statements will be excluded from the dump. I believe this is not the correct behaviour, so I've filed a bug [1] to fix it in the next stable release.<p>[0] <a href="https://github.com/supabase/cli/pull/1369">https://github.com/supabase/cli/pull/1369</a><p>[1] <a href="https://github.com/supabase/cli/issues/1370">https://github.com/supabase/cli/issues/1370</a></p>
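For reference, the file-shuffling workaround described above looks roughly like this (the migration filename is a hypothetical example):

```shell
# Temporarily move the latest migration out so reset rebuilds
# the local db from the remaining migration files only.
mv supabase/migrations/20230810120000_new_change.sql /tmp/
supabase db reset

# Restore the file, then apply only the pending migration.
mv /tmp/20230810120000_new_change.sql supabase/migrations/
supabase migration up
```

The --version flag mentioned above would collapse these four steps into a single reset to a known point.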
]]></description><pubDate>Thu, 10 Aug 2023 15:05:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=37077104</link><dc:creator>sweatybridge</dc:creator><comments>https://news.ycombinator.com/item?id=37077104</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37077104</guid></item><item><title><![CDATA[New comment by sweatybridge in "Supabase Local Dev: migrations, branching, and observability"]]></title><description><![CDATA[
<p>> if your latest migration is destructive - you want to seed data and then run the next migration.<p>We have added a supabase migration up [0] command that runs only pending migrations (i.e. those that don't exist in the local db's migration history table). You can use that to test a destructive migration locally with data from seed.sql.<p>After testing, you will want to update your seed.sql with a data-only dump [1] from your local db. That would make CI happy with both the new migration and the new seed file.<p>> 2. run a preseed script that disables any triggers and removes default data that has been previously seeded in migrations<p>It sounds like the default data is no longer relevant for your local development. If so, I would suggest running supabase migration squash [2] to remove the default data.<p>To disable triggers before seeding data, you can add the following line to seed.sql [3]:<p>SET session_replication_role = replica;<p>[0] <a href="https://supabase.com/docs/reference/cli/supabase-migration-up">https://supabase.com/docs/reference/cli/supabase-migration-u...</a><p>[1] <a href="https://supabase.com/docs/reference/cli/supabase-db-dump">https://supabase.com/docs/reference/cli/supabase-db-dump</a><p>[2] <a href="https://supabase.com/docs/reference/cli/supabase-migration-squash">https://supabase.com/docs/reference/cli/supabase-migration-s...</a><p>[3] <a href="https://stackoverflow.com/questions/3942258/how-do-i-temporarily-disable-triggers-in-postgresql" rel="nofollow noreferrer">https://stackoverflow.com/questions/3942258/how-do-i-tempora...</a></p>
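Putting it together, a seed.sql along these lines (the table and rows are hypothetical placeholders) disables triggers for the duration of the seeding session:

```sql
-- Skip user triggers and FK enforcement for this session while seeding.
set session_replication_role = replica;

-- Hypothetical seed data.
insert into public.countries (code, name) values ('SG', 'Singapore');

-- Restore normal trigger behaviour for later statements in this session.
set session_replication_role = default;
```

Since session_replication_role is a session-level setting, it does not affect other connections and resets automatically when the seeding connection closes.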
]]></description><pubDate>Thu, 10 Aug 2023 08:46:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=37073593</link><dc:creator>sweatybridge</dc:creator><comments>https://news.ycombinator.com/item?id=37073593</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37073593</guid></item><item><title><![CDATA[New comment by sweatybridge in "Migrating from Supabase"]]></title><description><![CDATA[
<p>Hello, I work on the CLI full time. Things have certainly improved over the last few months for migrating self-hosted databases with this tool.<p>Currently, all supabase db and migration commands support a --db-url flag [1], which allows you to point the CLI at any Postgres database via a connection string.<p>If there's any use case I missed, please feel free to open a GitHub issue and I will look into it promptly.<p>[1] <a href="https://supabase.com/docs/reference/cli/supabase-db">https://supabase.com/docs/reference/cli/supabase-db</a></p>
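For example (the connection string is a placeholder for your own database):

```shell
# Any `supabase db` or `supabase migration` command accepts --db-url,
# so it can target a self-hosted Postgres instance directly.
DB_URL="postgresql://postgres:password@db.example.com:5432/postgres"

supabase db dump --db-url "$DB_URL" > schema.sql   # dump the remote schema
supabase migration up --db-url "$DB_URL"           # apply pending migrations
```

This sidesteps the hosted-platform linkage entirely: the CLI only needs a reachable connection string.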
]]></description><pubDate>Sat, 20 May 2023 03:50:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=36009220</link><dc:creator>sweatybridge</dc:creator><comments>https://news.ycombinator.com/item?id=36009220</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36009220</guid></item></channel></rss>