<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: evanvolgas</title><link>https://news.ycombinator.com/user?id=evanvolgas</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 14 May 2026 15:12:08 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=evanvolgas" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by evanvolgas in "Launch HN: Ardent (YC P26) – Postgres sandboxes in seconds with zero migration"]]></title><description><![CDATA[
<p>Evan here, from Ardent.<p>It's not uncommon (hex.ai and others do this, as do developers, MCP tools, etc.). One thing we do at Ardent is enable obfuscated read replicas: we strip PII in the replica, so your agents operate on realistic (but not sensitive) data. They can do so without impacting your production database, and fast enough to wire into your CI/CD pipeline.<p>Jeremy is correct, though: the main risk is agents with write access. There have been two high-profile instances in the last year of agents dropping production databases (in one case, even after being given explicit instructions never to do so). While read replicas of a primary DB solve the "agents can't destroy things" problem, they don't cover testing schema migrations (in particular) or updates to the data.</p>
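<p>To give a rough feel for the PII-stripping idea, here is a minimal sketch (a hypothetical illustration, not Ardent's actual implementation): sensitive columns are replaced with deterministic, irreversible tokens while rows are copied into the replica, so joins and uniqueness still behave realistically. The column names and schema here are made up.</p>

```python
import hashlib

# Columns treated as PII in this hypothetical schema.
PII_COLUMNS = {"email", "ssn", "full_name"}

def obfuscate_row(row: dict) -> dict:
    """Return a copy of the row with PII columns replaced by
    deterministic tokens. Same input -> same token, so foreign-key
    joins and unique constraints keep working; the original value
    is not recoverable from the token."""
    masked = {}
    for col, val in row.items():
        if col in PII_COLUMNS and val is not None:
            digest = hashlib.sha256(str(val).encode()).hexdigest()[:12]
            masked[col] = f"{col}_{digest}"
        else:
            masked[col] = val
    return masked

row = {"id": 7, "email": "alice@example.com", "plan": "pro"}
print(obfuscate_row(row))  # id and plan pass through; email is tokenized
```

<p>Determinism is the key design choice: random fake values would break referential integrity across tables, while a keyed or salted hash per column keeps the replica statistically realistic without exposing the underlying data.</p>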
]]></description><pubDate>Wed, 13 May 2026 18:32:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=48125636</link><dc:creator>evanvolgas</dc:creator><comments>https://news.ycombinator.com/item?id=48125636</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48125636</guid></item><item><title><![CDATA[Show HN: Sifaka – Simple AI text improvement through research-backed critique]]></title><description><![CDATA[
<p>Sifaka is an open-source framework that adds reflection and reliability to large language model (LLM) applications.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44629275">https://news.ycombinator.com/item?id=44629275</a></p>
<p>Points: 7</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 20 Jul 2025 21:01:55 +0000</pubDate><link>https://github.com/sifaka-ai/sifaka</link><dc:creator>evanvolgas</dc:creator><comments>https://news.ycombinator.com/item?id=44629275</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44629275</guid></item></channel></rss>