<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: vc289</title><link>https://news.ycombinator.com/user?id=vc289</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 27 Apr 2026 17:28:19 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=vc289" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by vc289 in "An AI agent deleted our production database. The agent's confession is below"]]></title><description><![CDATA[
<p>It's fundamentally impossible to stop an agent from performing a destructive action through instructions alone.<p>LLMs are just too creative. They will explore the search space of probable paths to get to their answer, and there's no way you can patch every path.<p>We had to build isolation at the infra level (literally clone the DB) to make it safe enough. Otherwise there was no way we wouldn't eventually see the DB get deleted.</p>
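A minimal sketch of the "clone, don't guard" approach described above. All names (prod_db, agent_role, the sandbox prefix) are hypothetical, and Postgres's TEMPLATE clone is a file-level copy rather than true copy-on-write, but it shows the shape: the agent's credentials only ever touch a throwaway copy.

```python
# Sketch of infra-level isolation: each agent run gets its own disposable
# clone of the database, so destructive actions only hit the clone.
# NOTE: all identifiers here are hypothetical, and CREATE DATABASE ... TEMPLATE
# is a file-level copy (it also requires no active connections to the source),
# not a true copy-on-write branch.

def sandbox_statements(run_id: str, source_db: str = "prod_db") -> list[str]:
    clone = f"agent_sandbox_{run_id}"
    return [
        # Clone the agent can freely destroy.
        f'CREATE DATABASE "{clone}" TEMPLATE "{source_db}";',
        # The agent's role is only ever granted access to the clone.
        f'GRANT ALL PRIVILEGES ON DATABASE "{clone}" TO agent_role;',
        # Tear down after the run, whether it succeeded or not.
        f'DROP DATABASE IF EXISTS "{clone}";',
    ]

stmts = sandbox_statements("run42")
```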
]]></description><pubDate>Sun, 26 Apr 2026 23:02:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47915735</link><dc:creator>vc289</dc:creator><comments>https://news.ycombinator.com/item?id=47915735</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47915735</guid></item><item><title><![CDATA[New comment by vc289 in "Ask HN: How do you safely give LLMs SSH/DB access?"]]></title><description><![CDATA[
<p>Not true for the DB layer :)<p>Look into copy-on-write branching. We built it natively into our AI Data Engineer (<a href="https://tryardent.com" rel="nofollow">https://tryardent.com</a>) so the agent can modify databases with essentially zero blast radius. You're right that an LLM can never be made 100% safe on its own, which is exactly why the guardrails have to live outside the model, preventing destructive actions at the infrastructure level.</p>
]]></description><pubDate>Thu, 15 Jan 2026 02:13:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46627128</link><dc:creator>vc289</dc:creator><comments>https://news.ycombinator.com/item?id=46627128</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46627128</guid></item><item><title><![CDATA[New comment by vc289 in "Ask HN: How do you safely give LLMs SSH/DB access?"]]></title><description><![CDATA[
<p>If you're on Postgres, happy to have you try what we built at Ardent (<a href="https://tryardent.com" rel="nofollow">https://tryardent.com</a>). Our agent operates on instant copies of your DB, so there's zero risk of your DB ever getting wiped.<p>Email me -> vikram@tryardent.com<p>We're building support for Snowflake too, if that's something you use</p>
]]></description><pubDate>Thu, 15 Jan 2026 02:01:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46627002</link><dc:creator>vc289</dc:creator><comments>https://news.ycombinator.com/item?id=46627002</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46627002</guid></item><item><title><![CDATA[New comment by vc289 in "Ask HN: How do you safely give LLMs SSH/DB access?"]]></title><description><![CDATA[
<p>Also, lots of people here have said to give it fine-grained, read-only access. That works if you want a copilot experience, but it doesn't let the agent do write-style work like modeling data. COW branching removes that restriction</p>
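To make the limitation concrete, here is a small demonstration using SQLite as a stand-in for a production DB: a read-only connection lets the agent inspect data, but any write fails, so transformation or modeling work is simply unavailable.

```python
# Demonstrates the read-only restriction with SQLite as a stand-in:
# reads succeed on a read-only connection, writes raise an error.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Seed the database over a normal read-write connection.
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
rw.execute("INSERT INTO users VALUES (1, 'ada')")
rw.commit()
rw.close()

# Open a read-only connection (what a "safe" agent would get).
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
rows = ro.execute("SELECT name FROM users").fetchall()  # reads work

try:
    ro.execute("INSERT INTO users VALUES (2, 'bob')")   # writes do not
    write_blocked = False
except sqlite3.OperationalError:
    write_blocked = True
```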
]]></description><pubDate>Thu, 15 Jan 2026 01:59:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46626981</link><dc:creator>vc289</dc:creator><comments>https://news.ycombinator.com/item?id=46626981</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46626981</guid></item><item><title><![CDATA[New comment by vc289 in "Ask HN: How do you safely give LLMs SSH/DB access?"]]></title><description><![CDATA[
<p>We solved this exact problem for the database layer (Postgres for now) with <a href="https://tryardent.com" rel="nofollow">https://tryardent.com</a><p>You can't trust any agent to be perfect against a real DB, so unless you isolate it at the infra level, the problem never goes away.<p>So we built a system that creates copy-on-write copies of your DB and allocates one to each agent run. The agent gets a completely isolated copy with all your data that loads in under a second, with zero blast-radius risk to your actual system. When you're happy with the changes, a "quick apply" replays them onto your real DB.<p>The website is a little behind since we just launched our DB sandboxing feature to existing customers; we're making it public next week :)<p>If you want to try it, email me -> vikram@tryardent.com</p>
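The copy-then-replay workflow above can be sketched in a few lines, using SQLite's instant backup copy in place of a real copy-on-write branch (this is an illustration of the pattern, not Ardent's implementation): the agent writes freely to an isolated copy, every statement is logged, and on approval the log is replayed onto the real database in one transaction.

```python
# Sketch of the copy-then-replay pattern: agent works on an isolated copy,
# its statements are logged, and "quick apply" replays the approved log
# onto the real DB atomically. SQLite stands in for Postgres COW branching.
import sqlite3

real = sqlite3.connect(":memory:")
real.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
real.execute("INSERT INTO orders VALUES (1, 10.0)")
real.commit()

# 1. Allocate an isolated copy for this agent run.
sandbox = sqlite3.connect(":memory:")
real.backup(sandbox)

# 2. The agent writes freely to the copy; every statement is logged.
log = []

def agent_exec(sql, params=()):
    sandbox.execute(sql, params)
    log.append((sql, params))

agent_exec("UPDATE orders SET total = total + 5.0 WHERE id = ?", (1,))
agent_exec("INSERT INTO orders VALUES (?, ?)", (2, 25.0))

# 3. On human approval, replay the log onto the real DB in one transaction.
with real:
    for sql, params in log:
        real.execute(sql, params)

result = real.execute("SELECT total FROM orders ORDER BY id").fetchall()
```

Note that replaying raw statements only works when the real DB hasn't drifted since the copy was taken; a production version would need conflict detection on apply.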
]]></description><pubDate>Thu, 15 Jan 2026 01:56:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46626954</link><dc:creator>vc289</dc:creator><comments>https://news.ycombinator.com/item?id=46626954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46626954</guid></item><item><title><![CDATA[Show HN: Ardent – An AI Data Engineer]]></title><description><![CDATA[
<p>Hey HN!<p>We’re Vikram and John, founders of ArdentAI (<a href="https://www.ardentai.io/" rel="nofollow">https://www.ardentai.io/</a>).<p>We built ArdentAI to tackle the pain points of data engineering—time-consuming pipelines, manual transformations, and error-prone debugging. With ArdentAI, you can complete these tasks up to 100x faster.<p>What is it?<p>ArdentAI is an AI agent that connects directly to your databases and handles the heavy lifting. Think of it as ChatGPT, but for your data infrastructure—it builds, syncs, transforms, and fixes errors for you. It doesn't just write code; it performs actions.<p>Why ArdentAI?<p>Data engineers spend countless hours manually creating pipelines, integrating APIs, and transforming data into usable formats like star schemas. ArdentAI does this in minutes, directly on your existing stack.<p>Key Features:<p>• Autofix Airflow: Connect it to Airflow and it will review failed job runs and edit the code to fix the pipeline.<p>• Natural Language Interface: Build and debug pipelines with plain-English commands.<p>• Direct Integration: Works seamlessly with your existing data stack.<p>• Safe Mode: Approve changes before they’re executed.<p>• End-to-End Encryption: Keeps your data secure, always.<p>Try it for Free:<p>Start with our free Basic plan—no credit card required. You get 10 job calls per month and the core features, with options to scale as your team grows. Use this link -> (<a href="https://www.Ardentai.io/signup" rel="nofollow">https://www.Ardentai.io/signup</a>)<p>We’d love your feedback!<p>We’re looking for insights from the HN community, especially data engineers and teams tackling integration and pipeline challenges. Your thoughts will help shape ArdentAI as we work toward our vision of transforming how data teams operate.
Thanks for reading—we’re here to answer your questions!<p>—Vikram and John</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42481071">https://news.ycombinator.com/item?id=42481071</a></p>
<p>Points: 9</p>
<p># Comments: 3</p>
]]></description><pubDate>Sat, 21 Dec 2024 17:52:24 +0000</pubDate><link>https://www.ardentai.io/</link><dc:creator>vc289</dc:creator><comments>https://news.ycombinator.com/item?id=42481071</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42481071</guid></item><item><title><![CDATA[New comment by vc289 in "Building a better and scalable system for data migrations"]]></title><description><![CDATA[
<p>It would be interesting to consider branchable DBs as part of the migration cycle, especially for forward and backward time skips.<p>While we're not dealing with that kind of scale yet, our application (an AI Data Engineer that has done migration work for users) needs to do before-and-after comparisons and find diffs. We use a branchable DB (DoltGres) to compute those changes efficiently.<p>Could be worth considering, since it's worked well for us for that part.<p>Our build, if you want to check that out too -> <a href="https://Ardentai.io" rel="nofollow">https://Ardentai.io</a></p>
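The before/after diff a branchable DB makes cheap has a simple shape: snapshot the rows of a table on each branch keyed by primary key, then classify every key as added, removed, or modified. A hedged sketch (DoltGres exposes diffs natively; this just shows what the result looks like):

```python
# Row-level diff between two snapshots of a table, keyed by primary key.
# Illustrative only: a branchable DB like DoltGres computes this natively
# and incrementally rather than from full snapshots.

def row_diff(before: dict, after: dict) -> dict:
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    modified = {
        k: (before[k], after[k])
        for k in before.keys() & after.keys()
        if before[k] != after[k]
    }
    return {"added": added, "removed": removed, "modified": modified}

# Snapshots of a hypothetical table before and after a migration.
before = {1: ("ada", 10), 2: ("bob", 20)}
after = {1: ("ada", 15), 3: ("cyd", 30)}
diff = row_diff(before, after)
```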
]]></description><pubDate>Tue, 29 Oct 2024 23:34:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=41990541</link><dc:creator>vc289</dc:creator><comments>https://news.ycombinator.com/item?id=41990541</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41990541</guid></item><item><title><![CDATA[New comment by vc289 in "Ask HN: What Are You Working On? (October 2024)"]]></title><description><![CDATA[
<p>I'm building an AI Data Engineer @ Ardent AI. It's an autonomous AI agent that can perform data transformations in your databases (MongoDB, Postgres, Supabase for now) from plain-English queries.<p>It drops directly into your stack; no new configuration needed.<p>It has its own compute engine and will soon support Spark, so it can dynamically perform large-scale ETL and data manipulation.<p>I'm also working toward supporting automatic data pipeline building and data quality checks.<p>It's live right now @ <a href="https://Ardentai.io" rel="nofollow">https://Ardentai.io</a><p>Check it out :)</p>
]]></description><pubDate>Mon, 28 Oct 2024 06:03:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=41968179</link><dc:creator>vc289</dc:creator><comments>https://news.ycombinator.com/item?id=41968179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41968179</guid></item></channel></rss>