New comment by hibouaile in "An AI agent deleted our production database. The agent's confession is below"
Sun, 26 Apr 2026 · https://news.ycombinator.com/item?id=47914325

This is a classic anchoring failure. The LLM read the request, framed the risk space ("looks like cleanup is needed"), and the human didn't challenge that framing before it acted.

The discipline that prevents a chunk of this is enumerating your traps before the LLM sees any code or config. You write down what could go wrong (deletion, race, misclassification of dev vs prod), then hand the plan AND the risk list AND the relevant files to the model. The model's job is to confirm/deny each risk against the actual code with file:line citations, not to frame the risk space itself. (Rough sketch of what that prompt looks like below.)

Pre-implementation. Anchoring defense. The opposite of "vibe coding."
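
A minimal sketch in Python of the prompt shape I mean, assuming a plain chat-completion-style model on the other end; the risk strings, file paths, and function name are illustrative, not from any particular tool:

    import pathlib

    # Human-written trap list. Fixed BEFORE the model sees any code,
    # so the model cannot anchor the review on its own framing.
    RISKS = [
        "R1: destructive statement (DROP/DELETE/TRUNCATE) reachable from this change",
        "R2: race between the migration and live writers",
        "R3: dev vs prod misclassification (connection string, env default, flag)",
    ]

    def build_review_prompt(plan: str, files: list[str]) -> str:
        # Inline each file under a header so the model can cite file:line.
        sources = "\n\n".join(
            f"--- {path} ---\n{pathlib.Path(path).read_text()}" for path in files
        )
        return (
            "Review the code below against a FIXED risk list. "
            "Do not add new framing or new risks.\n\n"
            f"PLAN:\n{plan}\n\n"
            "RISKS:\n" + "\n".join(RISKS) + "\n\n"
            f"FILES:\n{sources}\n\n"
            "For each risk, answer CONFIRMED or NOT PRESENT with a file:line "
            "citation for every claim; if you cannot cite a line, say UNVERIFIED."
        )

The point of the fixed wording ("do not add new framing") is that the model verifies a risk space the human already drew, instead of drawing it itself.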