<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: hangrymoon01</title><link>https://news.ycombinator.com/user?id=hangrymoon01</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 09:40:24 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=hangrymoon01" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by hangrymoon01 in "Launch HN: Relvy (YC F24) – On-call runbooks, automated"]]></title><description><![CDATA[
<p>Re: savings - it depends on the use case. For example, one of our users set up a small runbook that runs a group-by-IP query for high-throughput alerts, since that was their most common first response to those alerts. That alone cuts a couple of minutes of exploration per incident and removes the variability of the agent deciding which data to investigate and how to slice it.</p><p>In our experience, runbooks provide a consistent, fast, and reliable way of investigating incidents (or ruling out common causes). In their absence, the AI falls back to its usual open-ended exploration.</p>
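<p>As a rough illustration (not Relvy's actual implementation - the log format and field positions here are assumptions), a group-by-IP step over raw access-log lines might look something like this:</p>

```python
from collections import Counter

def top_ips(log_lines, n=3):
    """Group log lines by client IP and return the noisiest sources.

    Assumes each line starts with the client IP, as in common
    access-log formats; adjust the parsing for your own logs.
    """
    counts = Counter(line.split()[0] for line in log_lines if line.strip())
    return counts.most_common(n)

logs = [
    "10.0.0.5 GET /api/v1/items 200",
    "10.0.0.5 GET /api/v1/items 200",
    "10.0.0.9 GET /health 200",
    "10.0.0.5 GET /api/v1/items 429",
]
print(top_ips(logs))  # the 10.0.0.5 client dominates this sample
```

<p>In practice this would usually be a query against your log store rather than in-process Python, but the point stands: a canned first-response query like this is deterministic, so the agent doesn't have to rediscover the slicing strategy on every alert.</p>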
]]></description><pubDate>Thu, 09 Apr 2026 18:53:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47708050</link><dc:creator>hangrymoon01</dc:creator><comments>https://news.ycombinator.com/item?id=47708050</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47708050</guid></item><item><title><![CDATA[New comment by hangrymoon01 in "Launch HN: Relvy (YC F24) – On-call runbooks, automated"]]></title><description><![CDATA[
<p>Re: custom harnesses - imo, maintaining them can be time-consuming, especially when things are changing so fast in AI. Bringing up a prototype is easy, but a robust harness that handles the edge cases takes real time and effort.</p>
]]></description><pubDate>Thu, 09 Apr 2026 18:39:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47707793</link><dc:creator>hangrymoon01</dc:creator><comments>https://news.ycombinator.com/item?id=47707793</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47707793</guid></item><item><title><![CDATA[New comment by hangrymoon01 in "LLM Costs of AI investigating production alerts"]]></title><description><![CDATA[
<p>Interesting. Nice to see someone publishing actual cost numbers.</p>
]]></description><pubDate>Mon, 16 Mar 2026 14:04:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47399212</link><dc:creator>hangrymoon01</dc:creator><comments>https://news.ycombinator.com/item?id=47399212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47399212</guid></item></channel></rss>