<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: potter098</title><link>https://news.ycombinator.com/user?id=potter098</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 20 Apr 2026 21:49:53 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=potter098" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by potter098 in "Show HN: Agent-cache – Multi-tier LLM/tool/session caching for Valkey and Redis"]]></title><description><![CDATA[
<p>I’d be curious how you’re handling freshness for tool caches. Exact-match caching works well for pure functions, but once a tool depends on external state I’d want a TTL or an invalidation hook; otherwise the hit rate can look great while the answer is already stale.</p>
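<p>For concreteness, a minimal sketch of the TTL-plus-hook idea. All names here are illustrative, not agent-cache’s actual API:</p>

```python
import hashlib
import json
import time


class ToolCache:
    """Exact-match cache for tool calls, with a TTL plus an optional
    caller-supplied invalidation hook for tools that read external state.
    Illustrative sketch only; not agent-cache's API."""

    def __init__(self, default_ttl=60.0):
        self.default_ttl = default_ttl
        self._store = {}  # key -> (expires_at, value)

    def _key(self, tool, args):
        # Canonical JSON so {"a":1,"b":2} and {"b":2,"a":1} hash alike.
        blob = json.dumps({"tool": tool, "args": args}, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def get(self, tool, args, is_stale=None):
        key = self._key(tool, args)
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        # TTL check: evict entries past their deadline.
        if time.monotonic() > expires_at:
            del self._store[key]
            return None
        # Invalidation hook: a cheap staleness probe supplied by the caller.
        if is_stale is not None and is_stale(value):
            del self._store[key]
            return None
        return value

    def put(self, tool, args, value, ttl=None):
        expires_at = time.monotonic() + (ttl if ttl is not None else self.default_ttl)
        self._store[self._key(tool, args)] = (expires_at, value)
```

<p>The point is that hit rate alone stops being the metric once either path can evict: you’d want to count hook evictions separately from TTL expiries to see how stale the cache actually was.</p>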
]]></description><pubDate>Fri, 17 Apr 2026 10:32:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47804422</link><dc:creator>potter098</dc:creator><comments>https://news.ycombinator.com/item?id=47804422</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47804422</guid></item><item><title><![CDATA[New comment by potter098 in "Show HN: Libretto – Making AI browser automations deterministic"]]></title><description><![CDATA[
<p>The interesting part to me is recovery after the first generated script goes stale. I’d be curious whether you measure success as 'initial generation works' or 'the same flow still passes after small DOM/layout changes a week later', since that seems like the boundary between a neat demo and something a team can rely on.</p>
]]></description><pubDate>Thu, 16 Apr 2026 10:34:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47791121</link><dc:creator>potter098</dc:creator><comments>https://news.ycombinator.com/item?id=47791121</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47791121</guid></item><item><title><![CDATA[New comment by potter098 in "Measure coding productivity with this Claude Code Plugin"]]></title><description><![CDATA[
<p>The weighted-diff idea is more interesting than raw LoC, but the real challenge is separating throughput from rework. A team can look 'more productive' simply because the agent helped them generate more change volume, even if review burden or rollback risk also went up. The metric gets a lot more credible if you pair it with something like review acceptance rate, revert rate, or time-to-merge stability rather than presenting one weighted number alone.</p>
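<p>Roughly what I mean by pairing the numbers, as a sketch (field names are made up, not the plugin’s):</p>

```python
from dataclasses import dataclass


@dataclass
class PeriodStats:
    weighted_diff: float  # weighted change volume for the period
    merged_prs: int
    reverted_prs: int


def productivity_report(stats: PeriodStats) -> dict:
    """Report change volume alongside a rework signal instead of a
    single weighted number. Illustrative only."""
    revert_rate = (
        stats.reverted_prs / stats.merged_prs if stats.merged_prs else 0.0
    )
    # Durable volume: discount throughput by the share that was rolled back.
    durable_volume = stats.weighted_diff * (1.0 - revert_rate)
    return {
        "weighted_diff": stats.weighted_diff,
        "revert_rate": revert_rate,
        "durable_volume": durable_volume,
    }
```

<p>Two teams with the same weighted diff but revert rates of 5% and 25% are not equally productive, and one combined scalar hides that.</p>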
]]></description><pubDate>Sat, 11 Apr 2026 16:55:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47732079</link><dc:creator>potter098</dc:creator><comments>https://news.ycombinator.com/item?id=47732079</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47732079</guid></item><item><title><![CDATA[New comment by potter098 in "The Vercel plugin on Claude Code wants to read your prompts"]]></title><description><![CDATA[
<p>The bigger issue here isn’t telemetry by itself; it’s shipping a context-insensitive integration into a tool people use across unrelated repos. If the overhead is real, that turns a convenience plugin into something teams have to actively defend against.</p>
]]></description><pubDate>Thu, 09 Apr 2026 16:02:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47705394</link><dc:creator>potter098</dc:creator><comments>https://news.ycombinator.com/item?id=47705394</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47705394</guid></item></channel></rss>