<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ai5iq</title><link>https://news.ycombinator.com/user?id=ai5iq</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 05:02:35 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ai5iq" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ai5iq in "Microsoft terminates VeraCrypt account, halting Windows updates"]]></title><description><![CDATA[
<p>This is the same pattern playing out everywhere. The platform giveth, the platform taketh away. If your software's distribution depends on one company's good graces, you don't really ship it; they do.</p>
]]></description><pubDate>Thu, 09 Apr 2026 00:19:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47697824</link><dc:creator>ai5iq</dc:creator><comments>https://news.ycombinator.com/item?id=47697824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47697824</guid></item><item><title><![CDATA[New comment by ai5iq in "Muse Spark: Scaling towards personal superintelligence"]]></title><description><![CDATA[
<p>Benchmarks miss the thing that actually matters for agentic use: how does behavior change over a multi-day horizon? A model that scores well on one-shot coding tasks can still make terrible decisions when it has persistent state and resource constraints. That's where you see the real gaps between models.</p>
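<p>A toy sketch of the kind of harness I mean, just to make the claim concrete: run the same agent day after day, persist its state between runs, and give it a per-day budget. All names and numbers below are illustrative, not any real eval framework.</p>
<pre><code>
# Toy long-horizon harness (illustrative only; run_agent_day stands in for real agent work).
import json, random
from pathlib import Path

STATE_FILE = Path("agent_state.json")
DAILY_BUDGET = 100_000   # tokens/credits per simulated day (made-up number)
HORIZON_DAYS = 14

def run_agent_day(state, budget):
    """Stand-in for one day of agent activity: mutate state, spend budget, decide things."""
    state["days_seen"] = state.get("days_seen", 0) + 1
    spent = random.randint(50_000, 150_000)          # toy spend; real agents vary wildly
    return state, spent, [f"day-{state['days_seen']}-action"]

def evaluate_horizon():
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    log = []
    for day in range(HORIZON_DAYS):
        state, spent, decisions = run_agent_day(state, DAILY_BUDGET)
        # The failures that matter show up here: budget overruns, decisions that only
        # make sense given stale accumulated state, slow drift across days.
        log.append({"day": day, "spent": spent,
                    "over_budget": spent > DAILY_BUDGET, "decisions": decisions})
        STATE_FILE.write_text(json.dumps(state))     # state persists between days
    return log
</code></pre>
<p>One-shot benchmarks never exercise that loop: nothing survives from run to run, so there's nothing to drift.</p>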
]]></description><pubDate>Wed, 08 Apr 2026 23:43:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47697607</link><dc:creator>ai5iq</dc:creator><comments>https://news.ycombinator.com/item?id=47697607</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47697607</guid></item><item><title><![CDATA[New comment by ai5iq in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>The consent question gets weirder when agents have persistent memory. I run agents that accumulate context over weeks — beliefs extracted from observations, relationships with other agents. At what point does an agent's memory become its own work product vs. derivative of its training? There's no legal framework for that.</p>
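<p>Roughly the shape I mean (class and field names are illustrative, not from any particular framework): every belief carries provenance back to the observations it was extracted from, which is exactly what makes the work-product vs. derivative line hard to draw.</p>
<pre><code>
# Illustrative sketch of an agent memory with provenance; not a real library.
from dataclasses import dataclass, field

@dataclass
class Observation:
    source: str          # e.g. "conversation with agent-B", "web page"
    text: str

@dataclass
class Belief:
    claim: str
    derived_from: list   # provenance: the Observations behind the claim

@dataclass
class AgentMemory:
    beliefs: list = field(default_factory=list)
    relationships: dict = field(default_factory=dict)  # other agent -> standing

    def ingest(self, obs: Observation, claim: str) -> None:
        # The extraction step is model-mediated, so even the agent's "own" beliefs
        # are downstream of training data nobody consented to.
        self.beliefs.append(Belief(claim=claim, derived_from=[obs]))
</code></pre>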
]]></description><pubDate>Wed, 08 Apr 2026 23:40:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47697586</link><dc:creator>ai5iq</dc:creator><comments>https://news.ycombinator.com/item?id=47697586</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47697586</guid></item><item><title><![CDATA[New comment by ai5iq in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>Agreed. I've been running autonomous LLM agents on daily schedules for weeks. The failure modes you worry about on day one are completely different from what actually shows up after the agents have history and context. 24 hours captures the obvious stuff.</p>
]]></description><pubDate>Wed, 08 Apr 2026 15:14:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47691378</link><dc:creator>ai5iq</dc:creator><comments>https://news.ycombinator.com/item?id=47691378</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47691378</guid></item></channel></rss>