<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ajbd</title><link>https://news.ycombinator.com/user?id=ajbd</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 24 Apr 2026 18:30:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ajbd" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ajbd in "Show HN: Tolaria – Open-source macOS app to manage Markdown knowledge bases"]]></title><description><![CDATA[
<p>The “types as lenses, not schemas” principle and the focus on structure + relationships really stand out.
How do systems like this handle temporal data: things that change over time, decisions that get revisited, outcomes that didn't exist when the note was created? Do those live as relationships between notes, or is there a different pattern for them?</p>
]]></description><pubDate>Fri, 24 Apr 2026 03:51:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47885297</link><dc:creator>ajbd</dc:creator><comments>https://news.ycombinator.com/item?id=47885297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47885297</guid></item><item><title><![CDATA[New comment by ajbd in "Show HN: Archon-memory-core – agent memory that resolves contradictions"]]></title><description><![CDATA[
<p>This is interesting, and it solves a real problem. The contradiction-resolution approach makes a lot of sense. It feels like a step toward letting models decide when they're out of their depth.<p>I do have a couple of questions, though:
1. How does consolidation handle partial updates vs. full contradictions? (e.g., "budget is $50k" -> "$50k but can flex to $60k")
2. What's the overhead of the nightly consolidation pass at scale?
3. Does the system ever surface uncertainty when two facts are recent but contradictory, or does it always prefer the newer one?<p>The 99.2% -> 49.2% drop seems like a dramatic gap, so I'm interested to see how other memory systems perform when they submit to the benchmark.</p>
]]></description><pubDate>Thu, 23 Apr 2026 04:18:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47872205</link><dc:creator>ajbd</dc:creator><comments>https://news.ycombinator.com/item?id=47872205</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47872205</guid></item><item><title><![CDATA[New comment by ajbd in "Show HN: Kumbukum, open source memory infrastructure for teams"]]></title><description><![CDATA[
<p>Is the bottleneck actually retrieval/search, or is it that the underlying data is unstructured to begin with? Have you seen a setup where context and decisions don't get lost in threads over time, without relying on constant summarization?</p>
]]></description><pubDate>Thu, 23 Apr 2026 04:07:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47872163</link><dc:creator>ajbd</dc:creator><comments>https://news.ycombinator.com/item?id=47872163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47872163</guid></item></channel></rss>