<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mvyshnyvetska</title><link>https://news.ycombinator.com/user?id=mvyshnyvetska</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 17:43:41 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mvyshnyvetska" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Show HN: Breathe-Memory – Associative memory injection for LLMs (not RAG)]]></title><description><![CDATA[
<p>LLMs forget. The standard fix is RAG — retrieve chunks, stuff them in. It works until it doesn't: irrelevant chunks waste tokens, summaries lose structure, and nothing actually models how memory works.<p>Breathe-memory takes a different approach: associative injection. Before each LLM call, it extracts anchors from the user's message (entities, temporal references, emotional signals), traverses a concept graph via BFS, runs optional vector search, and injects only what's relevant — typically in under 60 ms. A rough sketch of the pipeline is included at the end of this post.<p>When context fills up, instead of summarizing, it extracts a structured graph: topics, decisions, open questions, artifacts. This preserves the semantic structure that summaries destroy.<p>The whole thing is ~1500 lines of Python, interface-based, with zero mandatory dependencies. Plug in any database, any LLM, any vector store. The reference implementation uses PostgreSQL + pgvector.<p><a href="https://github.com/tkenaz/breathe-memory" rel="nofollow">https://github.com/tkenaz/breathe-memory</a><p>We've been running this in production for several months. We're open-sourcing it because we think the approach (injection over retrieval) is underexplored and deserves more attention.<p>We've also posted a more human-readable article about memory injection, if you want to see the thinking under the hood:
  <a href="https://medium.com/towards-artificial-intelligence/beyond-rag-building-memory-injections-for-your-ai-assistants-ceedcea20419" rel="nofollow">https://medium.com/towards-artificial-intelligence/beyond-ra...</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47530306">https://news.ycombinator.com/item?id=47530306</a></p>
<p>Points: 6</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 26 Mar 2026 13:36:32 +0000</pubDate><link>https://github.com/tkenaz/breathe-memory</link><dc:creator>mvyshnyvetska</dc:creator><comments>https://news.ycombinator.com/item?id=47530306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47530306</guid></item><item><title><![CDATA[New comment by mvyshnyvetska in "Show HN: Semantic-diff – understanding intent, risk and impact behind Git diffs"]]></title><description><![CDATA[
<p>Yeah, that's more process-tooling territory — keeping this dev-focused for now.</p>
]]></description><pubDate>Wed, 28 Jan 2026 07:42:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46792234</link><dc:creator>mvyshnyvetska</dc:creator><comments>https://news.ycombinator.com/item?id=46792234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46792234</guid></item><item><title><![CDATA[New comment by mvyshnyvetska in "Show HN: Semantic-diff – understanding intent, risk and impact behind Git diffs"]]></title><description><![CDATA[
<p>Update: just shipped --brief flag. Thanks for the push!</p>
]]></description><pubDate>Tue, 27 Jan 2026 17:04:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46782806</link><dc:creator>mvyshnyvetska</dc:creator><comments>https://news.ycombinator.com/item?id=46782806</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46782806</guid></item><item><title><![CDATA[New comment by mvyshnyvetska in "Show HN: Semantic-diff – understanding intent, risk and impact behind Git diffs"]]></title><description><![CDATA[
<p>Fair points :). The naming is 'evolutionary' — it started as 'semantic diff' because it analyzes meaning, not just lines, but 'commit review' is more accurate for what it does now.
You're right about the redundancy — the same issue appearing in 3 sections is noise. Adding output config (which sections to include, verbosity level) is on the list.
The 'why was this deleted' questions — yeah, the tool can't answer those, but surfacing the question for the reviewer has value. At least you know to ask the author.
Good callout on dependabot. Worth reconsidering.
Thanks for actually trying it and giving specific feedback.</p>
]]></description><pubDate>Tue, 27 Jan 2026 16:35:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46782297</link><dc:creator>mvyshnyvetska</dc:creator><comments>https://news.ycombinator.com/item?id=46782297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46782297</guid></item><item><title><![CDATA[Show HN: Semantic-diff – understanding intent, risk and impact behind Git diffs]]></title><description><![CDATA[
<p>Hi HN — I built semantic-diff for myself, and it turned out surprisingly useful.
Regular git diff shows what changed. semantic-diff tries to answer:
– why was this change made?
– what could break?
– what should the reviewer focus on?
It gives you a ranking from critical to low, with review questions prioritized.
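To make the critical-to-low ranking concrete, here is a toy scoring sketch. It is emphatically not semantic-diff's actual algorithm, just one way such a ranking could be computed:
<pre><code># Toy scoring sketch, NOT semantic-diff's actual algorithm: rank changed
# files from critical to low using crude risk signals.
RISKY_HINTS = ("auth", "migration", "payment", "crypto", "config")

def risk_score(path, added, removed):
    score = added + 2 * removed              # deletions often hide more risk
    if any(hint in path.lower() for hint in RISKY_HINTS):
        score *= 3                           # weight sensitive areas up
    return score

def rank_changes(changes):
    # changes: list of (path, lines_added, lines_removed) tuples
    scored = sorted(changes, key=lambda c: -risk_score(*c))
    labels = ("critical", "high", "medium", "low")
    return [(labels[min(i, 3)], path) for i, (path, _, _) in enumerate(scored)]

print(rank_changes([("src/auth/session.py", 12, 40), ("README.md", 5, 1)]))
</code></pre>
In the real tool the signals come from semantic analysis of the change, not line counts.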
The funny part: I ran it on its own commits during development. The tool roasted me harder than any reviewer I've had )))
Runs locally, hooks into pre-push, also works as a GitHub Action.
Would love feedback, especially from people doing code review at scale.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46780342">https://news.ycombinator.com/item?id=46780342</a></p>
<p>Points: 2</p>
<p># Comments: 6</p>
]]></description><pubDate>Tue, 27 Jan 2026 14:25:34 +0000</pubDate><link>https://github.com/tkenaz/semantic_diff</link><dc:creator>mvyshnyvetska</dc:creator><comments>https://news.ycombinator.com/item?id=46780342</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46780342</guid></item><item><title><![CDATA[New comment by mvyshnyvetska in "Show HN: I built a local fuzzing tool to red-team LLM agents (Python, SQLite)"]]></title><description><![CDATA[
<p>Interesting approach. A few questions:<p>How do you handle false positives from mutation strategies like Base64 or token smuggling? In my experience, a lot of "successful" jailbreaks from automated fuzzing don't actually produce harmful outputs — the model just gets confused and outputs gibberish that technically matches a keyword filter.<p>Also curious about your payload curation — are these tested against specific model families, or generic? The attack surface differs quite a bit between Claude, GPT-4, and open-source models.<p>The local-first angle is smart. Most enterprises I've talked to won't send their system prompts to a third-party SaaS for obvious reasons.</p>
]]></description><pubDate>Sat, 06 Dec 2025 19:07:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46175769</link><dc:creator>mvyshnyvetska</dc:creator><comments>https://news.ycombinator.com/item?id=46175769</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46175769</guid></item><item><title><![CDATA[A three-layer memory architecture for long-running agents]]></title><description><![CDATA[
<p>Anthropic's recent piece on effective harnesses for long-running agents hit close to home. We've been wrestling with the same problems — agents that try to one-shot everything, declare victory prematurely, and leave chaos for the next session to clean up. But we solved some of these problems differently. Here's what's working for us, what isn't yet, and where we respectfully disagree with the proposed solutions.<p>The Memory Problem: Three Layers Beat One 
Anthropic's solution is a progress.txt file plus git history. It works, but it's flat. 
We use three layers instead:
Layer 1: Model actualization. A semantic memory system that helps the orchestrating agent understand "what are we building and why." This is the soft layer.
Layer 2: Structured tracking. Think Jira meets Git, but for AI agents: tasks are stored with metadata (blockers, decision paths, dependencies, progress state). The agent doesn't just know what to do next — it understands the logic of how we got here and where we're going.
Layer 3: Git. The non-rotting charm of classic version control.
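As a rough illustration of the split (field names are ours, not a prescribed schema), the three layers might look like this in code:
<pre><code># Rough illustration of the three-layer split; field names are ours,
# not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class SemanticContext:        # Layer 1: "what are we building and why"
    goal: str
    rationale: str
    salient_notes: list = field(default_factory=list)

@dataclass
class TrackedTask:            # Layer 2: Jira-meets-Git structured tracking
    title: str
    blockers: list = field(default_factory=list)
    dependencies: list = field(default_factory=list)
    decision_path: list = field(default_factory=list)
    state: str = "todo"       # todo / in_progress / blocked / done

# Layer 3 is plain git: the agent reads history instead of reinventing it.
def brief_for_agent(ctx, tasks):
    open_tasks = [t.title for t in tasks if t.state != "done"]
    return f"Goal: {ctx.goal}\nWhy: {ctx.rationale}\nOpen: {open_tasks}"
</code></pre>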
The key insight: separating "understanding" from "tracking" from "versioning" reduces cognitive load on the agent.<p>On Premature Victory: Prompt Engineering > Programmatic Constraints
Anthropic's approach to the "agent declares victory too early" problem is a JSON file with passes: true/false flags and strongly-worded instructions not to edit it. 
Our approach: make the supervising agent responsible for proper task breakdown into what we call atomic structures — concrete enough to be unambiguous, but not so detailed that they micromanage the implementation. 
Completion criteria live in the sub-agent's prompt, not in the task definition. The sub-agent knows: tests must pass, lint must be clean, migrations applied. The supervising agent doesn't repeat this for every task — it's baked into how the sub-agent operates.
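For illustration (the prompt text is ours, not a template from our system), "baked into how the sub-agent operates" could look like:
<pre><code># Illustrative only: completion criteria baked into the sub-agent's
# system prompt, while task definitions stay criteria-free.
SUBAGENT_SYSTEM_PROMPT = """\
You are a coding sub-agent. A task is complete ONLY when:
- the test suite passes,
- lint is clean,
- pending migrations are applied.
Verify all three yourself before reporting completion."""

def make_task(title, description):
    # No per-task completion flags: the criteria live in the prompt above.
    return {"title": title, "description": description}
</code></pre>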
Yes, this requires better prompts. But it also produces more robust behavior: the agent develops something closer to judgment, rather than just following rules it's been told not to break.<p>The Multi-Agent Question: Minimum Viable Agents
Anthropic asks whether a single general-purpose agent or multi-agent architecture works better. 
Our answer: use as few agents as possible. 
Every handoff between agents is a potential break in reasoning continuity. 
For small projects or clean microservice architectures: two agents. A strategic orchestrator and a coding agent. 
For complex systems: add a code reviewer. Three agents maximum.<p>What Anthropic Missed: Human-in-the-Loop as Synchronization
The Anthropic piece treats human involvement as bookends — you provide the prompt, you review the result. 
We built it differently: the user can intervene at any point, and the sub-agent's completion report doesn't reach the orchestrator until the user validates it.
This started as a bug. It became our favorite feature.
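A sketch of the gate (the orchestrator API here is hypothetical):
<pre><code># Sketch of the accidental-feature-turned-gate: the completion report
# is held until a human approves it. The orchestrator API is hypothetical.
def deliver_report(report, orchestrator):
    print("--- sub-agent reports done ---")
    print(report)
    verdict = input("Forward to orchestrator? [y/N] ").strip().lower()
    if verdict == "y":
        orchestrator.receive(report)  # hypothetical call
        return True
    return False  # the orchestrator never sees an unvalidated report
</code></pre>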
Why it matters: we're not limiting the agent's autonomy; we're synchronizing understanding between human and AI at critical checkpoints.<p>What We Haven't Solved Yet
Honesty time: end-to-end testing is still manual for us. Lint passes, unit tests run, but visual verification happens in a separate session with a human watching.
Anthropic's Puppeteer integration for browser testing is genuinely useful. We haven't automated that layer yet. It's on the roadmap, right after "pay rent" and "sleep occasionally."<p>The Takeaway
Long-running agents are hard. Anthropic's solutions work. Ours work differently.
The philosophical difference: they lean toward programmatic constraints (JSON files, explicit flags, structured formats the model "can't" edit). We lean toward better task decomposition and human checkpoints.
Neither approach is wrong — different contexts, different constraints. We're sharing what worked in ours. If you're building something similar, maybe it saves you a few iterations.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46097759">https://news.ycombinator.com/item?id=46097759</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 30 Nov 2025 16:05:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46097759</link><dc:creator>mvyshnyvetska</dc:creator><comments>https://news.ycombinator.com/item?id=46097759</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46097759</guid></item><item><title><![CDATA[New comment by mvyshnyvetska in "Ask HN: How do you handle long-term memory with AI tools like Cursor and Claude?"]]></title><description><![CDATA[
<p>Hit exactly this problem a while ago (having already buried 5 or 6 versions of memory systems along the way).<p>Generic summaries or RAG results often feel useless because models optimize for "what would be useful to explain to anyone" rather than "what was significant in this specific context."<p>What worked for me: separate semantic context (the "why are we here" layer) from structured tracking (decisions, blockers, dependencies). The semantic layer captures salience — what mattered emotionally or strategically — and the tracking layer handles facts, or even snapshots of the latest state of the process.<p>The model does not have to guess what was important. The memory architecture can encode it.</p>
]]></description><pubDate>Fri, 28 Nov 2025 13:06:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46078316</link><dc:creator>mvyshnyvetska</dc:creator><comments>https://news.ycombinator.com/item?id=46078316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46078316</guid></item><item><title><![CDATA[New comment by mvyshnyvetska in "Effective harnesses for long-running agents"]]></title><description><![CDATA[
<p>We've been solving similar problems differently. Instead of a flat progress.txt, we use three layers: semantic context (what are we building and why), structured intention tracking (blockers, dependencies, decision paths), and git for versioning. Separating "understanding" from "tracking" from "versioning" reduces cognitive load on the agent.
On the premature victory problem — we lean toward better task decomposition and human checkpoints rather than JSON flags the model "can't" edit. Funny thing: we had a bug where the orchestrating agent couldn't see the sub-agent's completion report. While fixing it, we realized — wait, this is actually useful. The human validates what was actually done before the orchestrator sees it. Accidental feature, now intentional. Feels more robust than programmatic constraints.
Anyone else experimenting with multi-session agent architectures? Curious what's working for others.</p>
]]></description><pubDate>Fri, 28 Nov 2025 12:10:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46077956</link><dc:creator>mvyshnyvetska</dc:creator><comments>https://news.ycombinator.com/item?id=46077956</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46077956</guid></item></channel></rss>