<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Manik_agg</title><link>https://news.ycombinator.com/user?id=Manik_agg</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 26 Apr 2026 08:45:19 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Manik_agg" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Manik_agg in "GPT-5.5"]]></title><description><![CDATA[
<p>Recently started using Codex and ChatGPT again due to the Claude model getting nerfed and the rate limits.<p>Tried GPT-5.5 and so far it's good. Zapier also shared an automation benchmark where 5.5 came out on top of the leaderboard:
<a href="https://zapier.com/benchmarks" rel="nofollow">https://zapier.com/benchmarks</a></p>
]]></description><pubDate>Fri, 24 Apr 2026 14:58:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47891186</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=47891186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47891186</guid></item><item><title><![CDATA[New comment by Manik_agg in "Show HN: Tolaria – Open-source macOS app to manage Markdown knowledge bases"]]></title><description><![CDATA[
<p>Hey Luca, heavy Obsidian user here; went through your website and GitHub. I'll definitely try it out. Connecting Codex with Tolaria to manage my knowledge base is something I'm looking forward to trying.</p>
]]></description><pubDate>Fri, 24 Apr 2026 05:30:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47885956</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=47885956</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47885956</guid></item><item><title><![CDATA[New comment by Manik_agg in "GPT-5.5"]]></title><description><![CDATA[
<p>OpenAI finally catching up with Claude</p>
]]></description><pubDate>Fri, 24 Apr 2026 05:05:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47885772</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=47885772</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47885772</guid></item><item><title><![CDATA[Show HN: Core – open-source AI butler that clears your backlog without you]]></title><description><![CDATA[
<p>Hi HN, we're Manik, Manoj and Harshith, and we're building CORE (<a href="https://github.com/RedPlanetHQ/core" rel="nofollow">https://github.com/RedPlanetHQ/core</a>), an open-source AI butler that acts on its own to clear your backlog.<p>Write `[ ] Fix the search auth bug` in a scratchpad. Three minutes later, without you at the keyboard, CORE picks it up, pulls the relevant context from your codebase, drafts a plan in the task description, and spins up a Claude Code session in the background to do the work. You review the output in the task chat and unblock it when it gets stuck.<p>Every AI tool today is reactive. You open a chat, brief the agent, it responds. Before anything moves, you've already done the real work: opened the Sentry error, found the commit, read the Slack thread, grabbed the Linear ticket, and stitched it all together into a prompt. The model isn't the bottleneck. You are.<p>Demo video: <a href="https://www.youtube.com/watch?v=PFk4RJvQg1Y" rel="nofollow">https://www.youtube.com/watch?v=PFk4RJvQg1Y</a><p>CORE removes you from that loop. The interface is a shared scratchpad; think of a page you and a colleague both have open. You write what's on your mind. When you write a checkbox line like `[ ] Fix the search bug`, CORE converts it into a task and starts working on it after a short delay (long enough for you to add context if you want to). No prompt template. No workflow to configure.<p>The reason it can do this without you re-explaining everything: CORE keeps a persistent memory built from your tasks, conversations, and connected apps (Linear, Gmail, GitHub, Slack, etc.). When it spins up a Claude Code session, it arrives with your codebase and project context already loaded.<p>A real example: we wrote `[ ] Create a widget in Linear integration`; about 14 minutes later, CORE had opened a PR.<p>What CORE is _not_: it's not Devin (no autonomous web browsing or shell loops you can't see), and it's not "Claude Code with memory bolted on."
It's the layer above it that decides what should run, gathers the context, hands it to the right agent, and keeps the receipts in one place. Today the agent backend it spins up most often is Claude Code; the orchestration, scratchpad, memory, and integrations are CORE.<p>Open source, self-hostable with `docker compose up`, and it supports multiple models.<p>GitHub: <a href="https://github.com/RedPlanetHQ/core" rel="nofollow">https://github.com/RedPlanetHQ/core</a>
Website: <a href="https://getcore.me">https://getcore.me</a> (you can chat with Harshith's butler there) 
Demo: <a href="https://www.youtube.com/watch?v=PFk4RJvQg1Y" rel="nofollow">https://www.youtube.com/watch?v=PFk4RJvQg1Y</a></p>
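The checkbox-to-task flow described above can be sketched in a few lines. This is a minimal illustration only, assuming a literal `[ ] title` line format; `Task` and `extract_tasks` are hypothetical names, not CORE's actual code.

```python
import re
from dataclasses import dataclass

# Illustrative sketch: detect scratchpad lines like "[ ] Fix the search auth bug"
# and turn each into a task record. Not CORE's actual parser.
CHECKBOX = re.compile(r"^\s*\[ \]\s+(?P<title>.+)$")

@dataclass
class Task:
    title: str
    status: str = "queued"  # a background agent would pick this up after a delay

def extract_tasks(scratchpad: str) -> list[Task]:
    """Scan a scratchpad and create a Task for every unchecked checkbox line."""
    tasks = []
    for line in scratchpad.splitlines():
        m = CHECKBOX.match(line)
        if m:
            tasks.append(Task(title=m.group("title").strip()))
    return tasks

notes = """Random thoughts about search
[ ] Fix the search auth bug
[x] Already done item
[ ] Create a widget in Linear integration
"""
for t in extract_tasks(notes):
    print(t.title)
```

Checked items (`[x]`) are deliberately ignored, so only open work becomes a task.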
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47876724">https://news.ycombinator.com/item?id=47876724</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 23 Apr 2026 15:14:25 +0000</pubDate><link>https://www.getcore.me/</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=47876724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47876724</guid></item><item><title><![CDATA[The AI Memory Solution We All Need (No, It's Not OpenClaw)]]></title><description><![CDATA[
<p>Article URL: <a href="https://chrislema.com/the-ai-memory-solution-we-all-need-no-its-not-openclaw/">https://chrislema.com/the-ai-memory-solution-we-all-need-no-its-not-openclaw/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46844507">https://news.ycombinator.com/item?id=46844507</a></p>
<p>Points: 3</p>
<p># Comments: 3</p>
]]></description><pubDate>Sun, 01 Feb 2026 08:24:36 +0000</pubDate><link>https://chrislema.com/the-ai-memory-solution-we-all-need-no-its-not-openclaw/</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=46844507</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46844507</guid></item><item><title><![CDATA[New comment by Manik_agg in "Building AI Memory at 10M+ Nodes: Architecture, Failures, and Lessons"]]></title><description><![CDATA[
<p>Hey, we already had PostgreSQL, so there was no new infrastructure to manage; it was an easy way to see whether a vector-database change had any value.
- Good enough performance: handles 10M vectors with HNSW indexes adequately
- Open source, and it leverages our existing infrastructure
- Easy future migration: we've wrapped it in a vector service, so it's easy to swap to something else later if needed</p>
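As a rough sketch of that swappable vector-service idea (all class and method names here are my own illustration, not CORE's actual interface): callers depend only on a small abstract store, with pgvector behind one implementation today and anything else behind another tomorrow. The naive in-memory backend below just makes the sketch runnable.

```python
import math
from abc import ABC, abstractmethod

# Hypothetical sketch of a swappable vector-service layer (names are mine, not
# CORE's): application code depends only on VectorStore, so a pgvector-backed
# class could later be replaced by a dedicated vector DB without touching callers.
class VectorStore(ABC):
    @abstractmethod
    def upsert(self, key: str, vec: list[float]) -> None: ...

    @abstractmethod
    def search(self, query: list[float], k: int) -> list[str]: ...

class InMemoryStore(VectorStore):
    """Naive stand-in backend; a pgvector version would issue SQL instead."""
    def __init__(self) -> None:
        self.rows: dict[str, list[float]] = {}

    def upsert(self, key: str, vec: list[float]) -> None:
        self.rows[key] = vec

    def search(self, query: list[float], k: int) -> list[str]:
        # Rank stored keys by cosine similarity to the query vector.
        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.hypot(*a) * math.hypot(*b))
        ranked = sorted(self.rows, key=lambda kk: cosine(self.rows[kk], query), reverse=True)
        return ranked[:k]

store: VectorStore = InMemoryStore()
store.upsert("a", [1.0, 0.0])
store.upsert("b", [0.0, 1.0])
print(store.search([0.9, 0.1], k=1))
```

The point of the abstraction is the migration story: only the `InMemoryStore` class would change in a swap, never the call sites.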
]]></description><pubDate>Wed, 31 Dec 2025 05:27:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46441644</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=46441644</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46441644</guid></item><item><title><![CDATA[New comment by Manik_agg in "Building AI Memory at 10M+ Nodes: Architecture, Failures, and Lessons"]]></title><description><![CDATA[
<p>Author here. We've been building CORE (open source) for the past year. Happy to answer questions about the architecture, reification approach, or what broke at scale.</p>
]]></description><pubDate>Tue, 30 Dec 2025 15:12:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46434062</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=46434062</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46434062</guid></item><item><title><![CDATA[Building AI Memory at 10M+ Nodes: Architecture, Failures, and Lessons]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.getcore.me/building-a-knowledge-graph-memory-system-with-10m-nodes-architecture-failures-and-hard-won-lessons/">https://blog.getcore.me/building-a-knowledge-graph-memory-system-with-10m-nodes-architecture-failures-and-hard-won-lessons/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46434058">https://news.ycombinator.com/item?id=46434058</a></p>
<p>Points: 3</p>
<p># Comments: 3</p>
]]></description><pubDate>Tue, 30 Dec 2025 15:11:27 +0000</pubDate><link>https://blog.getcore.me/building-a-knowledge-graph-memory-system-with-10m-nodes-architecture-failures-and-hard-won-lessons/</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=46434058</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46434058</guid></item><item><title><![CDATA[New comment by Manik_agg in "MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline"]]></title><description><![CDATA[
<p>I agree. Asking an LLM to write for you is lazy, and it also produces sub-par results (I don't know about brain rot).<p>I also like preparing a draft and using an LLM for critique; it helps me figure out blind spots or better ways to articulate.</p>
]]></description><pubDate>Wed, 03 Sep 2025 13:29:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45115551</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=45115551</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45115551</guid></item><item><title><![CDATA[Show HN: Core – open-source memory graph for AI agents (88.24% SOTA on LoCoMo)]]></title><description><![CDATA[
<p>You brainstorm in ChatGPT, debug in Cursor, try a new coding agent and re-explain everything from scratch. With every new AI tool, the cost of context switching grows.<p>2 months ago we launched CORE: an open-source, personal, portable memory layer for individuals designed to work the way humans think: context-rich, evolving, and shareable across AI tools.<p>We evaluated CORE using the LoCoMo benchmark, a rigorous test involving 1,540 questions across 10 multi-turn conversations. Achieving an overall accuracy of *88.24%*, CORE outperformed existing systems in all categories:<p>- Single-hop recall: 91%<p>- Multi-hop reasoning: 85%<p>- Temporal understanding: 88%<p>- Open-domain synthesis: 71%<p>- Overall: 88%<p>For detailed info on our approach, architecture, and the full benchmark results check our blog post.<p>Try it and give us some feedback:<p>- Hosted free tier (HN launch): <a href="https://core.heysol.ai">https://core.heysol.ai</a><p>- Local: docker compose up from <a href="https://github.com/RedPlanetHQ/core" rel="nofollow">https://github.com/RedPlanetHQ/core</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45093272">https://news.ycombinator.com/item?id=45093272</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 01 Sep 2025 15:08:27 +0000</pubDate><link>https://blog.heysol.ai/we-built-memory-for-individuals-and-achieved-sota-on-locomo-benchmark/</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=45093272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45093272</guid></item><item><title><![CDATA[How to make Cursor an Agent that Never Forgets and has better project context]]></title><description><![CDATA[
<p>Article URL: <a href="https://redplanethq.ghost.io/how-to-make-cursor-an-agent-that-never-forgets-and-10x-your-productivity/">https://redplanethq.ghost.io/how-to-make-cursor-an-agent-that-never-forgets-and-10x-your-productivity/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44838557">https://news.ycombinator.com/item?id=44838557</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 08 Aug 2025 16:01:21 +0000</pubDate><link>https://redplanethq.ghost.io/how-to-make-cursor-an-agent-that-never-forgets-and-10x-your-productivity/</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=44838557</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44838557</guid></item><item><title><![CDATA[New comment by Manik_agg in "Ask HN: Can you take your AI's memory with you?"]]></title><description><![CDATA[
<p>You’re right: dumping all memory into the context window doesn’t scale. But with CORE, we don’t do that.<p>We use a reified knowledge graph for memory, where:
- Each fact is a first-class node (with timestamp, source, certainty, etc.)
- Nodes are typed (Person, Tool, Issue, etc.) and richly linked
- Activity (e.g. a Slack message) is decomposed and connected to relevant context<p>This structure allows precise subgraph retrieval based on semantic, temporal, or relational filters—so only what’s relevant is pulled into the context window.
It’s not just RAG over documents. It’s graph traversal over structured memory. The model doesn’t carry memory—it queries what it needs.<p>So yes, the memory problem is real—but reified graphs actually make it tractable.</p>
]]></description><pubDate>Tue, 05 Aug 2025 15:26:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44799237</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=44799237</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44799237</guid></item><item><title><![CDATA[Ask HN: Can you take your AI's memory with you?]]></title><description><![CDATA[
<p>You use ChatGPT, Claude, Gemini, and Grok for writing, coding, and research. But none of them knows what the others have learned about you. This is today's reality: your AI memory is vendor-locked.<p>Currently, only Cursor, Grok, and ChatGPT have memory capabilities, with more following soon. But even when all AI systems develop individual memory, this problem will still exist.<p>I believe we need an independent third-party memory layer for AI systems, one that can share context with any AI app or agent you use.<p>Many people on X are already discussing this memory-layer problem. Some are building memory MCPs that connect to various providers, but this only partially solves the issue, since they aren't deeply integrated with these systems.<p>An open standard for memory should:<p>- Add context to your memory from everywhere (PDFs, web pages, blogs, Linear, Notion, documents, meetings, etc.)
- Enable seamless context recall in any AI app you use
- Eliminate vendor lock-in: you own your personal memory<p>Do you think your AI memory should be owned by you, or should it remain vendor-locked with each platform?<p>Disclaimer: I am building one such third-party memory layer, called CORE (https://github.com/RedPlanetHQ/core), so it's important for me to understand how you folks think about memory ownership.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44798987">https://news.ycombinator.com/item?id=44798987</a></p>
<p>Points: 3</p>
<p># Comments: 2</p>
]]></description><pubDate>Tue, 05 Aug 2025 15:08:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44798987</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=44798987</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44798987</guid></item><item><title><![CDATA[New comment by Manik_agg in "[dead]"]]></title><description><![CDATA[
<p>Claude is incredibly powerful, but its limitation is the lack of persistent memory, so you have to repeat yourself again and again.<p>I integrated Claude with the CORE memory MCP, making it an assistant that remembers everything and has a better memory than Cursor or ChatGPT.<p>Before CORE: "Hey Claude, I need to know the pros and cons of hosting my project on Cloudflare vs AWS, here is the detailed spec for my project...."<p>And I had to REPEAT MYSELF again and again about my preferences, my tech stack, and my project details.<p>After CORE: "Hey Claude, tell me the pros and cons of hosting my project on Cloudflare vs AWS."<p>Claude instantly knows everything from my memory context.<p>What this means:
- Persistent context: you never repeat yourself again
- Continuous learning: Claude gets smarter with every interaction it ingests into and recalls from memory
- Personalized responses: tailored to your specific workflow and preferences<p>Check out the full implementation guide here: <a href="https://docs.heysol.ai/providers/claude">https://docs.heysol.ai/providers/claude</a></p>
]]></description><pubDate>Mon, 04 Aug 2025 15:39:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44787337</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=44787337</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44787337</guid></item><item><title><![CDATA[New comment by Manik_agg in "Figma Files Registration Statement for Proposed Initial Public Offering"]]></title><description><![CDATA[
<p>Figma has come a long way, from a blocked Adobe acquisition to now filing for an IPO.</p>
]]></description><pubDate>Wed, 02 Jul 2025 00:25:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44439207</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=44439207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44439207</guid></item><item><title><![CDATA[New comment by Manik_agg in "Show HN: Core – open source memory graph for LLMs – shareable, user owned"]]></title><description><![CDATA[
<p>Hey - well put!<p>I guess the "semantic web" folks were right about the destination, just a few years early :P</p>
]]></description><pubDate>Wed, 02 Jul 2025 00:11:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44439118</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=44439118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44439118</guid></item><item><title><![CDATA[New comment by Manik_agg in "Show HN: Core – open source memory graph for LLMs – shareable, user owned"]]></title><description><![CDATA[
<p>Hey - agreed that for basic fact recall, a simple text file + MCP works fine.<p>We designed CORE for complex, evolving memory, where text files break down.<p>Example: health conversations across ChatGPT, Claude, etc., where your parameters change over time.<p>A text file can't give you: "What medications have I tried, why did I stop each one, and when?" or "Show me how my symptoms evolved over 6 months."<p>For timeline and relational memory, CORE wins. For static facts, text files are enough, I guess.</p>
]]></description><pubDate>Tue, 01 Jul 2025 23:56:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44439045</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=44439045</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44439045</guid></item><item><title><![CDATA[New comment by Manik_agg in "Show HN: Core – open source memory graph for LLMs – shareable, user owned"]]></title><description><![CDATA[
<p>Hey, we started with Llama, but since it wasn't giving good results we fell back to using GPT and launched with that.<p>We will evaluate Qwen and DeepSeek going forward; thanks for mentioning them.</p>
]]></description><pubDate>Tue, 01 Jul 2025 23:48:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44438997</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=44438997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44438997</guid></item><item><title><![CDATA[New comment by Manik_agg in "Show HN: Core – open source memory graph for LLMs – shareable, user owned"]]></title><description><![CDATA[
<p>Hey - I agree that the demonstrated use case can be solved with a simple plan.md file in the codebase itself.<p>With this use case we wanted to showcase the shareable aspect of CORE. The main problem we wanted to address was "take your memory to every AI", so you never have to repeat yourself again.<p>The relational-graph aspect of CORE's architecture is overkill for simple fact recall. But if you want an intelligent memory layer about you that can answer what, when, and why, and that is accessible in all the major AI tools you use, then CORE makes more sense.</p>
]]></description><pubDate>Tue, 01 Jul 2025 23:39:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44438941</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=44438941</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44438941</guid></item><item><title><![CDATA[New comment by Manik_agg in "Show HN: Core – open source memory graph for LLMs – shareable, user owned"]]></title><description><![CDATA[
<p>Hey, plan.md will mostly be a static file that you have to maintain manually. It won't be relational and won't form connections between pieces of information, and you can't recall from it or query it intelligently ("When did my preference change?").<p>CORE:
- Automatically extracts and stores facts from conversations
- Builds intelligent connections between related information
- Answers complex queries ("What did I say about something, and when?")
- Detects contradictions and explains changes with full context<p>For simple fact recall, plan.md should work, but for complex systems a relational memory layer helps more.</p>
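For instance, the "When did my preference change?" query becomes easy once facts are stored with timestamps instead of overwritten in place. This toy version (names and structure are my own, not CORE's) keeps a per-key history and flags contradictions:

```python
from datetime import datetime

# Toy illustration (not CORE's schema): keep every value of a preference with a
# timestamp, so "what is it now", "when did it change", and "what did it
# contradict" are all answerable from the same history.
history: dict[str, list[tuple[datetime, str]]] = {}

def record(key: str, value: str, at: datetime) -> None:
    entries = history.setdefault(key, [])
    if entries and entries[-1][1] != value:
        old_at, old_value = entries[-1]
        # Surface the contradiction with context instead of silently overwriting.
        print(f"{key} changed from {old_value!r} (recorded {old_at.date()}) to {value!r}")
    entries.append((at, value))

def when_changed(key: str) -> list[datetime]:
    """Return the timestamps at which the stored value for `key` changed."""
    entries = history.get(key, [])
    return [at for (_, prev_v), (at, v) in zip(entries, entries[1:]) if v != prev_v]

record("editor", "vim", datetime(2025, 1, 5))
record("editor", "cursor", datetime(2025, 6, 2))  # prints a change notice
print(when_changed("editor"))
```

A static plan.md holds only the latest value; the history above is what makes the temporal questions answerable.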
]]></description><pubDate>Tue, 01 Jul 2025 23:18:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44438824</link><dc:creator>Manik_agg</dc:creator><comments>https://news.ycombinator.com/item?id=44438824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44438824</guid></item></channel></rss>