<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: rush86999</title><link>https://news.ycombinator.com/user?id=rush86999</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 07:23:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=rush86999" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by rush86999 in "Launch HN: Cekura (YC F24) – Testing and monitoring for voice and chat AI agents"]]></title><description><![CDATA[
<p>Nope - just use a memory layer with a model routing system.<p><a href="https://github.com/rush86999/atom/blob/main/docs/EPISODIC_MEMORY_IMPLEMENTATION.md" rel="nofollow">https://github.com/rush86999/atom/blob/main/docs/EPISODIC_ME...</a></p>
]]></description><pubDate>Tue, 03 Mar 2026 20:36:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47238597</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=47238597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47238597</guid></item><item><title><![CDATA[New comment by rush86999 in "Launch HN: Cekura (YC F24) – Testing and monitoring for voice and chat AI agents"]]></title><description><![CDATA[
<p>The only solution is to train on the issue for next time.<p>Architecturally, we're focusing on episodic memory with a feedback system.<p>That training is retrieved the next time something similar happens.</p>
]]></description><pubDate>Tue, 03 Mar 2026 19:22:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47237447</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=47237447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47237447</guid></item><item><title><![CDATA[New comment by rush86999 in "OpenClaw surpasses React to become the most-starred software project on GitHub"]]></title><description><![CDATA[
<p>Everyone is using OpenClaw for personal productivity, but you're right: not much value, since you can get that from existing products.<p>The market will eventually realize the business case for an OpenClaw-like product, and I'm waiting to ride its coattails!<p><a href="https://github.com/rush86999/atom" rel="nofollow">https://github.com/rush86999/atom</a></p>
]]></description><pubDate>Mon, 02 Mar 2026 21:26:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47224337</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=47224337</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47224337</guid></item><item><title><![CDATA[Show HN: Atom – open-source AI agent with "visual" episodic memory]]></title><description><![CDATA[
<p>Hey HN,<p>I’ve been building Atom (<a href="https://github.com/rush86999/atom" rel="nofollow">https://github.com/rush86999/atom</a>), an open-source, self-hosted AI automation platform.<p>I built this because while tools like OpenClaw are excellent for one-off scripts and personal tasks, I found them difficult to use for complex business workflows (e.g., managing invoices or SaaS ops). The main issue was State Blindness: the agent would fire a command and assume it worked, without "seeing" if the UI or state actually updated.<p>I just shipped a new architecture to solve this called Canvas AI Accessibility.<p>The Technical Concept:
Instead of relying on token-heavy screenshots or raw HTML, I built a hidden semantic layer—essentially a "Screen Reader" for the LLM.<p>Hidden Visual Description: When the agent works, the system generates a structured, hidden description of the visual state.<p>Episodic Memory: The agent "reads" this layer to verify its actions. Crucially, it snapshots this state into a vector database (LanceDB).<p>Maturity/Governance: Before an agent is promoted from "Student" to "Autonomous," it must demonstrate it can recall these past visual states to avoid repeating errors.<p>Atom vs. OpenClaw:
I view them as complementary. OpenClaw is the "Hands" (great for raw execution/terminal), while Atom is the "Brain" (handling state, memory, and audit trails). Atom uses Python/FastAPI vs OpenClaw's Node.js, and focuses heavily on this governance/memory layer.<p>The repo is self-hosted and includes the new Canvas architecture. I’d love feedback on the implementation of the hidden accessibility layer—is anyone else using "synthetic accessibility trees" for agent grounding?</p>
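The snapshot-and-recall loop described above can be sketched in Python. This is a minimal in-memory stand-in (the post says Atom uses LanceDB as the vector store); the character-frequency embedding, class names, and episode schema here are illustrative assumptions, not Atom's actual API.

```python
import math

def embed(text: str) -> list[float]:
    # Stand-in embedding: normalized character-frequency vector.
    # A real system would use a sentence-embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class EpisodicMemory:
    """Snapshot hidden visual-state descriptions; recall similar past episodes."""

    def __init__(self):
        self.episodes = []  # (vector, description, outcome) triples

    def snapshot(self, description: str, outcome: str) -> None:
        self.episodes.append((embed(description), description, outcome))

    def recall(self, description: str, k: int = 3):
        # Rank stored episodes by cosine similarity to the current state.
        query = embed(description)
        scored = sorted(
            self.episodes,
            key=lambda ep: -sum(a * b for a, b in zip(query, ep[0])),
        )
        return [(desc, outcome) for _, desc, outcome in scored[:k]]

memory = EpisodicMemory()
memory.snapshot("invoice form: submit clicked, error banner visible", "retry with valid date")
memory.snapshot("dashboard: export button disabled", "wait for data load")
print(memory.recall("invoice form: error banner shown after submit", k=1))
```

Before acting, the agent would recall the top-k similar states and check whether a past outcome contradicts its assumption that the action "worked".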
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47202431">https://news.ycombinator.com/item?id=47202431</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 01 Mar 2026 00:59:30 +0000</pubDate><link>https://github.com/rush86999/atom</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=47202431</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47202431</guid></item><item><title><![CDATA[New comment by rush86999 in "Following 35% growth, solar has passed hydro on US grid"]]></title><description><![CDATA[
<p>This is an incomplete thought.<p>Strong checks and balances, free from the influence of bias, relationships, and politics, can be implemented using a two-way blind system where:<p>1. Decision makers (of sound judgment) are not aware of any identifiable information about the users the decision concerns, nor of each other.<p>2. Users are not aware of the decision makers who will decide on them, nor of each other.<p>AI could possibly play a role here, but a strong system of checks and balances would be a prerequisite.<p>The justice system would definitely benefit from this.</p>
]]></description><pubDate>Thu, 26 Feb 2026 22:51:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47173225</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=47173225</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47173225</guid></item><item><title><![CDATA[New comment by rush86999 in "AI is not a coworker, it's an exoskeleton"]]></title><description><![CDATA[
<p>Funny, you described everything I worked on for this project: <a href="https://github.com/rush86999/atom" rel="nofollow">https://github.com/rush86999/atom</a><p>The cat's out of the bag. Everyone knows the issue, and I bet a lot of people are trying to deliver the same thing.</p>
]]></description><pubDate>Fri, 20 Feb 2026 17:17:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47090826</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=47090826</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47090826</guid></item><item><title><![CDATA[New comment by rush86999 in "Ask HN: How do you employ LLMs for UI development?"]]></title><description><![CDATA[
<p>I would create a custom <canvas> component that integrates into your IDE, or create a plugin, and add AI accessibility via logs. I'm doing something similar in the app I'm currently building: <a href="https://github.com/rush86999/atom/blob/main/docs/CANVAS_AI_ACCESSIBILITY.md" rel="nofollow">https://github.com/rush86999/atom/blob/main/docs/CANVAS_AI_A...</a></p>
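The "AI accessibility via logs" idea above can be sketched as a function that serializes canvas element state into a structured, screen-reader-style log line the LLM can read instead of a screenshot. The element schema and log format below are hypothetical illustrations, not Atom's actual format.

```python
import json

def describe_canvas_state(elements: list[dict]) -> str:
    """Serialize canvas element state into a hidden semantic log line.

    Each element dict uses a hypothetical schema: role, label, state, bounds.
    """
    parts = []
    for el in elements:
        parts.append({
            "role": el.get("role", "unknown"),
            "label": el.get("label", ""),
            "state": el.get("state", "idle"),
            "bounds": el.get("bounds"),  # (x, y, w, h) for spatial grounding
        })
    # One compact JSON line per frame keeps the log diff-able and far
    # cheaper in tokens than an image of the canvas.
    return json.dumps({"frame": parts}, separators=(",", ":"))

log_line = describe_canvas_state([
    {"role": "button", "label": "Submit", "state": "disabled", "bounds": (10, 20, 80, 24)},
    {"role": "alert", "label": "Invalid date", "state": "visible", "bounds": (10, 50, 200, 18)},
])
print(log_line)
```

An agent can then diff consecutive frames to verify that an action actually changed the UI state.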
]]></description><pubDate>Thu, 19 Feb 2026 17:34:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47076485</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=47076485</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47076485</guid></item><item><title><![CDATA[Show HN: Atom – Safer Version of OpenClaw with Episodic Memory]]></title><description><![CDATA[
<p>Atom is what you would want if you wanted OpenClaw to run your business. Here are the key differences:<p>- Governance and audit trails
- Graduated autonomy (agents earn trust over time)
- Structured episodic memory for learning and graduation
- Real-time visibility into agent operations
- Canvas UI with AI accessibility, not just a terminal; visual memory without using images
- Continued safe use of OpenClaw's 5000+ skills</p>
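The graduated-autonomy point in the list above can be sketched as a small promotion ladder. The stage names follow the project's posts; the thresholds and success-rate scoring are illustrative assumptions, not Atom's actual rules.

```python
STAGES = ["student", "intern", "supervised", "autonomous"]

class AgentTrust:
    """Promote an agent up the ladder as its task record earns trust."""

    # Hypothetical promotion gates: stage -> (min tasks seen, min success rate)
    THRESHOLDS = {"intern": (10, 0.70), "supervised": (25, 0.85), "autonomous": (50, 0.95)}

    def __init__(self):
        self.stage = "student"
        self.tasks = 0
        self.successes = 0

    def record(self, success: bool) -> str:
        self.tasks += 1
        self.successes += int(success)
        rate = self.successes / self.tasks
        # Promote at most one stage at a time, only when the next gate is met.
        next_idx = STAGES.index(self.stage) + 1
        if next_idx < len(STAGES):
            min_tasks, min_rate = self.THRESHOLDS[STAGES[next_idx]]
            if self.tasks >= min_tasks and rate >= min_rate:
                self.stage = STAGES[next_idx]
        return self.stage

agent = AgentTrust()
for _ in range(12):
    agent.record(success=True)
print(agent.stage)  # promoted past "student" after enough clean runs
```

Demotion on failures, or requiring recall of past episodes before promotion (as the earlier post describes), would slot into `record` as extra gates.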
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47062524">https://news.ycombinator.com/item?id=47062524</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 18 Feb 2026 16:08:12 +0000</pubDate><link>https://github.com/rush86999/atom</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=47062524</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47062524</guid></item><item><title><![CDATA[New comment by rush86999 in "Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS)"]]></title><description><![CDATA[
<p>I'm really working towards getting something similar to work. Lots of bug fixing for now. Any help is appreciated if interested.</p>
]]></description><pubDate>Wed, 11 Feb 2026 02:05:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46969858</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=46969858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46969858</guid></item><item><title><![CDATA[New comment by rush86999 in "Experts Have World Models. LLMs Have Word Models"]]></title><description><![CDATA[
<p>I do understand what you're saying, but that's hard to reconcile with real-world context: in the real world, each person not only plays politics but also, to a degree, follows their own internal world model for self-reflection, built from experience. It's highly specific and constrained to the context each person experiences.</p>
]]></description><pubDate>Mon, 09 Feb 2026 20:35:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46950832</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=46950832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46950832</guid></item><item><title><![CDATA[New comment by rush86999 in "Experts Have World Models. LLMs Have Word Models"]]></title><description><![CDATA[
<p>Game theory, at the end of the day, is also a form of teaching points that can be added to an LLM by an expert. You're cloning the expert's decision process by showing past decisions taken in a similar context. This is very specific but still has value in a business context.</p>
]]></description><pubDate>Mon, 09 Feb 2026 19:49:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46950143</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=46950143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46950143</guid></item><item><title><![CDATA[New comment by rush86999 in "Experts Have World Models. LLMs Have Word Models"]]></title><description><![CDATA[
<p>Basically, the conclusion is that LLMs don't have world models. For work done mostly on a screen, you can build world models; it's harder for other contexts, for example real-world visual context.<p>For a screen (coding, writing emails, updating docs), you can create world models with episodic memories that serve as background context before making a new move (action). Many professions rely partly on email or phone (voice), so LLMs can be trained toward world models in these contexts. Just not every context.<p>The key is giving agents episodic memory with visual context about the screen plus conversation context. Multiple episodes of similar context can be used to make the next move. That's what I'm building on.</p>
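The "multiple episodes as background context" step can be sketched as prompt assembly, assuming some retrieval step has already returned similar past episodes. The episode fields and prompt template are illustrative assumptions, not the project's actual format.

```python
def build_context(episodes: list[dict], task: str) -> str:
    """Prepend similar past episodes to the task before choosing the next action."""
    lines = ["Relevant past episodes (most similar first):"]
    for i, ep in enumerate(episodes, 1):
        lines.append(
            f"{i}. screen: {ep['screen']} | action: {ep['action']} | outcome: {ep['outcome']}"
        )
    lines.append(f"Current task: {task}")
    lines.append("Choose the next action, avoiding the failure modes above.")
    return "\n".join(lines)

prompt = build_context(
    [
        {"screen": "compose window open", "action": "sent draft without subject",
         "outcome": "bounced back by reviewer"},
        {"screen": "compose window open", "action": "added subject, then sent",
         "outcome": "accepted"},
    ],
    task="reply to the client thread",
)
print(prompt)
```

The assembled string would be passed to the model as context ahead of the action-selection step.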
]]></description><pubDate>Mon, 09 Feb 2026 19:02:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46949367</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=46949367</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46949367</guid></item><item><title><![CDATA[New comment by rush86999 in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>I'm building a safer agent system for SMBs.<p>The biggest problem is that internal knowledge systems and external knowledge systems are completely different. One reason internal knowledge is different is that it's very specific business context, and/or it's the value prop that lets the business charge clients for access.<p>To bridge this gap, the best approach is to train agents on your use case. Agents need to go student -> intern -> supervised -> independent before they can be useful for your business.<p><a href="https://github.com/rush86999/atom" rel="nofollow">https://github.com/rush86999/atom</a> . It's still in alpha.</p>
]]></description><pubDate>Mon, 09 Feb 2026 03:55:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46941402</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=46941402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46941402</guid></item><item><title><![CDATA[Show HN: Atom – The Open Source AI Workforce and Multi-Agent Orchestrator]]></title><description><![CDATA[
<p>Unlike generic automation tools, Atom combines visual workflow building with specialty agents (Sales, Marketing, Engineering) that have "Universal Memory." They can remember your context, search across your integrations (500+ apps), and even execute tasks like a real assistant.<p>Key features:<p>- Hybrid Engine: Python orchestration + Node.js Piece Engine (ActivePieces catalog).
- Multi-Agent Coordination: Specialized teams that progress from 'Student' to 'Autonomous' based on confidence.
- Universal Memory: Indexed context across Slack, Notion, Jira, and more.
- Voice Interface: Speak naturally to build and trigger workflows.<p>I’d love to hear your feedback on the architecture, the "Swarm Discovery" mechanism for tools, and how you think we can improve the safety of sandboxed 'Computer Use' agents.<p>Installation is easiest via Docker: `docker-compose up -d`</p>
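The "Universal Memory" feature above, indexed context searched across integrations, can be sketched as one search interface over documents pulled from many apps. The source names, keyword-overlap scoring, and class names are illustrative assumptions, not Atom's implementation.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    source: str   # e.g. "slack", "notion", "jira"
    text: str

class UniversalMemory:
    """One keyword-overlap search across documents indexed from many apps."""

    def __init__(self):
        self.docs: list[Doc] = []

    def index(self, source: str, text: str) -> None:
        self.docs.append(Doc(source, text))

    def search(self, query: str, k: int = 3) -> list[Doc]:
        # Rank by how many query terms appear in each document.
        terms = set(query.lower().split())
        scored = sorted(
            self.docs,
            key=lambda d: -len(terms & set(d.text.lower().split())),
        )
        return scored[:k]

mem = UniversalMemory()
mem.index("slack", "deploy blocked on failing jira ticket OPS-12")
mem.index("notion", "quarterly marketing plan draft")
mem.index("jira", "OPS-12 database migration fails on staging")
top = mem.search("why is the deploy blocked", k=1)[0]
print(top.source, top.text)
```

A production version would replace the keyword overlap with embeddings and keep per-source sync jobs, but the single query surface is the point.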
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46575998">https://news.ycombinator.com/item?id=46575998</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 11 Jan 2026 14:22:02 +0000</pubDate><link>https://github.com/rush86999/atom</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=46575998</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46575998</guid></item><item><title><![CDATA[New comment by rush86999 in "Ask HN: What Are You Working On? (December 2025)"]]></title><description><![CDATA[
<p><a href="https://github.com/rush86999/atom" rel="nofollow">https://github.com/rush86999/atom</a><p>Marketing line: Atom is your conversational AI agent that automates complex workflows through natural language chat. Now with Computer Use Agent capabilities, Atom can see and interact with your desktop applications, automate repetitive tasks, and create visual workflows that bridge web services with local desktop software.<p>work in progress</p>
]]></description><pubDate>Mon, 15 Dec 2025 15:31:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46275757</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=46275757</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46275757</guid></item><item><title><![CDATA[New comment by rush86999 in "Ask HN: What are you working on? (July 2025)"]]></title><description><![CDATA[
<p>I'm working on a superpowered version of Siri/Alexa that can manage finances, notes, meetings, research, automation, and communication - including email/Slack<p><a href="https://github.com/rush86999/atom">https://github.com/rush86999/atom</a><p>Check it out.</p>
]]></description><pubDate>Sun, 27 Jul 2025 20:16:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44704323</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=44704323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44704323</guid></item><item><title><![CDATA[I grew my app sign ups to 500 users in 2 days]]></title><description><![CDATA[
<p>Article URL: <a href="https://rish-from-atomic-life.medium.com/how-i-grew-my-app-sign-ups-to-500-users-in-2-days-31f4e6482cc8">https://rish-from-atomic-life.medium.com/how-i-grew-my-app-sign-ups-to-500-users-in-2-days-31f4e6482cc8</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=29243088">https://news.ycombinator.com/item?id=29243088</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 16 Nov 2021 17:06:07 +0000</pubDate><link>https://rish-from-atomic-life.medium.com/how-i-grew-my-app-sign-ups-to-500-users-in-2-days-31f4e6482cc8</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=29243088</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29243088</guid></item><item><title><![CDATA[Show HN: I built a data tracker app to track your MAUs]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.atomiclife.app/">https://www.atomiclife.app/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=29218990">https://news.ycombinator.com/item?id=29218990</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 14 Nov 2021 17:56:33 +0000</pubDate><link>https://www.atomiclife.app/</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=29218990</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29218990</guid></item><item><title><![CDATA[Show HN: Atomic Life – to-do with auto reminders, data tracker and network]]></title><description><![CDATA[
<p>I created an app based on widely accepted recommendations on getting disciplined. The app is free. It has a to-do list that automatically creates reminders in your calendar using natural language processing, trackers for monitoring the progress of anything you can think of, and an accountability network to share your daily progress or to find others.<p>Here's an intro video: https://youtu.be/p8xpBj1HZSc<p>App info: https://www.atomiclife.app/<p>iOS App Store link: https://apps.apple.com/us/app/atomic-life/id1594368125<p>Android App Store link: https://play.google.com/store/apps/details?id=com.atomiclifenoexpo<p>You can provide feedback at feedback@atomiclife.app or https://www.reddit.com/r/atomiclife/</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=29218884">https://news.ycombinator.com/item?id=29218884</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 14 Nov 2021 17:46:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=29218884</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=29218884</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29218884</guid></item><item><title><![CDATA[Show HN: Atomic Life – smart to-do list with automated reminders]]></title><description><![CDATA[
<p>Article URL: <a href="https://atomiclife.aspect.app/">https://atomiclife.aspect.app/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=27974133">https://news.ycombinator.com/item?id=27974133</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 27 Jul 2021 16:11:55 +0000</pubDate><link>https://atomiclife.aspect.app/</link><dc:creator>rush86999</dc:creator><comments>https://news.ycombinator.com/item?id=27974133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27974133</guid></item></channel></rss>