<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: rellfy</title><link>https://news.ycombinator.com/user?id=rellfy</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 13:42:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=rellfy" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by rellfy in "Show HN: Axe – A 12MB binary that replaces your AI framework"]]></title><description><![CDATA[
<p>This is a great concept. I fully agree with small, focused, composable design. I've been exploring a similar direction at asterai.io, but focusing more on the tool layer than the agent layer, with portable WASM components you write once in any language and compose together.<p>I currently use Claude web with an MCP component for my workflows, but Axe looks like it could be a nicer, quicker way to work with the tools I have.</p>
]]></description><pubDate>Fri, 13 Mar 2026 01:53:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47359793</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=47359793</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47359793</guid></item><item><title><![CDATA[New comment by rellfy in "Tell HN: I'm 60 years old. Claude Code has re-ignited a passion"]]></title><description><![CDATA[
<p>I don't think AI coding means you stop being a craftsman. It is just a different tool. Manual coding is a hand tool, AI coding is a power tool. You still retain all of the knowledge and as much control over the codebase as you want, same with any tool.<p>It's a different conversation when we talk about people learning to code now though. I'd probably not recommend going for the power tool until you have a solid understanding of the manual tools.</p>
]]></description><pubDate>Sat, 07 Mar 2026 03:17:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47284126</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=47284126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47284126</guid></item><item><title><![CDATA[Show HN: Asterbot, AI agent where every capability is a sandboxed WASM component]]></title><description><![CDATA[
<p>Asterbot is a modular AI agent where every capability (web search, memory, the LLM provider) is a swappable WASM component, sandboxed via WASI.<p>Components only have access to what you explicitly grant (e.g. a single directory). They can be written in any language (Rust, Go, Python, JS) and are pulled from the asterai registry.<p>Under the hood, asterai is a WASM component model registry and runtime built on wasmtime. You publish a component, set an env var to authorize it as a tool, and asterbot discovers and calls it automatically.<p>I built this because I think the WASM component model is a great way to build software, but the ecosystem is missing key infrastructure (especially an open, central registry). AI agents felt like a natural fit since tool security is a real problem, and WASM sandboxing addresses it by default.<p>Still early stage, but all functionality in the repo is tested and working. Happy to answer questions!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46961468">https://news.ycombinator.com/item?id=46961468</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 10 Feb 2026 15:51:38 +0000</pubDate><link>https://github.com/asterai-io/asterbot</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46961468</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46961468</guid></item><item><title><![CDATA[New comment by rellfy in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>I've spent my weekend building asterbot: <a href="https://github.com/asterai-io/asterbot" rel="nofollow">https://github.com/asterai-io/asterbot</a><p>Asterbot is a modular AI agent where every capability (tools, memory, the LLM provider, etc.) is a swappable WASM component.<p>Components are written in any language (Rust, Go, Python, JS), sandboxed via WASI, and pulled from the open asterai registry. Think microkernel architecture for AI agents.</p>
]]></description><pubDate>Mon, 09 Feb 2026 07:11:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46942437</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46942437</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46942437</guid></item><item><title><![CDATA[Show HN: Asterbot – AI agent built from sandboxed WASM components]]></title><description><![CDATA[
<p>For the past few months, I've been working on a WebAssembly (WASM) component model registry and runtime (built on wasmtime) called asterai.
My goal is to help make the WASM component model mainstream, because I think it's a great way to build software. I think the ecosystem is missing a few key pieces, and an open, central registry is one of them.<p>Recently I saw how ClawHub had "341 malicious skills", and couldn't help but think how WASM/WASI resolves most of these issues by default, since everything is sandboxed.<p>So I've spent my weekend building Asterbot, a modular AI agent where every capability is a swappable WASM component.<p>Want to add web search? That's just another WASM component. Memory? Another component. LLM provider? Also a component.<p>The components are all sandboxed: they only have access to what you explicitly grant, e.g. a single directory like ~/.asterbot (the default). A component can't read any other part of the system.<p>Components are written in any language (Rust, Go, Python, JS), sandboxed via WASI, and pulled from the asterai registry.
Publish a component, set an env var to authorise it as a tool, and asterbot discovers and calls it automatically. Asterai provides a lightweight runtime on top of wasmtime that makes it possible to bundle components, configure env vars, and run them.<p>It's still a proof of concept, but I've tested all functionality in the repo and I'm happy with how it's shaping up.<p>Happy to answer any questions!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46937297">https://news.ycombinator.com/item?id=46937297</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 08 Feb 2026 18:55:15 +0000</pubDate><link>https://github.com/asterai-io/asterbot</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46937297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46937297</guid></item><item><title><![CDATA[New comment by rellfy in "Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory"]]></title><description><![CDATA[
<p>That's a great idea, it makes a lot of sense for dynamic use cases.<p>I suppose I'm thinking of it as a more elegant way of doing something equivalent to top-down agent routing, where the top agent routes to 2-legged agents.<p>I'd be interested to hear more about how you handle the provenance tracking in practice, especially when the agent chains multiple data sources together. I think my question would be: what's the practical difference between dynamic attenuation and just statically removing the third leg upfront? Is it "just" a more elegant solution, or are there other advantages that I'm missing?</p>
]]></description><pubDate>Sun, 08 Feb 2026 07:56:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46932293</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46932293</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46932293</guid></item><item><title><![CDATA[New comment by rellfy in "Automatic Programming"]]></title><description><![CDATA[
<p>Yes, I definitely think it's much faster than writing it manually. For a few weeks now, >95% of the code I've authored wasn't written manually.<p>Sometimes you only care about the high-level aspect of it: the requirements and the high-level specification. But writing the implementation code can take hours if you're unfamiliar with a specific library, API or framework.<p>"Review every diff line by line" is maybe not the best way to have described it; I essentially meant that I review the AI's code as if it were a PR written by a team member, so I'd still care about alignment with the rest of the codebase, overall quality, reasonable performance, etc.</p>
]]></description><pubDate>Sun, 08 Feb 2026 07:36:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46932176</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46932176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46932176</guid></item><item><title><![CDATA[New comment by rellfy in "Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory"]]></title><description><![CDATA[
<p>The lethal trifecta is the most important problem to be solved in this space right now.<p>I can only think of two ways to address it:<p>1. Gate all sensitive operations (i.e. all external data flows) through a manual confirmation system, such as an OTP code that the human operator needs to approve every time, also reviewing the content being sent out. Cons: decision fatigue over time; it's only feasible if the agent communicates externally infrequently, or if the decision is easy to make by reading the data flowing out (it wouldn't work if you need to review a 20-page PDF every time).<p>2. Design around the lethal trifecta: your agent can only have 2 legs instead of all 3. I believe this is the most robust approach for all use cases that support it. For example, an agent that is privately accessed and can work with private data and untrusted content, but cannot communicate externally.<p>I'd be interested to know if you have reached similar conclusions or have a different approach to it?</p>
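<p>To make option 1 concrete, here is a minimal sketch (illustrative names, not from any real agent framework) of an OTP-style confirmation gate: every outbound action is held until the operator reviews its content and echoes back a one-time code.

```python
import secrets

class ConfirmationGate:
    """Toy OTP gate: outbound actions are queued until a human
    reviews the content and confirms with a one-time code."""

    def __init__(self):
        self.pending = {}  # code -> action description

    def request(self, action: str) -> str:
        # Show this code to the operator next to the full action content.
        code = f"{secrets.randbelow(10**6):06d}"
        self.pending[code] = action
        return code

    def confirm(self, code: str) -> str:
        # Codes are single-use; an unknown or replayed code is rejected.
        if code not in self.pending:
            raise PermissionError("unknown or already-used code")
        return self.pending.pop(code)
```

<p>The decision-fatigue caveat above still applies: this only scales if confirmations are infrequent or the outbound content is quick to review.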
]]></description><pubDate>Sun, 08 Feb 2026 07:26:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46932113</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46932113</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46932113</guid></item><item><title><![CDATA[New comment by rellfy in "Automatic Programming"]]></title><description><![CDATA[
<p>I arrived at a very similar conclusion since trying Claude Code with Opus 4.5 (a huge paradigm shift in terms of tech and tools). I've been calling it "zen coding", where you treat the codebase like a zen garden. You maintain a mental map of the codebase, spec everything before prompting for the implementation, and review every diff line by line. The AI is a tool to implement the system design, not the system designer itself (at least not for now...).<p>The distinction drawn between the two concepts matters. The expertise is in knowing what to spec and catching when the output deviates from your design. That said, the tech is so good now that a carefully reviewed spec will be reliably implemented by a state-of-the-art LLM. The same LLM that produces mediocre code for a vague request will produce solid code when guided by someone who understands the system deeply enough to constrain it. This is the difference between vibe coding and zen coding.<p>Zen coders are masters of their craft; vibe coders are amateurs having fun.<p>And to be clear, there's nothing wrong with being an amateur and having fun. I "vibe code" with AI in several areas that are not really coding, but other fields where I don't have professional knowledge. And it's great, because LLMs try to bring you closer to the top of human knowledge in any field, so as an amateur it is incredible to experience it.</p>
]]></description><pubDate>Sat, 31 Jan 2026 13:02:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46836304</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46836304</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46836304</guid></item><item><title><![CDATA[New comment by rellfy in "Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents"]]></title><description><![CDATA[
<p>In my example above I wasn't referring to AI composing the tools, but you as the agent builder composing the tool call workflow. So, I suppose we can call it AI-time composition vs build-time composition.<p>For example, say you have a shell script to make a bank transfer. This just makes an API call to your bank.<p>You can't trust the AI to reliably make a call to your traceability tool, and then to your OTP confirmation gate, and only then to proceed with the bank transfer. This will eventually fail and be compromised.<p>If you're running your agent on a "composable tool runtime", rather than raw shell for tool calls, you can easily make it so the "transfer $500 to Alice" call always goes through the route trace -> confirm OTP -> validate action. This is configured at build time.<p>Your alternative with raw shell would be to program the tool itself to follow this workflow, but then you'd end up with a lot of duplicate source code if you have the same workflow for different tool calls.<p>Of course, any AI agent SDK will let you configure these workflows. But they are locked to their own ecosystems, it's not a global ecosystem like you can achieve with WASM, allowing for interop between components written in any language.</p>
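<p>A rough sketch of what I mean by build-time composition (illustrative Python, not asterai's actual API): the trace -> confirm OTP -> execute route is wired by the runtime configuration, so the model cannot reorder or skip a step.

```python
from functools import reduce, wraps

def with_trace(tool):
    # Records every call before it reaches the tool.
    @wraps(tool)
    def traced(args, ctx):
        ctx.setdefault("trace", []).append((tool.__name__, dict(args)))
        return tool(args, ctx)
    return traced

def with_otp(tool):
    # Refuses to run unless a human has confirmed this specific call.
    @wraps(tool)
    def gated(args, ctx):
        if not ctx.get("otp_confirmed"):
            raise PermissionError("OTP confirmation required")
        return tool(args, ctx)
    return gated

def bank_transfer(args, ctx):
    # The raw tool: in reality this would call your bank's API.
    return f"transferred ${args['amount']} to {args['to']}"

# Composed at build time, innermost wrapper first:
# every call is traced, then gated, then (maybe) executed.
pipeline = reduce(lambda tool, wrap: wrap(tool), [with_otp, with_trace], bank_transfer)
```

<p>Because the wrappers are applied when the runtime is assembled, even a refused call still leaves a trace entry, which is exactly the property you can't get by trusting the model to call the right tools in the right order.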
]]></description><pubDate>Fri, 30 Jan 2026 19:08:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46828530</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46828530</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46828530</guid></item><item><title><![CDATA[New comment by rellfy in "Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents"]]></title><description><![CDATA[
<p>Shell commands work for individual tools, but you lose composability. If you want to chain components that share a sandboxed environment, say, add a tracing component alongside an OTP confirmation layer that gates sensitive actions, you need a shared runtime and typed interfaces. That's the layer I'm building with asterai: a standard substrate so components compose without glue code. Plus, having a central ecosystem lets you add features like traceability with roughly one click. Of course, this only wins long term if WASM wins.</p>
]]></description><pubDate>Fri, 30 Jan 2026 18:53:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46828340</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46828340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46828340</guid></item><item><title><![CDATA[New comment by rellfy in "Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents"]]></title><description><![CDATA[
<p>I really like the capability enforcement model, it's a great concept. One thing this discussion is missing, though, is the ecosystem layer. Sandboxing solves execution safety, but there's a parallel problem: how do agents discover and compose tools portably across frameworks? Right now every framework has its own tool format and registry (or none at all). WASM's component model actually solves this: you get typed interfaces (WIT), language interop, and composability for free. I've been building a registry and runtime (also based on wasmtime!) for this: components written in any language, published to a shared registry, runnable locally or in the cloud. Sandboxes like amla-sandbox could be a consumer of these components. <a href="https://asterai.io/why" rel="nofollow">https://asterai.io/why</a></p>
]]></description><pubDate>Fri, 30 Jan 2026 17:55:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46827552</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46827552</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46827552</guid></item><item><title><![CDATA[New comment by rellfy in "Clawdbot Renames to Moltbot"]]></title><description><![CDATA[
<p>I agree, that's the main issue with this approach. Long-term, it should only be used for truly sensitive actions. More mundane things like replying to emails will need a better solution.</p>
]]></description><pubDate>Fri, 30 Jan 2026 16:53:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46826741</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46826741</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46826741</guid></item><item><title><![CDATA[New comment by rellfy in "OpenClaw – Moltbot Renamed Again"]]></title><description><![CDATA[
<p>I don't think you're being too harsh, but I do think you're missing the point.<p>OpenClaw is just an idea of what's coming, of what the future of the human-software interface will look like.<p>People already know what it will look like to some extent. We will no longer have UIs where dozens or hundreds of buttons are the norm; instead you will talk to an LLM/agent that triggers the workflows you need through natural language. AI will eat UI.<p>Of course, OpenClaw/Moltbot/Clawdbot has lots of security issues. That's not really their fault; the industry has not yet reached consensus on how to fix these issues. But OpenClaw's rapid rise to popularity (the fastest-growing GitHub repo by star count ever) shows how much people want that future to come ASAP. The security problems do need to be solved, and I believe they will be, soon.<p>I think the demand also comes from people wanting an open agent. We don't want the agentic future to be mainly closed behind big tech ecosystems. OpenClaw plants that flag now, setting a boundary that people will have their data stored locally (even if inference happens remotely, though that may not be the status quo forever).</p>
]]></description><pubDate>Fri, 30 Jan 2026 16:40:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46826546</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46826546</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46826546</guid></item><item><title><![CDATA[Your Own AI Developer on GitHub]]></title><description><![CDATA[
<p>Article URL: <a href="https://rellfy.com/blog/your-own-ai-developer-on-github/">https://rellfy.com/blog/your-own-ai-developer-on-github/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46820377">https://news.ycombinator.com/item?id=46820377</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 30 Jan 2026 03:59:09 +0000</pubDate><link>https://rellfy.com/blog/your-own-ai-developer-on-github/</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46820377</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46820377</guid></item><item><title><![CDATA[New comment by rellfy in "Clawdbot Renames to Moltbot"]]></title><description><![CDATA[
<p>The only solution I can think of at the moment is a human in the loop, authorising every sensitive action. Of course it has the classic tradeoff between convenience and security, but it would work. For it to work properly, the human needs to take a minute or so reviewing the content associated with the request before authorising the action.<p>For most actions that don't have much content, this could work well as a simple phone popup where you authorise or deny.<p>The annoying part would be if you want the agent to reply to an email that has a full PDF or a lot of text: you'd have to review it to make sure the content does not include prompt injections. I think this can be further mitigated with static analysis tools built specifically for this purpose.<p>But I think it helps not to see this purely as a way to prevent LLMs from being prompt injected. I see social engineering as the human equivalent of prompt injection. So if you had a personal assistant, you'd also want them to be careful with that, and to ask you to authorise certain sensitive actions every time they happen. You would definitely want this for things like making payments, changing subscriptions, etc.</p>
]]></description><pubDate>Wed, 28 Jan 2026 03:35:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46790746</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46790746</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46790746</guid></item><item><title><![CDATA[New comment by rellfy in "Claude's new constitution"]]></title><description><![CDATA[
<p>I don’t think it’s wrong to see it as Anthropic’s constitution that Claude has to follow. Claude governs your data/property when you ask it to act as an agent, similarly to how company directors govern the company, which is the shareholders’ property. I think it’s just semantics.</p>
]]></description><pubDate>Thu, 22 Jan 2026 05:26:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46715637</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46715637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46715637</guid></item><item><title><![CDATA[New comment by rellfy in "Anthropic's original take home assignment open sourced"]]></title><description><![CDATA[
<p>By that same logic, humans would not be able to do anything novel either.</p>
]]></description><pubDate>Wed, 21 Jan 2026 13:20:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46705330</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46705330</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46705330</guid></item><item><title><![CDATA[New comment by rellfy in "IKEA for Software"]]></title><description><![CDATA[
<p>I think this hasn't yet been achieved because components need to interface with each other easily. This requires a standard that all components implement, from which everything can be assembled together.<p>From that perspective, the idea of microservices is basically "IKEA for software" relying (primarily) on HTTP as the interface between components. But this doesn't solve it entirely, or very elegantly, because you still need to write the server boilerplate and deploy it, which will differ depending on the programming language being used. Also, your app may require different protocols, so you'll be relying on different standards for different component interactions, and therefore the interface is not constant across your entire application.<p>I believe there's one way we can achieve this reliably, which is via WebAssembly, specifically the WASM component model [1].<p>But we need an ecosystem of components, and building an ecosystem that everyone uses and contributes to will likely be the challenging part. I'm actually working on this right now: the platform I've been building (asterai.io) started out as an agent-building platform (using WASM components for tool calls) but is evolving into being mostly a registry and (open source) lightweight runtime for WASM components.<p>The idea of using WASM to solve this is very simple in concept. Think of a tool like Docker, but instead of images you have an "environment", a file that defines a set of WASM components and env vars. That's basically it: you can then run that environment, which runs all components that are executable. Components can call each other dynamically, so a component can act as a library as well, or it may be only a library and not an executable.
A component can also define only an interface (which other components can implement), rather than contain any implementation code.<p>This architecture solves the main challenges that stop "IKEA for software" from being a reality:
1. You can write WASM components in any programming language.
2. You can add components to your environment/app with a single click, and interfacing is standardised via WIT [2].
3. Deploying it is the same process for any component or app.<p>Of course, it would still require significant WASM adoption to become a reality. But I think WASM is the best bet for this.<p>[1]: <a href="https://component-model.bytecodealliance.org" rel="nofollow">https://component-model.bytecodealliance.org</a>
[2]: <a href="https://component-model.bytecodealliance.org/design/wit.html" rel="nofollow">https://component-model.bytecodealliance.org/design/wit.html</a></p>
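<p>As a toy model of the registry idea (plain Python standing in for WASM; all names are illustrative): components publish implementations under named interfaces, and any component can resolve and call another's implementation without knowing what language it was written in.

```python
class Registry:
    """Toy component registry: publish an implementation under an
    interface name; other components resolve and call it dynamically."""

    def __init__(self):
        self._exports = {}

    def publish(self, interface: str, impl):
        self._exports[interface] = impl

    def resolve(self, interface: str):
        if interface not in self._exports:
            raise LookupError(f"no component implements {interface!r}")
        return self._exports[interface]

registry = Registry()
# A "library" component: implements an interface, never runs on its own.
registry.publish("greeter/v1", lambda name: f"hello, {name}")
# An "executable" component: resolves the interface at run time.
greet = registry.resolve("greeter/v1")
```

<p>In the real component model the interface name would be a typed WIT contract rather than a bare string, which is what makes the cross-language interop safe.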
]]></description><pubDate>Sat, 17 Jan 2026 07:29:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46656044</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=46656044</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46656044</guid></item><item><title><![CDATA[New comment by rellfy in "iPhone Air"]]></title><description><![CDATA[
<p>n=3</p>
]]></description><pubDate>Wed, 10 Sep 2025 02:50:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45192582</link><dc:creator>rellfy</dc:creator><comments>https://news.ycombinator.com/item?id=45192582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45192582</guid></item></channel></rss>