<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: abi</title><link>https://news.ycombinator.com/user?id=abi</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 26 Apr 2026 08:20:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=abi" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by abi in "Show HN: Lilo – a self-hosted, open-source intelligent personal OS"]]></title><description><![CDATA[
<p>No, we mostly spent our time on data structures and algorithms.</p>
]]></description><pubDate>Sat, 25 Apr 2026 17:36:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47903096</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=47903096</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47903096</guid></item><item><title><![CDATA[New comment by abi in "Show HN: Lilo – a self-hosted, open-source intelligent personal OS"]]></title><description><![CDATA[
<p>Ugh, good point.</p>
]]></description><pubDate>Fri, 24 Apr 2026 19:54:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47895025</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=47895025</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47895025</guid></item><item><title><![CDATA[Show HN: Lilo – a self-hosted, open-source intelligent personal OS]]></title><description><![CDATA[
<p>Hey everyone, I’ve been working on Lilo for the last few months. In short, it’s an intelligent personal OS. Lilo = your apps + your AI assistant + your files + your memories.<p>For a visual intro, here’s a YouTube video demonstrating the features and use cases: <a href="https://youtu.be/Jz0l_izoA1w" rel="nofollow">https://youtu.be/Jz0l_izoA1w</a><p>I started this project because I wanted a few small AI-powered apps for myself — a bookmarks tool, a calorie tracker, a TODO list — but deploying N separate apps with N deployments, URLs, and auth configs is too much effort for a single-user use case. So I built one container that holds all the apps, runs them at the same URL, and lets the agent inside modify them. If I want to change my bookmarks app, I don't open Claude Code and push to a repo — I tell the agent, and it edits the HTML directly. Not great for a large SaaS with lots of users, but it works great for a single-user app.<p>Each app is just an HTML file, but with access to a filesystem API, full network access, and full agentic capabilities.<p>Since then, Lilo has grown to also support a filesystem/workspace that can hold more than just apps. You can upload PDFs or screenshots and have the AI analyze and organize them for you. The AI also remembers key details about you in an “LLM wiki”-style tree of markdown files. It’s a full-on personal assistant.<p>Inspired by OpenClaw, I added support for additional channels like WhatsApp, email, and Telegram. Now I take a photo of my lunch, text it to Lilo, and the calorie tracker updates. If I didn't eat the pizza crust, I text <i>"didn’t eat the crust"</i> and it adjusts the entry. Cal AI couldn't do that. And unlike, say, a calorie-tracker WhatsApp bot, I also have a nice visual interface to look at my meals.<p>This combo of personal assistant + personal apps is very powerful. And very flexible. The UI is nice for glancing at data. The chat is nice for operations the UI doesn't cover. 
I don't have to build a search into every app; I can just ask the agent.<p>Lilo is open source and alpha software. Bring your own keys. The setup is not the easiest (a lot of API keys, and you need to self-host). All the usual security caveats for LLM apps with network access apply here. At the start, since there's no personal data yet, data exfiltration isn't possible, but credential exfiltration certainly is. Your entire workspace can be backed up and versioned using a git repo, so the data is durable.<p>I’d love to hear feedback, and I hope people find this as useful as I have.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47894947">https://news.ycombinator.com/item?id=47894947</a></p>
<p>Points: 7</p>
<p># Comments: 4</p>
]]></description><pubDate>Fri, 24 Apr 2026 19:46:30 +0000</pubDate><link>https://github.com/abi/lilo</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=47894947</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47894947</guid></item><item><title><![CDATA[New comment by abi in "GPT-5.5"]]></title><description><![CDATA[
<p>Usually, those get released a few weeks later.</p>
]]></description><pubDate>Thu, 23 Apr 2026 18:20:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47879395</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=47879395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47879395</guid></item><item><title><![CDATA[New comment by abi in "All your agents are going async"]]></title><description><![CDATA[
<p>I'm quite confused by this article. If you persist conversation history in a database, run all agentic turns on the server, and have the client merely listen to the streaming events/history via a websocket, this is easily achieved. You can have as many clients as you want.<p>The HTTP layer is fine. Websockets work great. This is how the Codex app server works, I believe: <a href="https://openai.com/index/unlocking-the-codex-harness/" rel="nofollow">https://openai.com/index/unlocking-the-codex-harness/</a> It's the same pattern I've used in my agentic OS/personal assistant project: <a href="https://github.com/abi/lilo" rel="nofollow">https://github.com/abi/lilo</a> Works great!</p>
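<p>The pattern above can be sketched in a few lines. This is just an in-memory sketch, not code from either project: `ConversationSession`, `emit`, and `subscribe` are hypothetical names, the event log stands in for a database table, and plain callbacks stand in for websocket connections.</p>

```javascript
// Minimal sketch: the server owns conversation state; clients are just
// subscribers that get a replay of history followed by live events.
// A real implementation would persist `history` to a database and push
// events over websockets; both are stubbed in memory here.
class ConversationSession {
  constructor() {
    this.history = [];          // persisted event log (stand-in for a DB table)
    this.listeners = new Set(); // connected clients (stand-in for sockets)
  }

  // A client "connects": it first receives the full history (replay),
  // then every subsequent event as it streams in.
  subscribe(onEvent) {
    this.history.forEach(onEvent);
    this.listeners.add(onEvent);
    return () => this.listeners.delete(onEvent); // unsubscribe on disconnect
  }

  // The server-side agentic turn appends events; all clients see them.
  emit(event) {
    this.history.push(event);
    this.listeners.forEach((fn) => fn(event));
  }
}
```

<p>Because the agent loop only ever talks to `emit`, any number of clients can attach, detach, or reconnect mid-turn without losing state.</p>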
]]></description><pubDate>Thu, 23 Apr 2026 16:24:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47877669</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=47877669</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47877669</guid></item><item><title><![CDATA[New comment by abi in "Claude Code Down"]]></title><description><![CDATA[
<p>Codex is great.</p>
]]></description><pubDate>Mon, 06 Apr 2026 15:50:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47662502</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=47662502</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47662502</guid></item><item><title><![CDATA[New comment by abi in "Ask HN: Founders who offer free/OS and paid SaaS, how do you manage your code?"]]></title><description><![CDATA[
<p>Exactly.</p>
]]></description><pubDate>Tue, 14 May 2024 13:06:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=40354830</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40354830</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40354830</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>I’ve experienced that issue as well. Clearing the cache and redownloading seemed to fix it for me. It’s an issue with the upstream library tvmjs that I need to dig deeper into. You should be totally fine on a 32 GB system.</p>
]]></description><pubDate>Sun, 05 May 2024 01:48:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=40261708</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40261708</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40261708</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>Use Secret Llama in an incognito window. Turn off the internet and close the window when you're done.</p>
]]></description><pubDate>Sat, 04 May 2024 20:34:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=40260100</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40260100</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40260100</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>Thanks for the bug report. Yeah, it’s a bug with not resetting the state properly when New Chat is clicked. Will fix tomorrow.<p>Chat history shouldn’t be hard to add with localStorage and IndexedDB.</p>
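<p>For what it's worth, the localStorage half might look like this sketch. Nothing here is Secret Llama's actual code: `saveChat`, `loadChats`, and the `"chats"` key are made-up names, and a Map-backed stub stands in for `window.localStorage` so the snippet runs outside a browser.</p>

```javascript
// Sketch of chat-history persistence. In a browser, `storage` would be
// window.localStorage (IndexedDB being the better fit for large histories);
// a Map-backed stub is used here so the sketch runs anywhere.
const storage = (() => {
  const m = new Map();
  return {
    getItem: (k) => (m.has(k) ? m.get(k) : null),
    setItem: (k, v) => m.set(k, String(v)),
  };
})();

const KEY = "chats"; // hypothetical storage key

// Read the saved chat list (empty on first run).
function loadChats() {
  return JSON.parse(storage.getItem(KEY) ?? "[]");
}

// Append one chat and write the whole list back as JSON.
function saveChat(chat) {
  const chats = loadChats();
  chats.push(chat);
  storage.setItem(KEY, JSON.stringify(chats));
}
```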
]]></description><pubDate>Sat, 04 May 2024 20:33:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=40260088</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40260088</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40260088</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>Yes. WebLLM is a wrapper around tvmjs: <a href="https://github.com/apache/tvm">https://github.com/apache/tvm</a><p>Just wrappers all the way down</p>
]]></description><pubDate>Sat, 04 May 2024 20:29:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=40260072</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40260072</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40260072</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>Appreciate the kind words :)</p>
]]></description><pubDate>Sat, 04 May 2024 20:28:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=40260061</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40260061</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40260061</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>Window AI (<a href="https://windowai.io/" rel="nofollow">https://windowai.io/</a>) is an attempt to do something like this with a browser extension.</p>
]]></description><pubDate>Sat, 04 May 2024 20:28:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=40260059</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40260059</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40260059</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>Try switching models to something other than TinyLlama (it’s the default only because it’s the fastest to load). Mistral and Llama 3 are great.</p>
]]></description><pubDate>Sat, 04 May 2024 13:16:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=40257435</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40257435</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40257435</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>This is amazing. Thanks both for sharing your stories. Made my day.</p>
]]></description><pubDate>Sat, 04 May 2024 06:11:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=40255317</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40255317</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40255317</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>Well, it should be possible to just drag and drop a file/folder</p>
]]></description><pubDate>Sat, 04 May 2024 05:22:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=40255100</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40255100</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40255100</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>Just bought the domain a couple of hours ago so DNS might not have propagated. Try back tomorrow or download and install it from GitHub (it’s just 2 steps)</p>
]]></description><pubDate>Sat, 04 May 2024 01:07:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=40253984</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40253984</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40253984</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>Would love to. It uses MLC AI’s WebLLM, so we just need to convert it to that format.</p>
]]></description><pubDate>Sat, 04 May 2024 01:06:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=40253974</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40253974</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40253974</guid></item><item><title><![CDATA[New comment by abi in "Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU"]]></title><description><![CDATA[
<p>Yes, it only starts the download after you send the first message, so visiting the site won’t use up any space.<p>Approximate sizes are listed in the GitHub README.<p>Models are stored in IndexedDB and are managed by the browser. They might get evicted.</p>
]]></description><pubDate>Sat, 04 May 2024 00:15:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=40253724</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40253724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40253724</guid></item><item><title><![CDATA[Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU]]></title><description><![CDATA[
<p>I spent the last few days building out a nicer ChatGPT-like interface to use Mistral 7B and Llama 3 fully within a browser (no dependencies or installs).<p>I’ve used the WebLLM project by MLC AI for a while to interact with LLMs in the browser when handling sensitive data, but I found their UI quite lacking for serious use, so I built a much better interface around WebLLM.<p>I’ve been using it as a therapist and coach. And it’s wonderful knowing that my personal information never leaves my local computer.<p>Should work on desktop with Chrome or Edge. Other browsers are adding WebGPU support as well - see the GitHub repo for details on how you can get it to work on other browsers.<p>Note: after you send the first message, the model will be downloaded to your browser cache. That can take a while depending on the model and your internet connection. But on subsequent page loads, the model should be loaded from the IndexedDB cache, so it should be much faster.<p>The project is open source (Apache 2.0) on GitHub. If you like it, I’d love contributions, particularly around making the first load faster.<p>GitHub: <a href="https://github.com/abi/secret-llama">https://github.com/abi/secret-llama</a>
Demo: <a href="https://secretllama.com" rel="nofollow">https://secretllama.com</a></p>
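<p>The download-once caching described in the note above boils down to this shape. This is a sketch, not secret-llama's actual code: `loadModel` and `fetchModel` are hypothetical names, and a Map stands in for the IndexedDB cache.</p>

```javascript
// Sketch of load-on-first-use model caching: the first request downloads
// the weights; later requests are served from the local cache. A Map
// stands in for the browser's IndexedDB-backed cache here.
const modelCache = new Map();

async function loadModel(name, fetchModel) {
  if (modelCache.has(name)) {
    return modelCache.get(name); // warm load: no network needed
  }
  const weights = await fetchModel(name); // cold load: slow first download
  modelCache.set(name, weights);
  return weights;
}
```

<p>This is why only the first message is slow: every later page load finds the weights already cached and skips the download entirely.</p>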
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=40252569">https://news.ycombinator.com/item?id=40252569</a></p>
<p>Points: 547</p>
<p># Comments: 139</p>
]]></description><pubDate>Fri, 03 May 2024 21:26:46 +0000</pubDate><link>https://github.com/abi/secret-llama</link><dc:creator>abi</dc:creator><comments>https://news.ycombinator.com/item?id=40252569</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40252569</guid></item></channel></rss>