<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: theaniketmaurya</title><link>https://news.ycombinator.com/user?id=theaniketmaurya</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 20 Apr 2026 19:32:15 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=theaniketmaurya" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by theaniketmaurya in "What if database branching was easy?"]]></title><description><![CDATA[
<p>I was using Neon, which had a similar feature, but I'm on PlanetScale now. I'd be curious to know how you all are doing it.</p>
]]></description><pubDate>Mon, 20 Apr 2026 12:57:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47833636</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=47833636</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47833636</guid></item><item><title><![CDATA[New comment by theaniketmaurya in "Open-source isolated runtime for AI agents"]]></title><description><![CDATA[
<p>I've been dogfooding it by running coding agents. I mount my local filesystem and pull the changes back when done.</p>
]]></description><pubDate>Sat, 18 Apr 2026 17:04:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47817506</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=47817506</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47817506</guid></item><item><title><![CDATA[New comment by theaniketmaurya in "Open-source isolated runtime for AI agents"]]></title><description><![CDATA[
<p>I've been working on an open-source isolated runtime for AI agents. Over time I was able to bring the cold-start time down to sub-second, and to add a persistent filesystem and network control.<p>I'm looking for feedback, and if you're in this field, why or *why not* you would use it.</p>
]]></description><pubDate>Sat, 18 Apr 2026 16:55:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47817396</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=47817396</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47817396</guid></item><item><title><![CDATA[Open-source isolated runtime for AI agents]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/CelestoAI/SmolVM">https://github.com/CelestoAI/SmolVM</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47817395">https://news.ycombinator.com/item?id=47817395</a></p>
<p>Points: 4</p>
<p># Comments: 3</p>
]]></description><pubDate>Sat, 18 Apr 2026 16:55:55 +0000</pubDate><link>https://github.com/CelestoAI/SmolVM</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=47817395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47817395</guid></item><item><title><![CDATA[I rewrote network setup for sandboxes in Rust and it sped up by 57x]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/CelestoAI/SmolVM/pull/145">https://github.com/CelestoAI/SmolVM/pull/145</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47771136">https://news.ycombinator.com/item?id=47771136</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 14 Apr 2026 20:34:53 +0000</pubDate><link>https://github.com/CelestoAI/SmolVM/pull/145</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=47771136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47771136</guid></item><item><title><![CDATA[New comment by theaniketmaurya in "Show HN: SmolVM – open-source sandbox for coding and computer-use agents"]]></title><description><![CDATA[
<p>Yes, we have an example with OpenClaw.<p>SmolVM uses microVMs, so resource consumption is quite minimal.
For now, you can specify the amount of RAM when starting the VM.
We are making it elastic so you can configure it to expand the RAM based on usage.<p>Here is the link to the <a href="https://docs.celesto.ai/smolvm" rel="nofollow">documentation</a>, but I can also show you on a call.</p>
]]></description><pubDate>Fri, 10 Apr 2026 11:34:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47716542</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=47716542</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47716542</guid></item><item><title><![CDATA[Show HN: SmolVM – open-source sandbox for coding and computer-use agents]]></title><description><![CDATA[
<p>SmolVM is an open-source local sandbox for AI agents on macOS and Linux.<p>I started building it because agent workflows need more than isolated code execution. They need a reusable environment: write files in one step, come back later, snapshot state, pause/resume, and increasingly interact with browsers or full desktop environments.<p>Right now SmolVM is a Python SDK and CLI focused on local developer experience.<p>Current features include:
- local sandbox environments
- macOS and Linux support
- snapshotting
- pause/resume
- persistent environments across turns<p>Install:
```
curl -sSL <a href="https://celesto.ai/install.sh" rel="nofollow">https://celesto.ai/install.sh</a> | bash
smolvm
```<p>I’d love feedback from people building coding agents or computer-use agents. Interested in what feels missing, what feels clunky, and what you’d expect from a sandbox like this.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47711887">https://news.ycombinator.com/item?id=47711887</a></p>
<p>Points: 8</p>
<p># Comments: 3</p>
]]></description><pubDate>Fri, 10 Apr 2026 00:01:00 +0000</pubDate><link>https://github.com/CelestoAI/SmolVM</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=47711887</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711887</guid></item><item><title><![CDATA[Open-source sandboxes to run AI code, browser agents and computer-use]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/CelestoAI/smolVM">https://github.com/CelestoAI/smolVM</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47668778">https://news.ycombinator.com/item?id=47668778</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 06 Apr 2026 23:29:23 +0000</pubDate><link>https://github.com/CelestoAI/smolVM</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=47668778</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47668778</guid></item><item><title><![CDATA[New comment by theaniketmaurya in "An attempt to make MicroVMs more accessible (SmolVM – Python SDK)"]]></title><description><![CDATA[
<p>Also, I've been running OpenClaw with SmolVM on macOS. It adds hardware-level isolation (using KVM).<p><a href="https://github.com/CelestoAI/SmolVM/blob/main/examples/openclaw.py" rel="nofollow">https://github.com/CelestoAI/SmolVM/blob/main/examples/openc...</a></p>
]]></description><pubDate>Mon, 16 Feb 2026 22:42:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47041360</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=47041360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47041360</guid></item><item><title><![CDATA[An attempt to make MicroVMs more accessible (SmolVM – Python SDK)]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/CelestoAI/SmolVM">https://github.com/CelestoAI/SmolVM</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47041334">https://news.ycombinator.com/item?id=47041334</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 16 Feb 2026 22:40:41 +0000</pubDate><link>https://github.com/CelestoAI/SmolVM</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=47041334</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47041334</guid></item><item><title><![CDATA[Show HN: Built an open-source SDK to simplify tool authentication for AI Agents]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/CelestoAI/agentor">https://github.com/CelestoAI/agentor</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45554626">https://news.ycombinator.com/item?id=45554626</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 12 Oct 2025 02:27:05 +0000</pubDate><link>https://github.com/CelestoAI/agentor</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=45554626</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45554626</guid></item><item><title><![CDATA[New comment by theaniketmaurya in "Abundant Intelligence"]]></title><description><![CDATA[
<p>Let me put some more into Nvidia now.</p>
]]></description><pubDate>Tue, 23 Sep 2025 15:05:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45348110</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=45348110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45348110</guid></item><item><title><![CDATA[New comment by theaniketmaurya in "Deploy your own AI vibe coding platform from Cloudflare"]]></title><description><![CDATA[
<p>Also found this open-source vibe-coding project from Modal - <a href="https://github.com/modal-labs/modal-vibe" rel="nofollow">https://github.com/modal-labs/modal-vibe</a><p>Curious whether Vercel is also going to open-source theirs.</p>
]]></description><pubDate>Tue, 23 Sep 2025 14:33:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45347610</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=45347610</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45347610</guid></item><item><title><![CDATA[New comment by theaniketmaurya in "Deploy your own AI vibe coding platform from Cloudflare"]]></title><description><![CDATA[
<p>Nice! They forgot to add the GitHub link for the repo, lol.<p>PS: If you're looking too - <a href="https://github.com/cloudflare/vibesdk" rel="nofollow">https://github.com/cloudflare/vibesdk</a></p>
]]></description><pubDate>Tue, 23 Sep 2025 14:32:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45347594</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=45347594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45347594</guid></item><item><title><![CDATA[New comment by theaniketmaurya in "Use One Big Server (2022)"]]></title><description><![CDATA[
<p>I’m using Neon too and upgraded to the scale-up plan today. Curious, what do you mean that they can get pricey?</p>
]]></description><pubDate>Sun, 31 Aug 2025 22:06:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45087541</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=45087541</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45087541</guid></item><item><title><![CDATA[New comment by theaniketmaurya in "Llama 3.2: Revolutionizing edge AI and vision with open, customizable models"]]></title><description><![CDATA[
<p>You can run it with LitServe. Here is the code - <a href="https://lightning.ai/lightning-ai/studios/deploy-llama-3-2-vision-with-litserve" rel="nofollow">https://lightning.ai/lightning-ai/studios/deploy-llama-3-2-v...</a></p>
]]></description><pubDate>Thu, 26 Sep 2024 11:30:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=41657116</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=41657116</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41657116</guid></item><item><title><![CDATA[New comment by theaniketmaurya in "Llama 3.2: Revolutionizing edge AI and vision with open, customizable models"]]></title><description><![CDATA[
<p>You can run it with LitServe (MPS GPU). Here is the code - <a href="https://lightning.ai/lightning-ai/studios/deploy-llama-3-2-vision-with-litserve" rel="nofollow">https://lightning.ai/lightning-ai/studios/deploy-llama-3-2-v...</a></p>
]]></description><pubDate>Thu, 26 Sep 2024 11:29:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=41657105</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=41657105</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41657105</guid></item><item><title><![CDATA[A full-stack Python model serving library]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/Lightning-AI/LitServe">https://github.com/Lightning-AI/LitServe</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41378951">https://news.ycombinator.com/item?id=41378951</a></p>
<p>Points: 8</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 28 Aug 2024 12:57:16 +0000</pubDate><link>https://github.com/Lightning-AI/LitServe</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=41378951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41378951</guid></item><item><title><![CDATA[New comment by theaniketmaurya in "LLM training in simple, pure C/CUDA"]]></title><description><![CDATA[
<p>LLM training in simple, pure C/CUDA. There is no need for 245MB of PyTorch or 107MB of CPython.</p>
]]></description><pubDate>Mon, 08 Apr 2024 20:17:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=39973269</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=39973269</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39973269</guid></item><item><title><![CDATA[LLM training in simple, pure C/CUDA]]></title><description><![CDATA[
<p>Article URL: <a href="https://twitter.com/karpathy/status/1777427944971083809">https://twitter.com/karpathy/status/1777427944971083809</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=39973268">https://news.ycombinator.com/item?id=39973268</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 08 Apr 2024 20:17:32 +0000</pubDate><link>https://twitter.com/karpathy/status/1777427944971083809</link><dc:creator>theaniketmaurya</dc:creator><comments>https://news.ycombinator.com/item?id=39973268</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39973268</guid></item></channel></rss>