<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: cyw</title><link>https://news.ycombinator.com/user?id=cyw</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 14:18:04 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=cyw" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by cyw in "[dead]"]]></title><description><![CDATA[
<p>Hello HN,<p>I have many tasks I ask my AI agent to handle, like searching for flight tickets, finding rentals on Zillow, or browsing events on lu.ma. My agent would try to crawl or use a browser (both real and headless), and as you might know, that is slow and burns a lot of tokens.<p>I remember seeing parse.bot on HN, but it is too pricey and I prefer running requests locally, so I built a CLI tool that discovers the API schemas of websites and contributes them to a public registry, so I can use them across any agent I have. I showed it to a few friends, and they really liked it and pushed me to build a platform where everyone can use discovered schemas locally and contribute new ones as well.<p>Right now the CLI is focused on public-facing actions and supports Cloudflare-protected websites (requires a browser bootstrap). I am continuing to improve discovery of schemas behind auth gates, which should come soon.<p>I have collected about 100 website schemas for you to try. My goal is to build a HuggingFace for website API schemas, so agents can interact with any website without a browser.<p>If you don't want to sign up, there is a playground in the nav menu where you can test discovered websites and see exactly how to get the data you need. The main goal is to help agents act faster and use fewer tokens. If you find this helpful, please star the repo on GitHub (it really encourages me) and contribute schemas if you want to.</p>
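<p>As a rough illustration of the idea, here is what using a registry entry might look like from an agent's side. The field names and the site are invented for this sketch, not the registry's actual format:</p>

```python
# Hypothetical schema entry -- every field name and the site itself are
# made up for illustration; the real registry format may differ.
schema = {
    "site": "example-events.com",
    "action": "search_events",
    "method": "GET",
    "url_template": "https://example-events.com/api/events?q={query}",
    "response": {"format": "json", "items_path": "results"},
}

# An agent fills in the template and hits the site's JSON API directly,
# instead of spinning up a browser and parsing rendered HTML.
url = schema["url_template"].format(query="rust+meetup")
print(url)
```

<p>The point of a shared registry is that this discovery cost is paid once per site instead of once per agent session.</p>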
]]></description><pubDate>Wed, 15 Apr 2026 20:27:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47784788</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=47784788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47784788</guid></item><item><title><![CDATA[New comment by cyw in "Tell HN: Tired of Generic Long Form A.I Posts"]]></title><description><![CDATA[
<p>hahaha, i guess it's just my nature to want to get things done correctly and not look bad? but I kinda get it now, and you can prob tell from how I am writing this comment. lol</p>
]]></description><pubDate>Thu, 12 Mar 2026 01:33:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47345129</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=47345129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47345129</guid></item><item><title><![CDATA[New comment by cyw in "[dead]"]]></title><description><![CDATA[
<p>If your main working machine is Windows and you're working with multiple coding agents in parallel, you know the pain. So I built this terminal-based IDE in Rust with GPUI. Besides the features that tmux offers (status notifications, custom layouts, and pane/window adjustment), Codirigent also lets you paste images directly into the session, and if you close the app without ending the session, it automatically resumes with the same permission mode!<p>It supports Claude Code, Codex CLI, and Gemini CLI with status detection and notifications, which wasn't a thing on Windows (or did I miss something?).<p>Let me know if you have any issues!</p>
]]></description><pubDate>Tue, 10 Mar 2026 01:29:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47318103</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=47318103</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47318103</guid></item><item><title><![CDATA[New comment by cyw in "Tell HN: Tired of Generic Long Form A.I Posts"]]></title><description><![CDATA[
<p>I used to do that, but not anymore. I now write it myself first and only ask AI to fix any grammar issues, since English is my third language, but that's it.</p>
]]></description><pubDate>Mon, 09 Mar 2026 01:33:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47303789</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=47303789</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47303789</guid></item><item><title><![CDATA[New comment by cyw in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p>I used Rust to build a terminal-based IDE for a parallel coding CLI workflow. It works with Claude Code, Codex, and Gemini!<p>My favorite features are:
- custom layouts, with drag and drop to rearrange windows
- auto-resume of the last working session on app start
- notifications
- copy and paste images directly into the Claude Code/Codex/Gemini CLI
- a file tree with right-click to insert a file path directly into the session<p>OH and it works on both Windows and macOS! Fully open source too!<p><a href="https://github.com/oso95/Codirigent" rel="nofollow">https://github.com/oso95/Codirigent</a></p>
]]></description><pubDate>Mon, 09 Mar 2026 01:30:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47303773</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=47303773</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47303773</guid></item><item><title><![CDATA[New comment by cyw in "[dead]"]]></title><description><![CDATA[
<p>Hello HN,<p>I was previously using VSCode, then switched to Zed with PowerShell. Although I have a Mac, my main working machine is Windows, so I can never use tools like Ghostty, tmux, etc. I decided to build my own terminal-based IDE (or is it an IDE?) in Rust, specifically designed for coding CLIs working in parallel.<p>My favorite features are:<p>- Custom layout: Define your layout, and you can drag a pane's header to move it wherever you like
- Notifications: No need to explain this; when you have multiple sessions running, you need it.
- Session persistence: If you close Codirigent without closing your session, the next time you reopen it, it will resume the last session with the same permission settings!
- Clipboard: Finally, you can copy and paste images directly into the terminal!
- File tree: You can just double-click files to open them in your default IDE for review. You can also right-click files to insert the file path directly instead of typing the file name each time.
- Session menus: You can rename your sessions and group them for easy visual hints.<p>I am still experimenting with my Task Board feature, which lets you pre-create numerous tasks with plan files and custom prompts, then assign them when needed.<p>This is the first time I've built in Rust, so feel free to roast me!</p>
]]></description><pubDate>Sun, 08 Mar 2026 23:26:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47302750</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=47302750</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47302750</guid></item><item><title><![CDATA[New comment by cyw in "I analyzes how different LLMs bluff, lie, and survive in the game Liar's Bar"]]></title><description><![CDATA[
<p>I came across a YouTube video where different large language models played a social deception game called Liar’s Bar, and it caught my interest. I decided to build a website that tracks and visualizes how models like GPT-5, Claude Sonnet 4.5, Gemini 2.5 Flash, Qwen Max, Deepseek R1, and Grok 4 Fast perform in this game — including full behavioral metrics, head-to-head matchups, and playstyle profiles.<p>How Liar’s Bar works<p>- Each round uses a deck of 20 cards: 6 Aces, 6 Kings, 6 Queens, and 2 Jokers.
- Every player (model) gets 5 cards. A “target card” is announced, and players take turns placing cards and bluffing.
- If a bluff is called and proven false, the liar must “play Russian roulette.” One of six revolver chambers has a live round, and it isn’t reshuffled, so the longer the game goes, the higher the risk.<p>Some interesting findings:<p>GPT-5 dominates:
- Bluff rate ≈ 48% but ~90% success, showing it knows when to lie.<p>Claude Sonnet 4.5 is analytical but cautious:
- Lowest bluff frequency among top models (34%), yet 75% lie-detection accuracy — a top “truth-sniffer.”
- Balanced archetype, often exposing bluffs but losing in final rounds due to low aggression.<p>Qwen Max barely bluffs (9%) but scores 100% bluff success and challenges often. It behaves like an over-cautious logic bot that rarely lies — surprisingly human-like in restraint.<p>Gemini 2.5 Flash is fast but inconsistent — good average rounds but low detection accuracy (22%), often losing head-to-head against stronger liars.<p>Deepseek R1 and Grok 4 Fast show moderate deception but higher risk scores, suggesting a more “shoot-first” mentality with inconsistent survival.<p>---<p>If there’s a specific matchup or metric you’d like to see, let me know and I will add it to the website.
In the future, I’m planning to let users upload their own prompts and compete against others. If that sounds interesting, I’d love to hear your thoughts or ideas.</p>
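<p>For anyone curious about the math behind the rising risk, here's a small sketch of the no-reshuffle roulette odds, assuming one live round sits among six fixed chambers that are never respun:</p>

```python
from fractions import Fraction

def survival_after(pulls: int) -> Fraction:
    """Probability of surviving `pulls` trigger pulls when one live
    round sits among six chambers and the cylinder is never respun."""
    # Surviving k pulls means the live chamber is not among the first
    # k positions: 6 - k of the 6 equally likely positions remain safe.
    return Fraction(6 - pulls, 6)

for k in range(1, 7):
    print(f"survive {k} pull(s): {survival_after(k)}")

# Each pull is unconditionally 1/6 fatal, but conditional on having
# survived so far, pull k is fatal with probability 1/(7 - k) -- which
# is why late-game bluffs are so much riskier than early ones.
```

<p>This matches the "longer the game goes, higher the risk" dynamic described above.</p>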
]]></description><pubDate>Tue, 07 Oct 2025 20:24:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=45508366</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=45508366</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45508366</guid></item><item><title><![CDATA[I analyzed how different LLMs bluff, lie, and survive in the game Liar's Bar]]></title><description><![CDATA[
<p>Article URL: <a href="https://liars-bar-one.vercel.app">https://liars-bar-one.vercel.app</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45508365">https://news.ycombinator.com/item?id=45508365</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 07 Oct 2025 20:24:47 +0000</pubDate><link>https://liars-bar-one.vercel.app</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=45508365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45508365</guid></item><item><title><![CDATA[Show HN: I Built "Vercel for Stateful AI Agents" – open-source, cost-efficient]]></title><description><![CDATA[
<p>tl;dr: Like Vercel, but for stateful AI agents. Deploy your container and instantly get an agent with persistent memory, auto-recovery, and a live API endpoint—zero infrastructure work required.<p>Hey HN, I’m Cyw, the founder of Agentainer (<a href="https://agentainer.io/" rel="nofollow">https://agentainer.io/</a>), a platform designed to deploy and manage long-running AI agents with zero DevOps. We just launched the first open source version of Agentainer: Agentainer Lab (<a href="https://github.com/oso95/Agentainer-lab">https://github.com/oso95/Agentainer-lab</a>) on GitHub.<p>Little bit of background: most infrastructure today is built for short-lived, stateless workloads—Lambda, Cloud Run, or even Kubernetes pods. But AI agents aren’t like that. They’re long-running processes with memory, history, and evolving state. Running them reliably in production usually means gluing together a bunch of services (volume mounts, retry queues, crash recovery, gateways, etc.) just to approximate what a simple web app gets out of the box.<p>To make my life easier when deploying agents for projects (both personal and work-related), I started designing an infrastructure layer that could treat agents as durable services from day one. No YAML. No juggling services. Just give it a Docker image or Dockerfile, and Agentainer handles the rest. Basically, a Vercel-like solution.<p>Each agent runs in its own isolated container, with persistent volume mounts, crash recovery, and queued request replay. If an agent crashes mid-task, it restarts and picks up where it left off. Agentainer gives every agent a clean proxy endpoint by default, so you don’t have to worry about port management or network config. Oh, if you’ve ever built long-running agents, you know how important checkpoints are—I got it taken care of already. 
(Check out: <a href="https://github.com/oso95/Agentainer-lab/blob/main/docs/RESILIENT_AGENTS.md">https://github.com/oso95/Agentainer-lab/blob/main/docs/RESIL...</a>)<p>Everything is CLI-first and API-accessible. In fact, I originally built this so my own coding agent could manage infrastructure without burning tokens repeating shell commands lol. You can deploy, restart, or remove agents programmatically—and the same flow works in dev and prod.<p>I did some math, and for the right workloads like agentic backends with frequent requests or persistent state, this architecture could reduce cloud costs significantly, even by 30~40%, by replacing per-request billing and minimizing infra sprawl. We’re still early, but excited to see what others build on top of it.<p>Anyway, right now Agentainer Lab is focused on local dev and self-hosting. The bigger Agentainer.io roadmap includes observability, audit logs, backup/restore, and full auto-scaling to unlock the full experience. If you’re interested, you can sign up for early access on our website, we’ll only send you one email when the production version launches, and then your email will be deleted from our database.<p>GitHub: <a href="https://github.com/oso95/Agentainer-lab">https://github.com/oso95/Agentainer-lab</a>
Platform: <a href="https://agentainer.io" rel="nofollow">https://agentainer.io</a><p>Would love to hear feedback from others working on LLM agents or trying to run stateful workloads in production. What’s your current setup? Do you think this can help you?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44716929">https://news.ycombinator.com/item?id=44716929</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 28 Jul 2025 23:15:36 +0000</pubDate><link>https://github.com/oso95/Agentainer-lab</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44716929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44716929</guid></item><item><title><![CDATA[New comment by cyw in "A solution I built to deploy your LLM agents into production with one click"]]></title><description><![CDATA[
<p>Hello HN,<p>I posted this project last week and received some valuable feedback. I’ve made a lot of changes since then and wanted to share it again to see what you think.<p>The idea came about while I was working on a personal project. When I tried to deploy my agent into the cloud, I ran into a lot of headaches — setting up VMs, writing config, handling crashes. I decided to build a solution for it and called it Agentainer.<p>Agentainer’s goal is to let anyone (even coding agents) deploy LLM agents into production without spending hours setting up infrastructure.<p>Here’s what Agentainer does:<p>One-click deployment: Deploy your containerized LLM agent (any language) as a Docker image<p>Lifecycle management: Start, stop, pause, resume, and auto-recover via UI or API<p>Auto-recovery: Agents restart automatically after a crash and return to their last working state<p>State persistence: Uses Redis for in-memory state and PostgreSQL for snapshots<p>Per-agent secure APIs: Each agent gets its own REST/gRPC endpoint with token-based auth and usage logging (e.g. <a href="https://agentainer.io/{agentId}/{agentEndpoint}" rel="nofollow">https://agentainer.io/{agentId}/{agentEndpoint}</a>)<p>Most cloud platforms are designed for stateless apps or short-lived functions. They’re not ideal for long-running autonomous agents. Since a lot of dev work is now being done by coding agents themselves, Agentainer exposes all platform functions through an API. That means even non-technical founders can ship their own agents into production without needing to manage infrastructure.<p>If you visit the site, you’ll find a link to our GitHub repo with a working demo that includes all the features above. You can also sign up for early access to the production version, which is launching soon.<p>We’re applying to YC and would love to hear feedback — especially from folks running agents in production or building with them now. 
If you try Agentainer Lab, I’d really appreciate any thoughts or feature suggestions.<p>Note: Agentainer doesn’t provide any LLM models or reasoning frameworks. We’re infrastructure only — you bring the agent, and we handle deployment, state, and APIs.</p>
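<p>To make the per-agent endpoint idea concrete, here is a minimal sketch of what a call could look like. The agent id, endpoint name, and Bearer-token scheme are all assumptions for illustration, not documented API details:</p>

```python
# Hypothetical request against the per-agent URL pattern described
# above; id, endpoint, and auth scheme are placeholders, not a real
# Agentainer API.
from urllib.request import Request

AGENT_ID = "demo-agent"      # placeholder agent id
ENDPOINT = "status"          # placeholder agent endpoint
TOKEN = "YOUR_AGENT_TOKEN"   # per-agent token (Bearer scheme assumed)

req = Request(
    f"https://agentainer.io/{AGENT_ID}/{ENDPOINT}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(req.full_url)  # the agent's own proxy endpoint
```

<p>Since every agent gets its own endpoint, the same pattern works whether a human or another agent is driving the call.</p>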
]]></description><pubDate>Mon, 21 Jul 2025 01:35:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44631014</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44631014</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44631014</guid></item><item><title><![CDATA[This lets you deploy your LLM agents into production with one click]]></title><description><![CDATA[
<p>Article URL: <a href="https://agentainer.io/">https://agentainer.io/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44631013">https://news.ycombinator.com/item?id=44631013</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 21 Jul 2025 01:35:45 +0000</pubDate><link>https://agentainer.io/</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44631013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44631013</guid></item><item><title><![CDATA[New comment by cyw in "Run LLM Agents as Microservices with One-Click Deployment"]]></title><description><![CDATA[
<p>I apologize for the misunderstanding; English isn't my first language, but I totally get what you mean now. We have updated our web page with a GitHub repo for a demo version, and a documentation page is available now as well. We aren't sure what would be most valuable to put on the documentation page, so if there is something you are interested in learning, please let me know! Thanks again for your time</p>
]]></description><pubDate>Fri, 18 Jul 2025 23:04:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44610761</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44610761</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44610761</guid></item><item><title><![CDATA[New comment by cyw in "Run LLM Agents as Microservices with One-Click Deployment"]]></title><description><![CDATA[
<p>Great news! We have released our demo version on GitHub along with an updated home page. Please feel free to give us some feedback after you try it!</p>
]]></description><pubDate>Fri, 18 Jul 2025 23:01:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44610742</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44610742</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44610742</guid></item><item><title><![CDATA[New comment by cyw in "Run LLM Agents as Microservices with One-Click Deployment"]]></title><description><![CDATA[
<p>You are not wrong; the website was actually built with the help of Claude Code (like I mentioned in another reply), and we missed updating its social links. We will be releasing our demo repo later this week once we get our codebase cleaned up, and then we will post it here again so we can get actual feedback on the product itself. I appreciate you not roasting my English tho lol</p>
]]></description><pubDate>Wed, 16 Jul 2025 21:39:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=44587074</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44587074</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44587074</guid></item><item><title><![CDATA[New comment by cyw in "Run LLM Agents as Microservices with One-Click Deployment"]]></title><description><![CDATA[
<p>For sure! We’re planning to release a demo version later this week once we clean up the codebase (it’s a bit messy since we hadn’t originally planned to open it up publicly). This is our first time putting something like this out, so definitely a learning opportunity. I’ll follow up with a repo link once it’s ready — appreciate the push and feedback!</p>
]]></description><pubDate>Wed, 16 Jul 2025 21:35:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=44587039</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44587039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44587039</guid></item><item><title><![CDATA[New comment by cyw in "Run LLM Agents as Microservices with One-Click Deployment"]]></title><description><![CDATA[
<p>Totally fair, appreciate the honesty. We’re still building and not quite ready yet, but wanted to start getting feedback early so we can build something people actually need while we fix our own pain point.<p>Also, yes — I did use AI to help clean up my comment because English isn’t my first language, and I didn’t want it to come across sloppy. I get how it might’ve made it sound kinda off. Thanks for pointing it out.</p>
]]></description><pubDate>Wed, 16 Jul 2025 15:42:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44583532</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44583532</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44583532</guid></item><item><title><![CDATA[New comment by cyw in "Run LLM Agents as Microservices with One-Click Deployment"]]></title><description><![CDATA[
<p>Thank you for your feedback — I can see where the concerns are coming from. We did use Claude Code to help build the landing page, which I believe is why it has that “template-ish” feel. We’ve been mostly heads-down on the backend and infrastructure, and yeah… design isn’t really our strength yet.<p>The GitHub (and other) icons linking to github.com were just a miss — they were on our checklist, but we somehow let them slip through. We’ll get that fixed right away.<p>I know a lot of startups fall into the vaporware category, but I’m confident that’s not the case here. We’re building this to solve our own problems, and we’re committed to shipping it. We really appreciate the skepticism — it's the kind of feedback we might have missed if you hadn’t pointed it out.</p>
]]></description><pubDate>Wed, 16 Jul 2025 15:37:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44583472</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44583472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44583472</guid></item><item><title><![CDATA[New comment by cyw in "Run LLM Agents as Microservices with One-Click Deployment"]]></title><description><![CDATA[
<p>That’s a great point — and you’re absolutely right.<p>Agentainer isn’t responsible for determining what to manage or why — it's not an orchestration brain or planner itself. It’s designed to enable that level of automation by giving agents the tools to act. Think of it more like the runtime and control plane that an intelligent planning agent (built by the developer) can use to execute its decisions.<p>So for example, if you’ve built a supervisor agent that analyzes workloads and spins up child agents to handle different tasks — Agentainer provides the infrastructure APIs to make that possible (create, monitor, terminate, etc.), but it’s up to you (or your planner agent) to define the logic based on business rules, goals, and evolving context.<p>We’re not building AGI — we’re just trying to remove the DevOps wall for people building toward that vision.</p>
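<p>A toy sketch of that split: the planner owns the decisions, and an infra client only executes them. The create/status/terminate interface here is hypothetical, not Agentainer's real SDK:</p>

```python
# Hypothetical supervisor pattern -- the InfraClient method names are
# invented stand-ins for whatever the real control plane exposes.
from dataclasses import dataclass, field

@dataclass
class InfraClient:
    """Stand-in control plane exposing create/monitor/terminate."""
    running: dict = field(default_factory=dict)

    def create(self, name: str, image: str) -> None:
        self.running[name] = {"image": image, "status": "running"}

    def status(self, name: str) -> str:
        return self.running[name]["status"]

    def terminate(self, name: str) -> None:
        del self.running[name]

# The supervisor agent (your code) decides *what* to run and *why*;
# the platform only carries those decisions out.
plane = InfraClient()
for task in ("scrape", "summarize"):
    plane.create(f"worker-{task}", image="my-agent:latest")

plane.terminate("worker-scrape")
print(sorted(plane.running))  # ['worker-summarize']
```

<p>All the business logic stays in the planner; swapping the stub for a real API client wouldn't change its shape.</p>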
]]></description><pubDate>Wed, 16 Jul 2025 04:23:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44578646</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44578646</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44578646</guid></item><item><title><![CDATA[New comment by cyw in "Run LLM Agents as Microservices with One-Click Deployment"]]></title><description><![CDATA[
<p>We are working on Agentainer, a platform to make deploying and managing LLM-based agents as microservices feel effortless — especially for developers who don’t want to wrestle with infrastructure. This isn’t a pitch; we're sharing a pain we’ve run into hard, and we want to know if others are feeling it too. We’ve built agents using AutoGen, LangChain, and custom setups to monitor APIs, automate tasks, or manage systems autonomously. But running these in production? It’s a mess.<p>Most cloud platforms are designed for stateless apps or short-lived functions — not long-running agents that need to:<p>- Stay alive for hours or days<p>- Recover from crashes without losing context<p>- Expose secure APIs for integrations<p>- Scale up when demand spikes<p>- Persist state across redeploys<p>Dealing with Dockerfiles, Kubernetes, and manually wiring Redis/PostgreSQL eats up too much time — time we'd rather spend improving the agent’s logic.<p>Agentainer is our attempt to fix this. It’s a platform that gives agents the runtime treatment they deserve. Highlights:<p>- One-click deployment: Upload your code or Docker image, no YAML or infra scripts. (Oh, and we designed it in a way where other AI agents can do it as well!)<p>- Lifecycle management: Start, stop, pause, resume, and auto-recover — via UI or API.<p>- Persistent state: Redis (runtime), Postgres (config), with automatic rehydration.<p>- Per-agent secure APIs: Each agent gets its own REST/gRPC endpoint with token auth and usage logging.<p>- Scaling and cloning: Horizontal scaling with optional memory cloning.<p>- Logs and metrics: Real-time logs, crash history, uptime, Prometheus-backed metrics.<p>What makes Agentainer uniquely flexible is that we expose the entire platform through APIs. This means not just you, the developer, but also your own developer agent can programmatically deploy, monitor, or retire other agents. Want a planning agent that spins up task-specific agents on demand? That’s a first-class use case. 
We’re building toward a world where autonomous agents can coordinate and manage infrastructure without human input — and Agentainer is designed with that architecture in mind.<p>We are applying to YC and would love unfiltered feedback from anyone who’s run agents in production:<p>1. What’s the hardest part of deploying or scaling agents for you?<p>2. What infrastructure or tooling would actually make your life easier?<p>3. What debugging/monitoring features would save your sanity?<p>Honest takes are super welcome. If this idea feels useful — or totally off-base — we’d love to hear why.<p>Note: Agentainer doesn’t provide any LLM models or reasoning frameworks. We’re infra-only — you bring your own agent code, and we handle the deployment, state, scaling, and API exposure.</p>
]]></description><pubDate>Wed, 16 Jul 2025 04:03:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=44578559</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44578559</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44578559</guid></item><item><title><![CDATA[Run LLM Agents as Microservices with One-Click Deployment]]></title><description><![CDATA[
<p>Article URL: <a href="https://agentainer.io/">https://agentainer.io/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44578558">https://news.ycombinator.com/item?id=44578558</a></p>
<p>Points: 6</p>
<p># Comments: 15</p>
]]></description><pubDate>Wed, 16 Jul 2025 04:03:04 +0000</pubDate><link>https://agentainer.io/</link><dc:creator>cyw</dc:creator><comments>https://news.ycombinator.com/item?id=44578558</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44578558</guid></item></channel></rss>