<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jsunderland323</title><link>https://news.ycombinator.com/user?id=jsunderland323</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 08:40:57 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jsunderland323" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jsunderland323 in "Ask HN: What Are You Working On? (April 2026)"]]></title><description><![CDATA[
<p>This is very cool. Would love to chat with you.<p>I’m working on <a href="https://coasts.dev">https://coasts.dev</a>.<p>I’ve been thinking a lot about the lightweight-VM side lately, but it’s not an area we’re going to attack ourselves. I think there’s a really good pairing between what you’re building and what we’re working on.</p>
]]></description><pubDate>Sun, 12 Apr 2026 23:57:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47745867</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47745867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47745867</guid></item><item><title><![CDATA[Horizontally scaling localhost for worktrees (FOSS)]]></title><description><![CDATA[
<p>Article URL: <a href="https://coasts.dev/blog/introducing-remote-coasts">https://coasts.dev/blog/introducing-remote-coasts</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47743285">https://news.ycombinator.com/item?id=47743285</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 12 Apr 2026 19:14:14 +0000</pubDate><link>https://coasts.dev/blog/introducing-remote-coasts</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47743285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47743285</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p>I'm not sure. The docs on claude -p are sort of ambiguous on third party usage</p>
]]></description><pubDate>Sat, 04 Apr 2026 01:58:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47634850</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47634850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47634850</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>I’m sorry I didn’t reply sooner. I’d love to hear about the diff/commit/rollback stuff you’re working on. Feel free to message me on discord or however you like (I’m pretty easy to find).<p>I wanna hear how they compose naturally.</p>
]]></description><pubDate>Wed, 01 Apr 2026 05:57:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47597331</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47597331</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47597331</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>I have waited 12 hours for someone to ask this! You are my hero.<p>So the name "hot" is a bit misleading. The containers don't actually stay alive through the switch. What happens is we do the umount -l /workspace, mount --bind, mount --make-rshared sequence first, and then we run docker compose up --force-recreate. Force-recreate skips compose down (which would tear down the network, named volumes, everything) and just swaps the container processes in place. The old containers and their file watchers are killed and new ones start up.<p>By the time the new container processes start, /workspace already points at the new worktree so all their file handles are fresh and correct. There's no window where a watcher could be writing to stale paths because the old processes are just gone.<p>I was pretty afraid of this at first too but it turns out the force-recreate sidesteps the whole problem.</p>
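<p>For anyone curious, the sequence described above can be sketched roughly like this (paths are illustrative and this is a simplification of what actually runs inside the coast, not the exact implementation):</p>

```shell
# Run inside the coast container (assumes a privileged container).
# 1) Lazily detach the old worktree; open handles die with the old processes.
umount -l /workspace

# 2) Bind-mount the new worktree and mark it rshared so the remount
#    propagates down to the inner Docker daemon's containers.
mount --bind /worktrees/feature-x /workspace   # hypothetical worktree path
mount --make-rshared /workspace

# 3) Swap the service containers in place. --force-recreate replaces the
#    container processes without a `compose down`, so networks and named
#    volumes survive the switch.
docker compose up -d --force-recreate
```

The key point is the ordering: the mount swap happens before the recreate, so new container processes only ever see the fresh /workspace.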
]]></description><pubDate>Tue, 31 Mar 2026 05:44:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47583184</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47583184</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47583184</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>Actually pretty reliably, but you do need to explicitly call out the skill. I usually start agent threads with /coasts or, in Codex, $coasts. Once it’s in the conversation they stick to it though.<p>One cool thing we do is bake the docs, and semantic search over them, into the CLI, so if the agents get lost they can usually figure things out fairly quickly by searching the docs via the CLI.<p>Also we have a little section in our agent.md and claude.md; I’m not sure how well it works without that.</p>
]]></description><pubDate>Tue, 31 Mar 2026 03:32:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47582437</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47582437</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47582437</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>I think this was a common perspective from the early Docker days with regard to local bind mounts (before Docker swapped VirtualBox for HyperKit on macOS). I do use OrbStack and have noticed faster build times with it, but I haven't really noticed any difference in runtime performance between OrbStack and Docker Desktop.</p>
]]></description><pubDate>Tue, 31 Mar 2026 03:14:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47582342</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47582342</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47582342</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>Interesting, I was not aware.<p>Well fortunately it's the name of a local observability ui and not the actual product. We'll change it if it becomes a problem.</p>
]]></description><pubDate>Mon, 30 Mar 2026 23:20:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47580911</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47580911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47580911</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>There are a couple of ways you can go about MCP within Coasts (it also depends on what the MCP does). You can install the MCP service host-side (something like Playwright), in which case everything should just work out of the box for you.<p>Alternatively, you can set up the Coast to install MCP services in the containers. There are some cases around specific logging or db MCPs where this might make sense.<p>>Would love to see this support stdio-to-HTTP bridging so local MCP servers can be exposed as remote ones without rewriting them.<p>Are you saying that if you exposed the MCP service in the Coast and hosted it remotely, you could expose the MCP service back out remotely? That's actually a pretty interesting idea. Right now, the agents basically need to exec the MCP calls if they are running host-side and need to call an inner MCP. I hadn't considered proxying stdio over HTTP. I'll think about how best to implement that!</p>
]]></description><pubDate>Mon, 30 Mar 2026 22:40:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47580611</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47580611</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47580611</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>I could definitely see that being useful for folks who are Docker-fearful or just less infra literate in general.<p>I think we're focused on the other end of the spectrum. Folks who like docker and have a good docker setup but want to have parallel runtimes. Anyway, best of luck!</p>
]]></description><pubDate>Mon, 30 Mar 2026 22:15:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47580386</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47580386</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47580386</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>We started with an approach like that, but our guiding principle has been that you shouldn't have to modify your docker-compose to get parallelized local development. We want to layer onto your existing setup, not make you rewrite your stack around us.<p>I haven't really had a bad experience with Docker on Mac, but is the idea that you basically build your service on top of specific.dev's provided services (postgres and redis), those run bare-metal locally, and then you can deploy to specific.dev's hosted solution?</p>
]]></description><pubDate>Mon, 30 Mar 2026 22:00:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47580249</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47580249</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47580249</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>So technically you could use Coasts to sandbox, but our default approach is actually not sandboxed at all. The agents still run host-side, so unless you're sandboxing the agent host-side, you're not sandboxed. With Coasts you're basically running exec commands against the coast container to extract runtime information.<p>>One thing I've been thinking about with agent infrastructure: the auth model gets complex fast when agents need to call external APIs on behalf of users. Per-key rate limiting and usage tracking at the edge (rather than in the container) has worked well for me. Curious how you’re handling the credential passing to containerized agents.<p>The way we handle secrets: at build time we let you run scripts that extract secrets and env vars host-side. The secrets get stored in a sqlite table (not baked into the coast image). When you start a coast, it injects those secrets, and you can decide how they should appear: either as env vars, or written to the write layer. You're then able to trigger a re-injection of the secrets, so you can extract them all again host-side and have them injected into every running coast. This is useful because you don't have to rebuild and re-run just to update secrets.</p>
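<p>To make the shape of that concrete, here's a hedged host-side sketch; the table schema, paths, and the extraction step are made up for illustration, not Coasts' actual internals:</p>

```shell
# Hypothetical host-side secret store: values live in a local sqlite
# table, never in the coast image itself.
DB=./secrets.db
API_KEY="example-value"   # stand-in for whatever your extraction script returns

sqlite3 "$DB" "CREATE TABLE IF NOT EXISTS secrets (name TEXT PRIMARY KEY, value TEXT);
INSERT OR REPLACE INTO secrets VALUES ('API_KEY', '$API_KEY');"

# Re-injection then re-reads this table and pushes each value into every
# running coast (as an env var or a file in the write layer) without a rebuild.
sqlite3 "$DB" "SELECT value FROM secrets WHERE name='API_KEY';"
```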
]]></description><pubDate>Mon, 30 Mar 2026 20:46:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47579507</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47579507</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47579507</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>It does not. It works through Docker Desktop, Orb Stack, or Colima on macOS.</p>
]]></description><pubDate>Mon, 30 Mar 2026 20:32:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47579362</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47579362</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47579362</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>>One thing I'm curious about: how do you handle state drift when agents are working on the same service across different worktrees? For example, if two agents are both making schema changes to a shared database service, do you have any coordination primitives, or is that left to the orchestration layer above? In my experience the runtime isolation is the easy part - the hard part is when agents need to share state (like a test database) without stepping on each other.<p>Great question! You can configure multiple coasts, so you could have a coast running with isolated dbs/state and also a shared version (you can either share the volume amongst the running coasts or move your db to run host-side as a singleton). So it's sort of left to the orchestration layer: you put rules in your md file about when to use each. There are trade-offs to each scenario. I've been using isolated dbs for integration tests, but for UI work I end up going with shared services.<p>>Re: For example, if two agents are both making schema changes to a shared database service<p>Obviously things can still go wrong in the shared scenario, but it's worked fine for us and I haven't hit anything so far. It's just like having developers introduce schema changes across feature branches.<p>>Also, the per-service strategy config (none/hot/restart/rebuild) seems like the right abstraction. Most of the overhead in switching worktrees comes from unnecessary full restarts of services that don't actually care about the code change.<p>Totally. At first, switching worktrees for our 1M+ LOC repo took about 2 minutes. Then we introduced the hot/none strategies and got it down to about 8s. This is by far one of the best features we have.</p>
]]></description><pubDate>Mon, 30 Mar 2026 19:24:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47578593</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47578593</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47578593</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>Why do you say that?<p>It works fine on Mac (that's what we developed it on) and it's not nearly as much overhead as I was initially expecting. There's probably some added latency from the virtualization layer, but it hasn't been noticeable in our usage.</p>
]]></description><pubDate>Mon, 30 Mar 2026 18:43:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47578088</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47578088</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47578088</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>Hey thanks! To be clear, it does use Docker. It's a Docker-in-Docker solution.<p>I think there are quite a few things:<p>1) You need a control plane to manage the host-side ports. Docker alone cannot do that, so you're either going to write a special docker-compose for your development environment with dynamic ports hard-coded into it, or you're going to end up writing your own custom control plane.<p>2) You can preserve your regular Docker setup without needing to alter it around dynamic ports and parallelized runtimes. I like this a lot because I want to know that my docker-compose is an approximation of production.<p>3) Docker basically leaves you with one type of strategy: docker compose up and docker compose down. With Coasts you can decide on different strategies, per service, when you switch worktrees.<p>4) This is sort of back to point 2, but more often than not you want to do things like share some services or volumes across parallelized runtimes, and Coasts makes that trivial. (You can also have multiple coast configs, so you can easily create a coast type that has isolated volumes.) If you go the pure-Docker route, you'll end up with multiple docker-composes for different scenarios that Coasts abstracts away.<p>5) The UI you get out of the box for keeping track of your assigned worktrees is super useful.<p>6) There are a lot of built-in optimizations around switching worktrees in the inner bind mount that you'd otherwise have to code up manually.<p>7) I think the ergonomics are just way better. I know that's kind of a vibes-y answer, but it was sort of the impetus for making Coasts in the first place.<p>8) There's a lot of stuff around secrets management that I think Coasts does particularly well but that can get cumbersome if you're hand-rolling a Docker solution.</p>
]]></description><pubDate>Mon, 30 Mar 2026 18:20:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47577817</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47577817</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47577817</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>Thanks!<p>Yeah, I think there's a ton of great remote solutions right now. I think worktrees make the local stuff tricky but hopefully Coasts can help you out.<p>Let me know how it goes!</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:05:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576058</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47576058</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576058</guid></item><item><title><![CDATA[New comment by jsunderland323 in "Show HN: Coasts – Containerized Hosts for Agents"]]></title><description><![CDATA[
<p>HN questions we know are coming our way:<p>1) Could you run an agent in the coast?<p>You could... sort of. We started out with this in mind. We wanted Claude Max plans to work, so we built a way to inject OAuth secrets from the host into the containerized host. Unfortunately, because the Coast runtime doesn't match the host machine the OAuth token was created on, Anthropic rapidly invalidates the OAuth tokens. This would really only work for TUIs/CLIs, and you'd almost certainly have to bring a usage key (at least for Anthropic). You would also need to figure out how to get a browser runtime into the containerized host if you wanted things like Playwright to work for your agent.<p>There are so many good host-side solutions for sandboxing. Coasts is not a sandboxing tool and we don't try to be. We should play well with all host-side sandboxing solutions though.<p>2) Why DinD, and why not mount namespaces with unshare / nsenter?<p>Yes, DinD is heavy. A core principle of our design was to run the user's docker-compose unmodified. We wanted the full Docker API inside the running containerized host. Raw mount namespaces can't provide image caches, network namespaces, and build layers without running against the host daemon or reimplementing Docker itself.<p>In practice, I've seen about 200MB of overhead for each containerized host running DinD. We have a Podman runtime in the works, which may cut that down some. But the bulk of utilization comes from the services you're running and how you decide to optimize your containerized hosts and Docker stack. We have a concept of "shared services": for example, if you don't need isolated postgres or redis, you can declare those services as shared in your Coastfile, and they'll run once on the host Docker daemon instead of being duplicated inside each containerized host; Coasts will route to them.</p>
]]></description><pubDate>Mon, 30 Mar 2026 15:18:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47575434</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47575434</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575434</guid></item><item><title><![CDATA[Show HN: Coasts – Containerized Hosts for Agents]]></title><description><![CDATA[
<p>Hi HN - We've been working on Coasts (“containerized hosts”) to make it so you can run multiple localhost instances, and multiple docker-compose runtimes, across git worktrees on the same computer. Here’s a demo: <a href="https://www.youtube.com/watch?v=yRiySdGQZZA" rel="nofollow">https://www.youtube.com/watch?v=yRiySdGQZZA</a>. There are also videos in our docs that give a good conceptual overview: <a href="https://coasts.dev/docs/learn-coasts-videos">https://coasts.dev/docs/learn-coasts-videos</a>.<p>Agents can make code changes in different worktrees in isolation, but it's hard for them to test their changes without multiple localhost runtimes that are isolated and scoped to those worktrees as well. You can do it up to a point with port hacking tricks, but it becomes impractical when you have a complex docker-compose with many services and multiple volumes.<p>We started playing with Codex and Conductor in the beginning of this year and had to come up with a bunch of hacky workarounds to give the agents access to isolated runtimes. After bastardizing our own docker-compose setup, we came up with Coasts as a way for agents to have their own runtimes without having to change your original docker-compose.<p>A containerized host (from now on we’ll just say “coast” for short) is a representation of your project's runtime, like a devcontainer but without the IDE stuff—it’s just focused on the runtime. You create a Coastfile at your project root and usually point to your project's docker-compose from there. When you run `coast build` next to the Coastfile you will get a build (essentially a docker image) that can be used to spin up multiple Docker-in-Docker runtimes of your project.<p>Once you have a coast running, you can then do things like assign it to a worktree, with `coast assign dev-1 -w worktree-1`. 
The coast will then point at the worktree-1 root.<p>Under the hood the host project root and any external worktree directories are Docker-bind-mounted into the container at creation time but the /workspace dir, where we run the services of the coast from, is a separate Linux bind mount that we create inside the running container. When switching worktrees we basically just do umount -l /workspace, mount --bind <path_to_worktree_root>, mount --make-rshared /workspace inside of the running coast. The rshared flag sets up mount propagation so that when we remount /workspace, the change flows down to the inner Docker daemon's containers.<p>The main idea is that the agents can continue to work host-side but then run exec commands against a specific coast instance if they need to test runtime changes or access runtime logs. This makes it so that we are harness agnostic and create interoperability around any agent or agent harness that runs host-side.<p>Each coast comes with its own set of dynamic ports: you define the ports you wish to expose back to the host machine in the Coastfile. You're also able to "checkout" a coast. When you do that, socat binds the canonical ports of your coast (e.g. web 3000, db 5432) to the host machine. This is useful if you have hard coded ports in your project or need to do something like test webhooks.<p>In your Coastfile you point to all the locations on your host-machine where you store your worktrees for your project (e.g. ~/.codex/worktrees). When an agent runs `coast lookup` from a host-side worktree directory, it is able to find the name of the coast instance it is running on, so it can do things like call `coast exec dev-1 make tests`. 
If your agent needs to do things like test with Playwright, it can do that host-side by using the dynamic port of your frontend.<p>You can also configure volume topologies, omit services and volumes that your agent doesn't need, and share certain services host-side so you don't add overhead to each coast instance. You can also define strategies for how each service should behave after a worktree assignment change (e.g. none, hot, restart, rebuild). This helps you optimize switching worktrees so you don't have to do a whole docker-compose down-and-up cycle every time.<p>We'd love to answer any questions and get your feedback!</p>
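<p>Pulling the commands mentioned above into one rough walkthrough (the `checkout` syntax and coast names are illustrative; check the docs for the real flags):</p>

```shell
coast build                       # build the coast image next to your Coastfile
coast assign dev-1 -w worktree-1  # point coast dev-1 at worktree-1
coast lookup                      # from a worktree dir: which coast is this?
coast exec dev-1 make tests       # run a command inside that coast's runtime
coast checkout dev-1              # bind canonical ports (e.g. 3000, 5432) to the host via socat
```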
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47575417">https://news.ycombinator.com/item?id=47575417</a></p>
<p>Points: 99</p>
<p># Comments: 38</p>
]]></description><pubDate>Mon, 30 Mar 2026 15:17:51 +0000</pubDate><link>https://github.com/coast-guard/coasts</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47575417</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575417</guid></item><item><title><![CDATA[New comment by jsunderland323 in "OpenClaw is a security nightmare dressed up as a daydream"]]></title><description><![CDATA[
<p>>Yeah, you do need to be ok with not being able to later chargeback your grocery store, just like with cash.<p>Well, the reason that works is that in grocery stores you have a concept of card-present, so the liability shifts to the issuing bank... so there are no chargebacks. Concepts like card-present and card-not-present demand a centralized authority and really can't exist in a decentralized payment rail, unless you're somehow going to invent decentralized POS hardware for merchants. But once you enter the world of atoms, you have re-introduced centralized trust into your payment rail.<p>> Convenient without poor people with no rewards cards subsidizing everyone else to cover Visa’s take.<p>I fully agree. This is a crappy part of credit cards, and the best remedy is to disallow rewards programs for credit products. This isn't a fault of the card networks; it's a fault of the issuing banks (and the airlines). Every crypto company in 2021 was offering 8% APY; you think those guys would have been better about this than Amex?<p>> Maybe we don’t need an alternative when Visa handles everything, but it might be nice to not pay a 3% markup on everything.<p>I'm actually not bothered by a take for the banks and networks involved. They are underwriting risk and affording insurances to me and the merchant. I guess my main argument is that it's good to have centralized insurance in money-transfer facilitation. 3% is high, and that's a failure of Dodd-Frank: the Durbin Amendment should have reined in credit card fees instead of just focusing on debit interchange.<p>> Alternatively, we could try to be more like India and Brazil, which each built instant bank to bank transfer setups you can use at the grocery store, without the risks that come with losing debit/credit cards.<p>I don't disagree. As you pointed out, it really comes down to the crappy reward programs from the issuing banks that make merchants and poor people suffer.<p>I don't mind crypto as an idea.
I don't have a horse in the crypto race either. What I mind is the notion that it is somehow a viable payment rail. I'm sorry, it's been 20 years and crypto's best use case for payments has been buying acid on the internet because it was the only payment option.<p>I think one of the most interesting business stories in the world is about the guy who invented the Visa network, Dee Hock. It truly is a story of decentralization at its finest. John Coogan did a great video on him a couple of years ago I highly recommend: <a href="https://www.youtube.com/watch?v=RNbi2cUZt1o" rel="nofollow">https://www.youtube.com/watch?v=RNbi2cUZt1o</a>.</p>
]]></description><pubDate>Tue, 24 Mar 2026 06:04:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47499080</link><dc:creator>jsunderland323</dc:creator><comments>https://news.ycombinator.com/item?id=47499080</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47499080</guid></item></channel></rss>