<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: GreenGames</title><link>https://news.ycombinator.com/user?id=GreenGames</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 07:18:03 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=GreenGames" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Show HN: OS Megakernel that matches M5 Max tok/W at 2x the throughput on RTX 3090]]></title><description><![CDATA[
<p>Hey there, we fused all 24 layers of Qwen3.5-0.8B (a hybrid DeltaNet + Attention model) into a single CUDA kernel launch and made it open source for everyone to try.<p>On an RTX 3090 power-limited to 220W:
- 411 tok/s vs 229 tok/s on M5 Max (1.8x)
- 1.87 tok/J, beating M5 Max efficiency
- 1.55x faster decode than llama.cpp on the same GPU
- 3.4x faster prefill<p>The RTX 3090 launched in 2020. Everyone calls it power-hungry. It isn't; the software is.
The conventional wisdom: NVIDIA is fast but thirsty. Apple Silicon is slow but sips power. Pick a side.<p>With stock frameworks, the numbers back that up:
Setup                | tok/s | Power | tok/J
RTX 3090 (llama.cpp) | 267   | 350W  | 0.76
M5 Max (LM Studio)   | 229   | ~130W | 1.76<p>Case closed. Except the 3090 has 936 GB/s of bandwidth and 142 TFLOPS of FP16 compute, and llama.cpp extracts 267 tok/s out of it. That ratio is absurd.<p>Traditional inference dispatches one kernel per operation. For 24 layers, that's roughly 100 launches per token. Every boundary means:
- Return control to the CPU
- Dispatch the next kernel
- Re-fetch weights from global memory
- Synchronize threads<p>Why had nobody done this yet?
Qwen3.5-0.8B isn't a vanilla transformer. It alternates:
- 18 DeltaNet layers: linear attention with a learned recurrence
- 6 Full Attention layers: standard MHA<p>This hybrid pattern is where frontier models are heading: Qwen3-Next, Kimi Linear, all of them. DeltaNet scales linearly with context length instead of quadratically.<p>It's new, and nobody has shipped a fused kernel for it. MLX doesn't have DeltaNet kernels at all. llama.cpp supports it generically. Everyone else is waiting. The 267 tok/s wasn't a hardware ceiling, it was the software ceiling for a brand-new architecture.<p>We wrote a single CUDA kernel that runs the entire forward pass in one dispatch. Data stays in registers and shared memory as it flows through the network. Zero CPU round-trips, zero redundant memory fetches.<p>- 82 blocks x 512 threads, all SMs occupied
- BF16 weights and activations, FP32 accumulation
- DeltaNet recurrence runs in warp-cooperative FP32 registers
- Full attention fuses QKV, RoPE, causal softmax, and output projection
- Cooperative grid sync replaces kernel launches between layers<p>Results on the same RTX 3090, same model, same weights:
Setup          | Prefill (pp520) | Decode (tg128)
Megakernel     | 37,800 tok/s    | 413 tok/s
llama.cpp BF16 | 11,247 tok/s    | 267 tok/s
PyTorch + HF   | 7,578 tok/s     | 108 tok/s<p>Then we turned the power down.
Fewer wasted cycles means less heat, so we swept nvidia-smi -pl:
Power limit  | Clock    | Draw | tok/s | tok/J | Notes
420W (stock) | 1980 MHz | 314W | 433   | 1.38  | baseline
300W         | 1935 MHz | 299W | 432   | 1.44  | -5% power, 99.8% speed
220W         | 1635 MHz | 220W | 411   | 1.87  | -30% power, 95% speed
150W         | 405 MHz  | 150W | 194   | 1.29  | clock cliff, too aggressive<p>At 220W we hit the sweet spot: 95% of the throughput for 70% of the power. Tighter execution converts almost directly into saved watts.
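The tok/J figures above reduce to simple arithmetic over two snapshots of NVML's cumulative energy counter. A minimal sketch (the helper function and the commented pynvml calls are illustrative, not from the repo; `nvmlDeviceGetTotalEnergyConsumption` reports millijoules since driver load and requires a Volta-or-newer GPU):

```python
def tokens_per_joule(tokens_generated, energy_start_mj, energy_end_mj):
    """Efficiency from two snapshots of NVML's cumulative energy counter (millijoules)."""
    joules = (energy_end_mj - energy_start_mj) / 1000.0
    return tokens_generated / joules

# On a real GPU the snapshots would come from pynvml (assumption: Volta+ card):
#   import pynvml
#   pynvml.nvmlInit()
#   h = pynvml.nvmlDeviceGetHandleByIndex(0)
#   start = pynvml.nvmlDeviceGetTotalEnergyConsumption(h)  # mJ since driver load
#   ... run the decode benchmark ...
#   end = pynvml.nvmlDeviceGetTotalEnergyConsumption(h)

# Sanity check against the 220W row: 411 tok/s for 60 s is 24,660 tokens,
# and 220 W for 60 s is 13,200 J = 13,200,000 mJ consumed.
print(round(tokens_per_joule(24660, 0, 13_200_000), 2))  # ~1.87 tok/J
```

The same two-snapshot pattern works at each `nvidia-smi -pl` setting, which is all the sweep in the table needs.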
Measurement: NVML energy counters for NVIDIA, powermetrics for Apple Silicon, matching Hazy Research's Intelligence Per Watt methodology. Accelerator power only, not wall draw.<p>Without the megakernel the 3090 barely edges out a laptop chip. With it, a five-year-old GPU beats Apple's latest on throughput, matches it on efficiency, and costs a quarter as much.
The NVIDIA vs Apple efficiency gap isn't silicon. It's software.<p>Try it
git clone <a href="https://github.com/Luce-Org/luce-megakernel.git" rel="nofollow">https://github.com/Luce-Org/luce-megakernel.git</a>
cd luce-megakernel
pip install -e .
python bench_pp_tg.py<p>Requires: NVIDIA Ampere+ (tested on 3090), CUDA 12+, PyTorch 2.0+, ~1.5GB VRAM.<p>Code is open source (MIT): <a href="https://github.com/Luce-Org/luce-megakernel" rel="nofollow">https://github.com/Luce-Org/luce-megakernel</a><p>Let us know if you have any feedback!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47691182">https://news.ycombinator.com/item?id=47691182</a></p>
<p>Points: 5</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 08 Apr 2026 15:00:51 +0000</pubDate><link>https://github.com/Luce-Org/luce-megakernel</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=47691182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47691182</guid></item><item><title><![CDATA[New comment by GreenGames in "Why AI code fails differently: What I learned talking to 200 engineering teams"]]></title><description><![CDATA[
<p>Super interesting take, Paul. Curious btw, how are these teams actually encoding their “institutional knowledge” into constraints? Like is it some manual config, or more like natural‑language rules that evolve with the codebase?</p>
]]></description><pubDate>Wed, 12 Nov 2025 15:45:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45901577</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=45901577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45901577</guid></item><item><title><![CDATA[New comment by GreenGames in "App-Use, Control Individual Applications with CUA Agents"]]></title><description><![CDATA[
<p>Hi there, Alessandro and Francesco here. We just launched an experimental feature in C/ua called App-Use. It lets you create virtual desktops scoped to specific apps (e.g., "Safari and Notes only") to give your agents focused, lightweight control without full-screen access.<p>Use cases:<p>- Run multiple agents in parallel with isolated app views<p>- Automate your iPhone using the iPhone Mirroring app<p>- Improve agent task precision and reduce VLM distractions<p>Works only on macOS (Sequoia+) and requires experiments=["app-use"]. No extra processes, just clever compositing.<p>More details: <a href="https://www.trycua.com/blog/app-use">https://www.trycua.com/blog/app-use</a><p>Feedback and experiments welcome!</p>
]]></description><pubDate>Tue, 17 Jun 2025 16:17:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44300813</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=44300813</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44300813</guid></item><item><title><![CDATA[App-Use, Control Individual Applications with CUA Agents]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.trycua.com/blog/app-use">https://www.trycua.com/blog/app-use</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44300812">https://news.ycombinator.com/item?id=44300812</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 17 Jun 2025 16:17:25 +0000</pubDate><link>https://www.trycua.com/blog/app-use</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=44300812</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44300812</guid></item><item><title><![CDATA[Show HN: Lumier – Run macOS VMs in a Docker Container]]></title><description><![CDATA[
<p>Hey HN, we're excited to share Lumier (<a href="https://github.com/trycua/cua/tree/main/libs/lumier">https://github.com/trycua/cua/tree/main/libs/lumier</a>), an open-source tool for running macOS and Linux virtual machines in Docker containers on Apple Silicon Macs.<p>When building virtualized environments for AI agents, we needed a reproducible way to package and distribute macOS VMs. Inspired by projects like dockur/windows (<a href="https://github.com/dockur/windows">https://github.com/dockur/windows</a>) that pioneered running Windows in Docker, we wanted to create something similar but optimized for Apple Silicon. The existing solutions either didn't support M-series chips or relied on KVM/Intel emulation, which was slow and cumbersome. We realized we could leverage Apple's Virtualization Framework to create a much better experience.<p>Lumier takes a different approach: it uses Docker as a delivery mechanism (not for isolation) and connects to a lightweight virtualization service (lume) running on your Mac. This creates true hardware-accelerated VMs using Apple's native virtualization capabilities.<p>With Lumier, you can: 
- Launch a ready-to-use macOS VM in minutes with zero manual setup
- Access your VM through any web browser via VNC
- Share files between your host and VM effortlessly
- Use persistent storage or ephemeral mode for quick tests
- Automate VM startup with custom scripts<p>All of this works natively on Apple Silicon (M1/M2/M3/M4) - no emulation required.<p>To get started:<p>1. Install Docker for Apple Silicon: <a href="https://desktop.docker.com/mac/main/arm64/Docker.dmg" rel="nofollow">https://desktop.docker.com/mac/main/arm64/Docker.dmg</a><p>2. Install lume background service with our one-liner:<p><pre><code>  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/trycua/cua/main/libs/lume/scripts/install.sh)"
</code></pre>
3. Start a VM (ephemeral mode):<p><pre><code>  docker run -it --rm \
    --name lumier-vm \
    -p 8006:8006 \
    -e VM_NAME=lumier-vm \
    -e VERSION=ghcr.io/trycua/macos-sequoia-cua:latest \
    -e CPU_CORES=4 \
    -e RAM_SIZE=8192 \
    trycua/lumier:latest
</code></pre>
4. Open <a href="http://localhost:8006/vnc.html" rel="nofollow">http://localhost:8006/vnc.html</a> in your browser. The container will generate a unique password for each VM instance - you'll see it in the container logs.<p>For persistent storage (so your changes survive container restarts):<p>mkdir -p storage
docker run -it --rm \
  --name lumier-vm \
  -p 8006:8006 \
  -v $(pwd)/storage:/storage \
  -e VM_NAME=lumier-vm \
  -e HOST_STORAGE_PATH=$(pwd)/storage \
  trycua/lumier:latest<p>Want to share files with your VM? Just add another volume:<p>mkdir -p shared
docker run ... -v $(pwd)/shared:/shared -e HOST_SHARED_PATH=$(pwd)/shared ...<p>You can even automate VM startup by placing an on-logon.sh script in shared/lifecycle/.<p>We're seeing people use Lumier for:
- Development and testing environments that need macOS
- CI/CD pipelines for Apple platform apps
- Disposable macOS instances for security research
- Automated UI testing across macOS versions
- Running AI agents in isolated environments<p>Lumier is 100% open-source under the MIT license. We're actively developing it as part of our work on C/ua (<a href="https://github.com/trycua/cua">https://github.com/trycua/cua</a>), and we'd love your feedback, bug reports, or feature ideas.<p>We'll be here to answer any technical questions and look forward to your comments!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43985624">https://news.ycombinator.com/item?id=43985624</a></p>
<p>Points: 159</p>
<p># Comments: 52</p>
]]></description><pubDate>Wed, 14 May 2025 15:19:41 +0000</pubDate><link>https://github.com/trycua/cua/tree/main/libs/lumier</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43985624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43985624</guid></item><item><title><![CDATA[New comment by GreenGames in "Microsoft is reportedly about to lay off 3% of its workforce"]]></title><description><![CDATA[
<p>Sorry, my bad, I didn’t see that. Should I cancel it?</p>
]]></description><pubDate>Tue, 13 May 2025 18:07:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=43975876</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43975876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43975876</guid></item><item><title><![CDATA[Microsoft is reportedly about to lay off 3% of its workforce]]></title><description><![CDATA[
<p>Article URL: <a href="https://techcrunch.com/2025/05/13/microsoft-is-reportedly-about-to-lay-off-3-of-its-workforce/">https://techcrunch.com/2025/05/13/microsoft-is-reportedly-about-to-lay-off-3-of-its-workforce/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43975620">https://news.ycombinator.com/item?id=43975620</a></p>
<p>Points: 13</p>
<p># Comments: 2</p>
]]></description><pubDate>Tue, 13 May 2025 17:44:04 +0000</pubDate><link>https://techcrunch.com/2025/05/13/microsoft-is-reportedly-about-to-lay-off-3-of-its-workforce/</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43975620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43975620</guid></item><item><title><![CDATA[Polaris is giving free GPUs/CPUs for everyone]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.polariscloud.ai/#Home">https://www.polariscloud.ai/#Home</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43968685">https://news.ycombinator.com/item?id=43968685</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 13 May 2025 00:23:57 +0000</pubDate><link>https://www.polariscloud.ai/#Home</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43968685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43968685</guid></item><item><title><![CDATA[Improvements in reasoning AI models may slow down soon, analysis finds]]></title><description><![CDATA[
<p>Article URL: <a href="https://techcrunch.com/2025/05/12/improvements-in-reasoning-ai-models-may-slow-down-soon-analysis-finds/">https://techcrunch.com/2025/05/12/improvements-in-reasoning-ai-models-may-slow-down-soon-analysis-finds/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43968575">https://news.ycombinator.com/item?id=43968575</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 13 May 2025 00:01:59 +0000</pubDate><link>https://techcrunch.com/2025/05/12/improvements-in-reasoning-ai-models-may-slow-down-soon-analysis-finds/</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43968575</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43968575</guid></item><item><title><![CDATA[New comment by GreenGames in "Klarna changes its AI tune and again recruits humans for customer service"]]></title><description><![CDATA[
<p>I don't understand what Klarna is doing here lol</p>
]]></description><pubDate>Sun, 11 May 2025 17:59:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=43955589</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43955589</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43955589</guid></item><item><title><![CDATA[Microsoft and OpenAI are renegotiating their partnership]]></title><description><![CDATA[
<p>Article URL: <a href="https://techcrunch.com/2025/05/11/microsoft-and-openai-may-be-renegotiating-their-partnership/">https://techcrunch.com/2025/05/11/microsoft-and-openai-may-be-renegotiating-their-partnership/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43955582">https://news.ycombinator.com/item?id=43955582</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 11 May 2025 17:59:02 +0000</pubDate><link>https://techcrunch.com/2025/05/11/microsoft-and-openai-may-be-renegotiating-their-partnership/</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43955582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43955582</guid></item><item><title><![CDATA[Apple developing new chips for smart glasses, Macs, and more]]></title><description><![CDATA[
<p>Article URL: <a href="https://techcrunch.com/2025/05/09/apple-said-to-be-developing-new-chips-for-smart-glasses-macs-and-more/">https://techcrunch.com/2025/05/09/apple-said-to-be-developing-new-chips-for-smart-glasses-macs-and-more/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43940630">https://news.ycombinator.com/item?id=43940630</a></p>
<p>Points: 6</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 09 May 2025 20:33:17 +0000</pubDate><link>https://techcrunch.com/2025/05/09/apple-said-to-be-developing-new-chips-for-smart-glasses-macs-and-more/</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43940630</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43940630</guid></item><item><title><![CDATA[New comment by GreenGames in "[dead]"]]></title><description><![CDATA[
<p>Sundar just started following cursor on twitter lol</p>
]]></description><pubDate>Tue, 06 May 2025 20:32:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=43909355</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43909355</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43909355</guid></item><item><title><![CDATA[New comment by GreenGames in "Show HN: Web-eval-agent – Let the coding agent debug itself"]]></title><description><![CDATA[
<p>Oh okay thanks - that would be fire tbh</p>
]]></description><pubDate>Mon, 28 Apr 2025 16:42:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=43823374</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43823374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43823374</guid></item><item><title><![CDATA[New comment by GreenGames in "Show HN: Web-eval-agent – Let the coding agent debug itself"]]></title><description><![CDATA[
<p>This is very cool! Does your MCP server preserve cookies/localStorage between steps, or would developers need to manually script auth handshakes?</p>
]]></description><pubDate>Mon, 28 Apr 2025 16:18:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=43823137</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43823137</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43823137</guid></item><item><title><![CDATA[New comment by GreenGames in "Abusing AI pull request bots for fun and profit"]]></title><description><![CDATA[
<p>Very cool! I know they're all a bit different, but which one surprised you the most, and which would you recommend really trying out?</p>
]]></description><pubDate>Sat, 26 Apr 2025 16:08:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=43804837</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43804837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43804837</guid></item><item><title><![CDATA[Show HN: Tlume – a CLI tool that converts Tart VM images for Lume]]></title><description><![CDATA[
<p>Shoutout to our amazing contributor @iaktech for building tlume - a CLI tool that converts Tart VM images into the format expected by Lume!<p>tlume will:<p>-  Locate your Tart VM in ~/.tart/vms/<p>- Create a Lume VM in ~/.lume/<p>- Copy all VM files with optimized buffering<p>- Convert the Tart config to Lume format<p>- Preserve all your VM hardware settings, network config, and display settings<p>This new bridge interface will greatly simplify adoption for those with existing Tart images and help commoditize Apple Virtualization.framework solutions across the ecosystem!<p>Lume repo: <a href="https://github.com/trycua/cua/tree/main/libs/lume" rel="nofollow">https://github.com/trycua/cua/tree/main/libs/lume</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43445122">https://news.ycombinator.com/item?id=43445122</a></p>
<p>Points: 5</p>
<p># Comments: 2</p>
]]></description><pubDate>Sat, 22 Mar 2025 11:48:51 +0000</pubDate><link>https://github.com/aktech/tlume</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=43445122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43445122</guid></item><item><title><![CDATA[Show HN: Lume – OS lightweight CLI for MacOS and Linux VMs on Apple Silicon]]></title><description><![CDATA[
<p>We just open-sourced Lume - a tool we built after hitting walls with existing virtualization options on Apple Silicon. No GUI, no complex stacks - just a single binary that lets you spin up macOS or Linux VMs via CLI or API.<p>Why we built Lume:
- Run native macOS VMs in 1 command, using Apple Virtualization.Framework: `lume run macos-sequoia-vanilla:latest`<p>- Prebuilt images on <a href="https://ghcr.io/trycua" rel="nofollow">https://ghcr.io/trycua</a> (macOS, Ubuntu on ARM)<p>- API server to manage VMs programmatically `POST /lume/vms`<p>- A python SDK on github.com/trycua/pylume<p>Run prebuilt macOS images in just 1 step:
lume run macos-sequoia-vanilla:latest<p>How to Install:<p>brew tap trycua/lume<p>brew install lume<p>You can also download the `lume.pkg.tar.gz` archive from the latest release <a href="https://github.com/trycua/lume/releases">https://github.com/trycua/lume/releases</a>, extract it, and install the package manually.<p>Local API Server:
`lume` exposes a local HTTP API server that listens on `<a href="http://localhost:3000/lume" rel="nofollow">http://localhost:3000/lume</a>`, enabling automated management of VMs.<p>lume serve<p>For detailed API documentation, please refer to API Reference(<a href="https://github.com/trycua/lume/blob/main/docs/API-Reference.md">https://github.com/trycua/lume/blob/main/docs/API-Reference....</a>).<p>HN devs - would love raw feedback on the API design and whether this solves your Apple Silicon VM pain points. What would make you replace UTM/Multipass/Docker Desktop with this?<p>Repo: <a href="https://github.com/trycua/lume">https://github.com/trycua/lume</a>
Python SDK: github.com/trycua/pylume
Discord for direct feedback: <a href="https://discord.gg/8p56E2KJ" rel="nofollow">https://discord.gg/8p56E2KJ</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42908061">https://news.ycombinator.com/item?id=42908061</a></p>
<p>Points: 309</p>
<p># Comments: 75</p>
]]></description><pubDate>Sun, 02 Feb 2025 11:46:22 +0000</pubDate><link>https://github.com/trycua/lume</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=42908061</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42908061</guid></item><item><title><![CDATA[Show HN: List of AI Agents]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/francedot/acu">https://github.com/francedot/acu</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42725367">https://news.ycombinator.com/item?id=42725367</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 16 Jan 2025 14:15:50 +0000</pubDate><link>https://github.com/francedot/acu</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=42725367</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42725367</guid></item><item><title><![CDATA[A curated list of AI agents for Computer Use]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/francedot/acu">https://github.com/francedot/acu</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42697911">https://news.ycombinator.com/item?id=42697911</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 14 Jan 2025 14:54:13 +0000</pubDate><link>https://github.com/francedot/acu</link><dc:creator>GreenGames</dc:creator><comments>https://news.ycombinator.com/item?id=42697911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42697911</guid></item></channel></rss>