<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: lukebuehler</title><link>https://news.ycombinator.com/user?id=lukebuehler</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 13 May 2026 14:42:30 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=lukebuehler" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by lukebuehler in "One Startup Is Gambling. Ten Is Mathematics"]]></title><description><![CDATA[
<p>This is why taking investments too early is usually a mistake. It locks you into one hand. Of course, you can pivot, but it is much harder with investors looking over your shoulder.</p>
]]></description><pubDate>Wed, 13 May 2026 07:16:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=48118793</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=48118793</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48118793</guid></item><item><title><![CDATA[New comment by lukebuehler in "A Critique of Cybernetics, by Hans Jonas [pdf]"]]></title><description><![CDATA[
<p>Still one of the best critiques of AI agents, even today.</p>
]]></description><pubDate>Wed, 13 May 2026 07:00:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=48118693</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=48118693</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48118693</guid></item><item><title><![CDATA[A Critique of Cybernetics, by Hans Jonas (1953) [pdf]]]></title><description><![CDATA[
<p>Article URL: <a href="https://s3.amazonaws.com/arena-attachments/892605/f0747c7943bec99f4891969d4a808ecd.pdf">https://s3.amazonaws.com/arena-attachments/892605/f0747c7943bec99f4891969d4a808ecd.pdf</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48118692">https://news.ycombinator.com/item?id=48118692</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 13 May 2026 07:00:42 +0000</pubDate><link>https://s3.amazonaws.com/arena-attachments/892605/f0747c7943bec99f4891969d4a808ecd.pdf</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=48118692</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48118692</guid></item><item><title><![CDATA[New comment by lukebuehler in "Why senior developers fail to communicate their expertise"]]></title><description><![CDATA[
<p>I keep saying this is the single most important article to consider when talking about AI-assisted software building. Everyone should read it. The question should always be: is a human building a theory of the software, or does only the AI understand it? If it's the latter, it is certainly slop.<p>(Second, albeit more theoretical, would be A Critique of Cybernetics by Jonas)</p>
]]></description><pubDate>Wed, 13 May 2026 06:57:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=48118671</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=48118671</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48118671</guid></item><item><title><![CDATA[New comment by lukebuehler in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>See A Canticle for Leibowitz</p>
]]></description><pubDate>Sun, 10 May 2026 07:50:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=48081890</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=48081890</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48081890</guid></item><item><title><![CDATA[New comment by lukebuehler in "HyperAgents: Self-referential self-improving agents"]]></title><description><![CDATA[
<p>It's a tradeoff. Technically, you need very few programs: you can let an agent do everything and coordinate everything. But that is also inefficient; it's slow and uses a lot of tokens. So you allow the agent to build tools and coordinate those tools, just like we humans do. However, with agents, the threshold of pain is much higher: we can let agents do things "manually" where humans would have built automations much sooner.</p>
]]></description><pubDate>Fri, 27 Mar 2026 15:42:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47544146</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=47544146</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47544146</guid></item><item><title><![CDATA[New comment by lukebuehler in "HyperAgents: Self-referential self-improving agents"]]></title><description><![CDATA[
<p>Agree. It's code all the way down. The key is to give agents a substrate where they can code up new capabilities and then compose them meaningfully and safely.<p>Larger composition, though, starts to run into typical software design problems, like dependency graphs, shared state, how to upgrade, etc.<p>I've been working on this front for over two years now too:
<a href="https://github.com/smartcomputer-ai/agent-os/" rel="nofollow">https://github.com/smartcomputer-ai/agent-os/</a></p>
]]></description><pubDate>Thu, 26 Mar 2026 19:00:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47534304</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=47534304</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47534304</guid></item><item><title><![CDATA[New comment by lukebuehler in "I built Timeframe, our family e-paper dashboard"]]></title><description><![CDATA[
<p>Wall-mounted dashboards are a huge life-hack, especially if you have a family. We got a 37-inch touchscreen one, running DAKBoard.<p>We have several kids and have been organizing our daily todos and calendars on it for several years. We used to drop the ball quite a bit due to a hectic schedule and the dashboard has helped us tremendously. Since it is mounted in the kitchen, being able to pull up recipes is a plus.</p>
]]></description><pubDate>Sun, 22 Feb 2026 20:47:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47114506</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=47114506</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47114506</guid></item><item><title><![CDATA[New comment by lukebuehler in "IronClaw: a Rust-based clawd that runs tools in isolated WASM sandboxes"]]></title><description><![CDATA[
<p>I think sandboxes are useful, but not sufficient. The whole agent runtime has to be designed to carefully manage I/O effects--and capability-gate them. I'm working on this here [0]. There are some similarities between my project and what IronClaw and many other sandboxes are doing, but I think we really gotta think bigger and broader to make this work.<p>[0] <a href="https://github.com/smartcomputer-ai/agent-os/" rel="nofollow">https://github.com/smartcomputer-ai/agent-os/</a></p>
]]></description><pubDate>Fri, 13 Feb 2026 20:34:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47007471</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=47007471</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47007471</guid></item><item><title><![CDATA[New comment by lukebuehler in "Software factories and the agentic moment"]]></title><description><![CDATA[
<p>The spec is pretty good! Within a day, Codex has written a good chunk of the attractor stack for me: <a href="https://github.com/smartcomputer-ai/forge" rel="nofollow">https://github.com/smartcomputer-ai/forge</a></p>
]]></description><pubDate>Mon, 09 Feb 2026 19:24:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46949723</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=46949723</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46949723</guid></item><item><title><![CDATA[New comment by lukebuehler in "Software factories and the agentic moment"]]></title><description><![CDATA[
<p>I started a full implementation of the attractor spec here: <a href="https://github.com/smartcomputer-ai/forge" rel="nofollow">https://github.com/smartcomputer-ai/forge</a></p>
]]></description><pubDate>Mon, 09 Feb 2026 19:20:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46949663</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=46949663</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46949663</guid></item><item><title><![CDATA[New comment by lukebuehler in "Ask HN: What are you working on? (January 2026)"]]></title><description><![CDATA[
<p>Thanks! NixOS is great at building and configuring systems, while AgentOS is about running and governing long-lived, deterministic agent worlds. They share ideas like immutability and declarative state, but they operate at different layers. I would say if NixOS is about reproducibly constructing a system, AgentOS is about reproducibly operating one: tracking decisions, effects, and evolution over time.</p>
]]></description><pubDate>Mon, 12 Jan 2026 06:38:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46584881</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=46584881</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46584881</guid></item><item><title><![CDATA[New comment by lukebuehler in "Ask HN: What are you working on? (January 2026)"]]></title><description><![CDATA[
<p>I’m building AgentOS [1], trying to experiment where agent substrates/sandboxes will head next. It's a deterministic, event-sourced runtime where an “agent world” is replayable from its log, heavy logic runs in sandboxed WASM modules, and every real-world side effect (HTTP, LLM calls, code compilations, etc.) is explicitly capability-gated and recorded as signed receipts. It ensures that upgrades and automations are auditable, reversible, and composable. The fun bit is a small typed control-plane intermediate representation (AIR) that lets the system treat its own schemas/modules/plans/policies as data and evolve via a governed loop (propose > shadow-run > approve > apply), kind of “Lisp machine vibes” but aimed at agents that need reliable self-modification rather than ambient scripts.<p>[1] <a href="https://github.com/smartcomputer-ai/agent-os" rel="nofollow">https://github.com/smartcomputer-ai/agent-os</a></p>
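<p>A minimal sketch of the core idea described above (assumed names, not the actual AgentOS API): every side effect must pass a capability check, gets a receipt (here a plain hash standing in for a signature), and is appended to an event log from which state can be deterministically replayed.</p>

```python
# Illustrative sketch only: an event-sourced, capability-gated effect loop.
# "World", "perform", and "replay" are hypothetical names for this example.
import hashlib
import json

class World:
    def __init__(self, capabilities):
        self.capabilities = set(capabilities)  # e.g. {"http", "llm"}
        self.log = []                          # append-only event log

    def perform(self, effect, payload):
        # Capability gate: refuse any effect that was not explicitly granted.
        if effect not in self.capabilities:
            raise PermissionError(f"capability '{effect}' not granted")
        event = {"effect": effect, "payload": payload}
        # Stand-in for a signed receipt: hash of the canonical event encoding.
        receipt = hashlib.sha256(
            json.dumps(event, sort_keys=True).encode()
        ).hexdigest()
        self.log.append({**event, "receipt": receipt})
        return receipt

    def replay(self):
        # Deterministic replay: re-derive the effect history purely from the log.
        return [(e["effect"], e["payload"]) for e in self.log]

world = World({"http"})
world.perform("http", {"url": "https://example.com"})
print(world.replay())  # [('http', {'url': 'https://example.com'})]
```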
]]></description><pubDate>Sun, 11 Jan 2026 18:35:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46578312</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=46578312</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46578312</guid></item><item><title><![CDATA[New comment by lukebuehler in "Ask HN: What Are You Working On? (December 2025)"]]></title><description><![CDATA[
<p>Excellent article, and I fully agree.<p>I came to the same realization a while ago and started building an agent runtime designed to ensure all (I/O) effects are capability bound and validated by policies, while also allowing the agent to modify itself.<p><a href="https://github.com/smartcomputer-ai/agent-os/" rel="nofollow">https://github.com/smartcomputer-ai/agent-os/</a></p>
]]></description><pubDate>Mon, 15 Dec 2025 07:15:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46271334</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=46271334</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46271334</guid></item><item><title><![CDATA[New comment by lukebuehler in "Search tool that only returns content created before ChatGPT's public release"]]></title><description><![CDATA[
<p>Arguably this is already happening, with much human-to-human interaction moving to private groups on Signal, WhatsApp, Telegram, etc.</p>
]]></description><pubDate>Mon, 01 Dec 2025 11:18:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46106078</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=46106078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46106078</guid></item><item><title><![CDATA[New comment by lukebuehler in "What OpenAI did when ChatGPT users lost touch with reality"]]></title><description><![CDATA[
<p>High-hanging fruit!</p>
]]></description><pubDate>Tue, 25 Nov 2025 09:26:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46044019</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=46044019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46044019</guid></item><item><title><![CDATA[New comment by lukebuehler in "Ask HN: What Are You Working On? (Nov 2025)"]]></title><description><![CDATA[
<p>Eternal Vault is interesting. I would for sure use something like this, but only if there is a strong story about how the vault will survive 20+ years, even if your company is defunct. I do see the pieces scattered around the website (backup to Dropbox, etc.), but this story needs to be front and center.</p>
]]></description><pubDate>Mon, 10 Nov 2025 08:42:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45873856</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=45873856</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45873856</guid></item><item><title><![CDATA[New comment by lukebuehler in "Ask HN: What Are You Working On? (Nov 2025)"]]></title><description><![CDATA[
<p>A low(er)-level agent runtime: <a href="https://github.com/smartcomputer-ai/agent-os/" rel="nofollow">https://github.com/smartcomputer-ai/agent-os/</a><p>AgentOS is a Lisp-machine-inspired runtime where agents can safely propose, simulate, and apply changes to their own code, policies, and workflows, all under governance, with full audit trails. Every external action produces a signed receipt. Every state change is replayable from an event log.</p>
]]></description><pubDate>Mon, 10 Nov 2025 07:30:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45873377</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=45873377</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45873377</guid></item><item><title><![CDATA[New comment by lukebuehler in "I Work Best Under Stress (and My Family Pays for It)"]]></title><description><![CDATA[
<p>Woah, this describes me quite accurately.<p>It actually took me quite a long time to learn this about myself. I do need a baseline of pressure to get the juices flowing. If pressure falls below that baseline, my productivity tanks.<p>I'm also just starting to learn how to deal with the downside for my family. It's hard. I can very much relate to the yo-yo.</p>
]]></description><pubDate>Fri, 07 Nov 2025 15:05:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=45847145</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=45847145</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45847145</guid></item><item><title><![CDATA[New comment by lukebuehler in "The Case That A.I. Is Thinking"]]></title><description><![CDATA[
<p>I do think it raises interesting and important philosophical questions. Just look at all the literature around the Turing test--both supporters and detractors. This was a fruitful avenue for talking about intelligence even before the advent of GPT.</p>
]]></description><pubDate>Tue, 04 Nov 2025 13:05:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45810570</link><dc:creator>lukebuehler</dc:creator><comments>https://news.ycombinator.com/item?id=45810570</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45810570</guid></item></channel></rss>