<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tao_oat</title><link>https://news.ycombinator.com/user?id=tao_oat</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 03:43:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tao_oat" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tao_oat in "NanoClaw's architecture is a masterclass in doing less"]]></title><description><![CDATA[
<p>Unfortunately this has all the hallmarks of AI writing, which made me a lot less motivated to read it.</p>
]]></description><pubDate>Tue, 07 Apr 2026 15:43:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47677094</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47677094</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47677094</guid></item><item><title><![CDATA[New comment by tao_oat in "Dropping Cloudflare for Bunny.net"]]></title><description><![CDATA[
<p>I tried to move my sites to Bunny Edge Scripting and found the experience mostly poor, unfortunately. A lot of failures without error logs, and purging the pull zone cache only seemed to work sometimes. A shame because I like their offering otherwise.</p>
]]></description><pubDate>Tue, 07 Apr 2026 13:52:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47675405</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47675405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675405</guid></item><item><title><![CDATA[New comment by tao_oat in "OpenClaw privilege escalation vulnerability"]]></title><description><![CDATA[
<p>Relevant: <a href="https://days-since-openclaw-cve.com/" rel="nofollow">https://days-since-openclaw-cve.com/</a><p>Currently we're at 1.8 CVEs per day since OpenClaw launched!</p>
]]></description><pubDate>Sat, 04 Apr 2026 09:08:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47637309</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47637309</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47637309</guid></item><item><title><![CDATA[New comment by tao_oat in "A Rave Review of Superpowers (For Claude Code)"]]></title><description><![CDATA[
<p>Overall I think it's useful.<p>Superpowers has several skills. Its core workflow is:<p>- Brainstorm with you to design a spec<p>- Use subagents to review its own spec, then get your approval<p>- Based on the spec, write a plan, use subagents to review before final approval<p>- Use subagents to implement (using TDD)<p>I think that the brainstorming skill [1] is great. It helps flesh out a rough early idea. I also like that it uses subagents to adversarially review its own spec/plan; that has caught several things I would've missed. I do <i>not</i> like the separation of spec/plan; IMO the models are good enough to get straight to coding once the spec is written. The plan often ends up being code blocks in a Markdown doc.<p>[1]: <a href="https://github.com/obra/superpowers/tree/main/skills/brainstorming" rel="nofollow">https://github.com/obra/superpowers/tree/main/skills/brainst...</a></p>
]]></description><pubDate>Fri, 03 Apr 2026 10:04:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47624894</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47624894</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47624894</guid></item><item><title><![CDATA[New comment by tao_oat in "Some uncomfortable truths about AI coding agents"]]></title><description><![CDATA[
<p>I didn't find this very convincing. Especially the argument around artificially low cost -- we know that training the next model is the biggest cost for these companies, and we've already seen inference costs fall drastically (<a href="https://epoch.ai/data-insights/llm-inference-price-trends" rel="nofollow">https://epoch.ai/data-insights/llm-inference-price-trends</a>).</p>
]]></description><pubDate>Fri, 27 Mar 2026 20:48:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47548064</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47548064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47548064</guid></item><item><title><![CDATA[Just One More Prompt]]></title><description><![CDATA[
<p>Article URL: <a href="https://btao.org/posts/2026-03-18-one-more-prompt">https://btao.org/posts/2026-03-18-one-more-prompt</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47428345">https://news.ycombinator.com/item?id=47428345</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 18 Mar 2026 17:07:49 +0000</pubDate><link>https://btao.org/posts/2026-03-18-one-more-prompt</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47428345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47428345</guid></item><item><title><![CDATA[New comment by tao_oat in "I'm Getting a Whiff of Iain Banks' Culture"]]></title><description><![CDATA[
<p>This feels somewhat ahistorical.<p>The US has nearly always been successful in terms of conventional firepower and individual operations. E.g. in 2003 the US overthrew Saddam's government in a matter of weeks. The US won most battles in Vietnam. That doesn't change the fact that the strategic outcomes and long-term track record are poor. Trying to draw a link to AI or the current state of the US military feels flimsy.<p>Anyway, the recurring Big Question throughout the Culture series is "how should a highly progressive, developed, and egalitarian society act when it meets others who are <i>not</i>?". The US is sliding further and further from that ideal, and you can argue whether it was ever close.</p>
]]></description><pubDate>Mon, 09 Mar 2026 17:04:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47311839</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47311839</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47311839</guid></item><item><title><![CDATA[Show HN: What % of your commits were written by AI?]]></title><description><![CDATA[
<p>Hi HN,<p>I've been using Claude Code etc. for nearly all my work lately, and I wanted to see how many of my commits it was actually co-authoring. So I made this little tool to visualize my usage.<p>You log in with GitHub (read-only), it scans your commits from the last year, and visualizes how many came from Claude, Cursor, or any tool that adds a Co-Authored-By trailer to the commit message.<p>Caveat: not all tools add this trailer, so this doesn't include e.g. Codex. Still, hope you find it interesting!</p>
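For the curious, you can approximate the same count locally. A minimal sketch, assuming the trailer values contain "claude" or "cursor" (the tool's actual implementation may differ; the demo repo below is just so the snippet runs anywhere — point the `git log` commands at a real repo instead):

```shell
# Rough local approximation: count commits from the last year whose
# message body carries an AI Co-Authored-By trailer.
# Demo setup: build a throwaway repo with one plain and one AI-trailed commit.
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "plain commit"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "ai commit" \
    -m "Co-Authored-By: Claude <noreply@anthropic.com>"

# The actual count: grep commit bodies for AI co-author trailers.
total=$(git log --since="1 year ago" --oneline | wc -l)
ai=$(git log --since="1 year ago" --format='%b' \
      | grep -Eci 'co-authored-by:.*(claude|cursor)' || true)
echo "$ai of $total commits have an AI Co-Authored-By trailer"
```

This only sees trailers that made it into the commit message, which is exactly the caveat above: tools that don't add the trailer are invisible to it.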
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47247102">https://news.ycombinator.com/item?id=47247102</a></p>
<p>Points: 5</p>
<p># Comments: 2</p>
]]></description><pubDate>Wed, 04 Mar 2026 13:27:16 +0000</pubDate><link>https://technically-your-name-is-on-it.btao.org/</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47247102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47247102</guid></item><item><title><![CDATA[New comment by tao_oat in "Don't trust AI agents"]]></title><description><![CDATA[
<p>Haven't tried them in enough depth to compare.<p>Nanobot's was not great (cron + a HEARTBEAT.md meant two ways to do things, which would confuse the AI). But because the implementation is so simple, I could improve it in a few minutes in my own fork!</p>
]]></description><pubDate>Sat, 28 Feb 2026 15:49:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47196717</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47196717</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47196717</guid></item><item><title><![CDATA[New comment by tao_oat in "Everything Changes, and Nothing Changes"]]></title><description><![CDATA[
<p>This is an interesting idea! I searched around and it looks like there's ast-grep (<a href="https://ast-grep.github.io/" rel="nofollow">https://ast-grep.github.io/</a>), an AST-aware CLI that can search and refactor code -- and you can expose it to your AI agent using a skill (<a href="https://github.com/ast-grep/agent-skill" rel="nofollow">https://github.com/ast-grep/agent-skill</a>).<p>Not exactly symbolic AI, but pretty cool nonetheless.</p>
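To illustrate what "AST-aware" buys over plain text search: matches are structural, so a string or comment that merely mentions a name doesn't count. A toy sketch of the idea using Python's stdlib `ast` module (not ast-grep itself, just the same principle):

```python
import ast

source = '''
def handler(x):
    print(x)          # a real call: structural match
    log("print")      # "print" only appears as a string: no match
    print("done")     # another real call: structural match
'''

# Walk the syntax tree and collect the line number of every node that is
# structurally a call to the function named `print`.
tree = ast.parse(source)
calls = sorted(
    node.lineno
    for node in ast.walk(tree)
    if isinstance(node, ast.Call)
    and isinstance(node.func, ast.Name)
    and node.func.id == "print"
)
print(calls)  # -> [3, 5]: the string on line 4 is correctly ignored
```

A plain `grep print` over the same source would return three hits; the AST walk returns two, which is the whole point of tools like ast-grep.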
]]></description><pubDate>Sat, 28 Feb 2026 15:43:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47196638</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47196638</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47196638</guid></item><item><title><![CDATA[New comment by tao_oat in "What AI coding costs you"]]></title><description><![CDATA[
<p>And they love to do this in spite of writing "NO FALLBACKS" etc. in your AGENTS.md.</p>
]]></description><pubDate>Sat, 28 Feb 2026 14:51:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47196000</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47196000</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47196000</guid></item><item><title><![CDATA[New comment by tao_oat in "Don't trust AI agents"]]></title><description><![CDATA[
<p>I haven't used them all but based on my partial research so far:<p>- OpenClaw: the big one, but extremely messy codebase and deployment<p>- NanoClaw: simple, main selling point is that agents spawn their own containers. Personally I don't see why that's preferable to just running the whole thing in a container for single-user purposes<p>- IronClaw: focused on security (tools run in a WASM sandbox, some defenses against prompt injection but idk if they're any good)<p>- PicoClaw: targets low-end machines/Raspberry Pis<p>- ZeroClaw: Claw But In Rust<p>- NanoBot: ~4k lines of Python, easy to understand and modify. This is the one I landed on and have been using Claude to tweak as needed for myself</p>
]]></description><pubDate>Sat, 28 Feb 2026 14:45:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47195950</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47195950</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47195950</guid></item><item><title><![CDATA[New comment by tao_oat in "OpenAI agrees with Dept. of War to deploy models in their classified network"]]></title><description><![CDATA[
<p>I'd apply to work for Anthropic in a heartbeat if it were a European company.</p>
]]></description><pubDate>Sat, 28 Feb 2026 13:40:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47195246</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=47195246</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47195246</guid></item><item><title><![CDATA[New comment by tao_oat in "Pakistani newspaper mistakenly prints AI prompt with the article"]]></title><description><![CDATA[
<p>I've been trying to get ChatGPT to stop adding this kind of fluff to its responses through custom instructions, but to no avail! It's one of the more frustrating parts of it, IMO.</p>
]]></description><pubDate>Wed, 12 Nov 2025 19:58:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=45905536</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=45905536</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45905536</guid></item><item><title><![CDATA[How to hire the best people you've ever worked with (2007)]]></title><description><![CDATA[
<p>Article URL: <a href="https://fictivekin.github.io/pmarchive-jekyll/how_to_hire_the_best_people.html">https://fictivekin.github.io/pmarchive-jekyll/how_to_hire_the_best_people.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45633313">https://news.ycombinator.com/item?id=45633313</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 19 Oct 2025 10:38:50 +0000</pubDate><link>https://fictivekin.github.io/pmarchive-jekyll/how_to_hire_the_best_people.html</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=45633313</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45633313</guid></item><item><title><![CDATA[New comment by tao_oat in "Wabi – Personal Software Platform"]]></title><description><![CDATA[
<p>Neat landing page, but I don't see how their distribution model would be fundamentally different from, or independent of, the app stores.</p>
]]></description><pubDate>Fri, 17 Oct 2025 09:06:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=45614661</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=45614661</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45614661</guid></item><item><title><![CDATA[New comment by tao_oat in "Ask HN: What's the best SKILL.md format for code planning?"]]></title><description><![CDATA[
<p>Skills were released in Claude Code, what, yesterday? I doubt there's a simple answer to this -- it'll depend on the model, task, etc.<p>You could try to get your agent to test its own skills. From <a href="https://blog.fsck.com/2025/10/09/superpowers" rel="nofollow">https://blog.fsck.com/2025/10/09/superpowers</a>:<p>> As Claude and I build new skills, one of the things I ask it to do is to "test" the skills on a set of subagents to ensure that the skills were comprehensible, complete, and that the subagents would comply with them. (Claude now thinks of this as TDD for skills and uses its RED/GREEN TDD skill as part of the skill creation skill.)<p>> The first time we played this game, Claude told me that the subagents had gotten a perfect score. After a bit of prodding, I discovered that Claude was quizzing the subagents like they were on a gameshow. This was less than useful. I asked to switch to realistic scenarios that put pressure on the agents, to better simulate what they might actually do.</p>
]]></description><pubDate>Fri, 17 Oct 2025 09:03:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45614644</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=45614644</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45614644</guid></item><item><title><![CDATA[New comment by tao_oat in "AI Actress Tilly Norwood Condemned by Sag-Aftra: Tilly 'Is Not an Actor '"]]></title><description><![CDATA[
<p>I think this is somewhat overhyped. If you look at the video that actually exists of this character[^1], it's clearly AI slop that falls flat -- honestly kind of embarrassing for the studio to put out. This seems like more of a media stunt than anything.<p>[^1]: <a href="https://www.youtube.com/watch?v=3sVO_j4czYs" rel="nofollow">https://www.youtube.com/watch?v=3sVO_j4czYs</a></p>
]]></description><pubDate>Tue, 30 Sep 2025 13:50:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45425419</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=45425419</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45425419</guid></item><item><title><![CDATA[Post-Mortem: OpenTaco using code from OTF without attribution]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.digger.dev/post-mortem-opentaco-using-code-from-otf-without-attribution/">https://blog.digger.dev/post-mortem-opentaco-using-code-from-otf-without-attribution/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45412573">https://news.ycombinator.com/item?id=45412573</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 29 Sep 2025 11:53:53 +0000</pubDate><link>https://blog.digger.dev/post-mortem-opentaco-using-code-from-otf-without-attribution/</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=45412573</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45412573</guid></item><item><title><![CDATA[New comment by tao_oat in "Privacy Badger is a free browser extension made by EFF to stop spying"]]></title><description><![CDATA[
<p>According to this page (<a href="https://github.com/arkenfox/user.js/wiki/4.1-Extensions#-dont-bother" rel="nofollow">https://github.com/arkenfox/user.js/wiki/4.1-Extensions#-don...</a>), yes, it's redundant in that case.</p>
]]></description><pubDate>Mon, 29 Sep 2025 10:19:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45412064</link><dc:creator>tao_oat</dc:creator><comments>https://news.ycombinator.com/item?id=45412064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45412064</guid></item></channel></rss>