<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: wenc</title><link>https://news.ycombinator.com/user?id=wenc</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 13:03:57 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=wenc" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by wenc in "Gas Town: From Clown Show to v1.0"]]></title><description><![CDATA[
<p>I feel Gastown is an attempt at answering: what if I push the multi-agent paradigm to its chaotic end?<p>But I think the point that Yegge doesn't address and that I had to discover for myself is: getting many agents working in parallel doing different things -- while cool and exciting (in an anthropomorphic way) -- might not actually be solving the right problem. The bottleneck in development isn't <i>workflow orchestration</i> (what Gastown does) -- it's actually <i>problem decomposition</i>.<p>And Beads doesn't actually handle the decomposed problem well. I thought it did. But all it is is a task-graph system. Each bead is a task, and agents can just pick up tasks to work on. That looks a lot like an SDE picking up a JIRA ticket, right? But the hard part is embedding just enough context in the task that the agent can do it right. Often that context is missing, so the agent has to guess -- and it often produces plausible code that is wrong.<p>Decomposing a goal into smaller slices is really where a lot of the difficulty lies. You might say, "I can just tell Claude to write Epics/Stories/Tasks, and it'll figure it out," right? But without something grounding it like a spec, Claude doesn't do a good job. It won't know exactly how much context to provide to each independent agent.<p>What I have found useful is spec-driven development, especially of the opinionated variety that Kiro IDE offers. Kiro IDE is a middling Cursor, but an excellent spec generator -- in fact one of the best. It generates 3 specs at 3 levels of abstraction: a Requirements doc in EARS/INCOSE (used at Rolls-Royce and Boeing to reduce spec ambiguity), then a Design doc (commonly done at FAANG), and then a Task list, which cross-references the sections of the requirements/design.<p>This kind of spec hugely limits the degrees of freedom. The Requirements part of the spec actually captures intent, which is key. The Design part mocks interfaces, embeds glossaries, and also embeds PBTs (property-based tests using Hypothesis -- maybe eventually Hegel?) as gating mechanisms to check invariants. The Task list is what Beads is supposed to do -- but Beads can't do a good job because it doesn't have the other two specs.<p>I've deployed 4 products now using Kiro spec-driven dev (plus Simon Willison's tip "do red/green TDD") and they're running in prod, and so far so good. They're pressure-tested using real data.<p>Spec-driven development isn't perfect but I feel its aim is the correct one -- to capture intentions, to reduce the degrees of freedom, and to constrain agents toward correctness. I tried using Claude Code's /plan mode but it's nowhere near as rigorous, and there's still spec drift in the generated code. It doesn't pin down the problem sufficiently.<p>Gastown/Beads are solutions to the workflow orchestration problem (which is exciting for tech bros), but at its core, that's not the most important problem. Problem decomposition is.<p>Otherwise you're just solving the wrong problem, fast.</p>
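For readers who haven't seen EARS, the requirement patterns it enforces look roughly like this (illustrative examples of the standard EARS templates, not taken from an actual Kiro spec):

```text
Ubiquitous:        The exporter shall write results as Parquet.
Event-driven:      WHEN a task completes, the orchestrator shall mark its bead done.
State-driven:      WHILE a run is active, the UI shall display progress.
Unwanted behavior: IF the input file is missing, THEN the importer shall exit non-zero.
```

The value is that every requirement has a trigger, an actor, and a testable response -- which is exactly the context a downstream task (or agent) otherwise has to guess at.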
]]></description><pubDate>Wed, 15 Apr 2026 00:57:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47773388</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47773388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47773388</guid></item><item><title><![CDATA[New comment by wenc in "NYC to open municipal grocery store in 2027"]]></title><description><![CDATA[
<p>You're thinking of a tax break, which is an unconditional subsidy. That relies on the business passing savings through, which folks are right to be skeptical about.<p>But that's not the only subsidy mechanism. The best ones are where pass-through is enforced, not assumed.<p>You already know of one that works: WIC. It lowers the effective price for the customer, and the store receives the difference as reimbursement.<p>It's not about trickle-down -- that's ideology. It's more about designing the right mechanism.</p>
]]></description><pubDate>Wed, 15 Apr 2026 00:01:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47773038</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47773038</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47773038</guid></item><item><title><![CDATA[New comment by wenc in "Design and implementation of DuckDB internals"]]></title><description><![CDATA[
<p>I use DuckDB daily.<p>In short — it doesn’t crash often at all.<p>What you may be remembering were reports of exceptional cases where it didn’t handle out-of-memory errors well. I was one of the people affected. I was running complex analytic queries on 400 GB of Parquet and I only had 128 GB of memory. It used jemalloc, which didn’t degrade gracefully. They fixed a lot of the OOM issues, so it’s more robust now. I haven’t had a crash in a long time.<p>On normal-sized datasets it never crashes.</p>
]]></description><pubDate>Tue, 14 Apr 2026 16:32:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47767840</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47767840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47767840</guid></item><item><title><![CDATA[New comment by wenc in "Distributed DuckDB Instance"]]></title><description><![CDATA[
<p>Try DuckLake. They just released a prod version.<p>You can do read/write of a parquet folder on your local drive, but managed by DuckLake. Supports schema evolution and versioning too.<p>Basically SQLite for parquet.</p>
]]></description><pubDate>Tue, 14 Apr 2026 12:34:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47764825</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47764825</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47764825</guid></item><item><title><![CDATA[New comment by wenc in "S3 Files"]]></title><description><![CDATA[
<p>Maybe the OP is thinking of reading/writing to DuckDB native format files. Those require filesystem semantics for writing. Unfortunately, even NFS or SMB are not sufficiently FS-like for DuckDB.<p>Parquet is static and append-only, so DuckDB has no problems with those living on S3.</p>
]]></description><pubDate>Wed, 08 Apr 2026 01:01:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47683396</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47683396</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47683396</guid></item><item><title><![CDATA[New comment by wenc in "Move Detroit"]]></title><description><![CDATA[
<p>I recommend visiting Detroit to update your priors. I first visited in 2000 and it was blighted. I visited again in 2025 and it’s actually nice (downtown Detroit and surroundings). There’s even a Microsoft office there.</p>
]]></description><pubDate>Tue, 07 Apr 2026 22:50:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682357</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47682357</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682357</guid></item><item><title><![CDATA[New comment by wenc in "Jack Dorsey says Block employees now bring prototypes, not slides, to meetings"]]></title><description><![CDATA[
<p>We need to match the tool to the uncertainty we're facing.<p>The "just prototype it" thinking addresses "feasibility uncertainty". It surfaces blind spots and helps people tangibly reason about what the product looks like. It's a great exploratory tool for incremental ideas.<p>But it doesn't address the larger uncertainty that startups are faced with: "market uncertainty" (or PMF). It doesn't answer "should we be building this in the first place?" That's where <i>writing as a tool of thought</i> is most powerful -- it helps you crystallize what problem you're actually solving.<p>The "just prototype it" culture (which is being promoted these days because Claude Code makes it easy) risks answering the wrong question, or at least the right question in the wrong order. You end up with organizations that are incredibly fast at building things that no one should have built.<p>Ironically, sometimes you need to start from a lower resolution (i.e. writing a doc). Prototyping too early is premature optimization.</p>
]]></description><pubDate>Sat, 04 Apr 2026 16:12:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47640346</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47640346</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47640346</guid></item><item><title><![CDATA[New comment by wenc in "What category theory teaches us about dataframes"]]></title><description><![CDATA[
<p>Polars is Ritchie Vink. Pandas is Wes McKinney.</p>
]]></description><pubDate>Sat, 04 Apr 2026 05:35:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47636135</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47636135</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47636135</guid></item><item><title><![CDATA[New comment by wenc in "Why I Vibe in Go, Not Rust or Python"]]></title><description><![CDATA[
<p>It depends on the use case. Go seems like a dream... until you have to work with dataframes or do any kind of ML work. Then it's a nightmare.<p>Go's ecosystem is especially weak in ML, stats, and any kind of scientific computation. I mean, do you really want Claude to implement standard battle-tested ML algorithms in Go from scratch? You'd be burning tokens and still get a worse result than if you'd just used Python.<p>I use Go to write CLI tools, but for ML work I'd rather have Claude generate Python.<p>The suitability of a language hinges not only on its language design, but on its ecosystem as well.</p>
]]></description><pubDate>Mon, 23 Mar 2026 01:14:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47484285</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47484285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47484285</guid></item><item><title><![CDATA[New comment by wenc in "I love my dumb watches"]]></title><description><![CDATA[
<p>Great, but I like my Apple Watch for its one killer feature: not telling time, but tap-to-pay. I gave up wearing my Tissot watch for this.<p>This genuinely saves me time and adds fluidity to my day. Tap for the subway. Tap for the vending machine. Tap for the restaurant bill. Tap for a shop purchase.<p>I genuinely don't look at my phone much, so it's always deep in my winter coat pocket. Fishing it out takes 2-3 seconds each time.</p>
]]></description><pubDate>Fri, 20 Mar 2026 23:31:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47462272</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47462272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47462272</guid></item><item><title><![CDATA[New comment by wenc in "A Japanese glossary of chopsticks faux pas (2022)"]]></title><description><![CDATA[
<p>The disposable wooden chopsticks in Japan don’t splinter (they’re higher quality and cost more than the ones we have in the US).<p>That’s why you don’t need to rub to get rid of splinters.</p>
]]></description><pubDate>Fri, 20 Mar 2026 21:51:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47461127</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47461127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47461127</guid></item><item><title><![CDATA[New comment by wenc in "Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster"]]></title><description><![CDATA[
<p>I wonder if it's more like "qualitative gradient descent" on a very non-linear, non-convex surface.<p>You can try this yourself in a simple fashion -- let's say you have a piece of code that you want to speed up. Point your agent at a code profiler (your oracle -- typically your Python profiler) and tell it to speed up the code. I've tried it. It works.</p>
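The loop is easy to mechanize: the agent's "oracle" is just the profiler's text report. A minimal sketch using Python's built-in cProfile (the function name and workload are illustrative):

```python
import cProfile
import io
import pstats

def hot_function(n):
    # Deliberately quadratic so it shows up at the top of the profile
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

profiler = cProfile.Profile()
profiler.enable()
hot_function(300)
profiler.disable()

# Render the top 5 entries by cumulative time into a string
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()  # this text is what the agent reads each iteration
```

Each descent "step" is: read `report`, rewrite the hottest function, re-profile, and compare -- qualitative, but directionally gradient-like.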
]]></description><pubDate>Thu, 19 Mar 2026 23:16:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47447792</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47447792</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47447792</guid></item><item><title><![CDATA[New comment by wenc in "Kotlin creator's new language: talk to LLMs in specs, not English"]]></title><description><![CDATA[
<p>Rehashing my comment from before:<p>I use Kiro IDE (≠ Kiro CLI) primarily as a spec generator.
In my experience, it's high-quality for creating and iterating on specs. Tools like Cursor are optimized for human-driven vibing -- they have great autocomplete, etc. Kiro, by contrast, is optimized around spec, which ironically has been the most effective approach I've found for driving agents.<p>I'd argue that Cursor, Antigravity, and similar tools are optimized for human steering, which explains their popularity, while Kiro is optimized for agent harnesses. That's also why it’s underused: it's quite opinionated, but very effective. Vibe-coding culture isn't sold on spec-driven development (they think it's waterfall and summarily dismiss it -- even Yegge has this bias), so people tend to underrate it.<p>Kiro writes specs using structured formats like EARS and INCOSE (the spec format used in places like Boeing for engineering reqs). It performs automated reasoning to check for consistency, then generates a design document and task list from the spec -- similar to what Beads does. I usually spend a significant amount of time pressure-testing the spec before implementing (often hours to days), and it pays off. Writing a good, consistent spec is essentially the computer equivalent of "writing as a tool of thought" in practice.<p>Once the spec is tight, implementation tends to follow it closely. Kiro also generates property-based tests (PBTs) using Hypothesis in Python, inspired by Haskell's QuickCheck. These tests sweep the input domain and, when combined with traditional scenario-based unit tests, tend to produce code that adheres closely to the spec. I also add a small instruction "do red/green TDD" (I learned this from Simon Willison) and that one line alone improved the quality of all my tests.
Kiro can technically implement the task list itself, but this is where agents come in. With the spec in hand, I use multiple headless CLI agents in tmux (e.g., Kiro CLI, Claude Code) for implementation. The results have been very good. With a solid Kiro spec and task list, agents usually implement everything end-to-end without stopping -- I haven’t found a need for Ralph loops. (Agents sometimes tend to stop midway on Claude plans, but I've never had that happen with Kiro -- not sure why; maybe it's the checklist, which includes PBT tests as gates.)<p>Kiro didn't have the strongest start, but the Kiro IDE is one of the best spec generators I've used, and it integrates extremely well with agent-driven workflows.</p>
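To make the PBT-as-gate idea concrete: a property-based test asserts invariants over swept inputs rather than hand-picked cases. A minimal, dependency-free sketch (the function and its invariants are illustrative, not from a Kiro spec; Hypothesis's `@given(st.lists(st.integers()))` would do the input generation properly):

```python
import random

def dedupe_preserve_order(items):
    """Toy function under test: drop duplicates, keep first-seen order."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_invariants(xs):
    ys = dedupe_preserve_order(xs)
    assert len(set(ys)) == len(ys)        # no duplicates survive
    assert set(ys) == set(xs)             # nothing lost or invented
    # relative order of first occurrences is preserved
    assert ys == [x for i, x in enumerate(xs) if x not in xs[:i]]

# Hypothesis would generate this sweep for you; plain random sampling
# stands in here to keep the sketch self-contained.
for _ in range(200):
    n = random.randint(0, 20)
    check_invariants([random.randint(-5, 5) for _ in range(n)])
```

Used as a gate, an agent can't mark the task done until the whole sweep passes, which is a much stronger signal than a handful of example-based unit tests.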
]]></description><pubDate>Thu, 12 Mar 2026 18:53:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47355428</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47355428</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47355428</guid></item><item><title><![CDATA[New comment by wenc in "After outages, Amazon to make senior engineers sign off on AI-assisted changes"]]></title><description><![CDATA[
<p>I use Kiro IDE (≠ Kiro CLI) primarily as a spec generator.<p>In my experience, it's high-quality for creating and iterating on specs. Tools like Cursor are optimized for human-driven vibing -- they have great autocomplete, etc. Kiro, by contrast, is optimized around spec, which ironically has been the most effective approach I've found for driving agents.<p>I'd argue that Cursor, Antigravity, and similar tools are optimized for human steering, which explains their popularity, while Kiro is optimized for agent harnesses. That's also why it’s underused: it's quite opinionated, but very effective. Vibe-coding culture isn't sold on spec driven development (they think it's waterfall and summarily dismiss it -- even Yegge has this bias), so people tend to underrate it.<p>Kiro writes specs using structured formats like EARS and INCOSE. It performs automated reasoning to check for consistency, then generates a design document and task list from the spec -- similar to what Beads does. I usually spend a significant amount of time pressure-testing the spec before implementing (often hours to days), and it pays off. Writing a good, consistent spec is essentially the computer equivalent of "writing as a tool of thought" in practice.<p>Once the spec is tight, implementation tends to follow it closely. Kiro also generates property-based tests (PBTs) using Hypothesis in Python, inspired by Haskell's QuickCheck. These tests sweep the input domain and, when combined with traditional scenario-based unit tests, tend to produce code that adheres closely to the spec. I also add a small instruction "do red/green TDD" (I learned this from Simon Willison) and that one line alone improved the quality of all my tests.<p>Kiro can technically implement the task list itself, but this is where agents come in. With the spec in hand, I use multiple headless CLI agents in tmux (e.g., Kiro CLI, Claude Code) for implementation. The results have been very good. 
With a solid Kiro spec and task list, agents usually implement everything end-to-end without stopping -- I haven’t found a need for Ralph loops. (Agents sometimes tend to stop midway on Claude plans, but I've never had that happen with Kiro -- not sure why; maybe it's the checklist, which includes PBT tests as gates.)<p>Kiro didn't have the strongest start, but the Kiro IDE is one of the best spec generators I've used, and it integrates extremely well with agent-driven workflows.</p>
]]></description><pubDate>Wed, 11 Mar 2026 01:10:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47330762</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47330762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47330762</guid></item><item><title><![CDATA[New comment by wenc in "Put the zip code first"]]></title><description><![CDATA[
<p>I think that can be solved with a city dropdown where there’s ambiguity.<p>This is not new. Some checkouts do start with the zip code and they feel much more efficient. (US-only delivery, so the country is already assumed.)</p>
]]></description><pubDate>Sat, 07 Mar 2026 23:44:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47292645</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47292645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47292645</guid></item><item><title><![CDATA[New comment by wenc in "BMW Group to deploy humanoid robots in production in Germany for the first time"]]></title><description><![CDATA[
<p>I used to work for the US side of a German multinational (one of the largest in the world) and discovered the same thing when it came to software.<p>The German side always had slick presentations (they always had good visual marketing) and impressive claims, but whenever I tried to work with their products, I always found the claims overstated and that they hadn't really executed deeply. This despite my German counterparts working hard (I visited HQ in Germany and when they work, they really work and clock the hours, no idle chitchat)... yet it doesn't translate to impact.<p>A lot of their products had impressive front-ends but half-baked back-ends (on the American side, it's the reverse -- our interfaces looked like crap, but our stuff actually worked and often delivered in less time).<p>A lot of their designs were also non-human friendly (if you've ever driven a German car, you'll realize that the car was built for engineers and not for end users -- weird little user-hostile features pop up everywhere). I don't understand why this is -- this is a nation that produced Dieter Rams. Tobi Lutke (CEO Shopify) likes to talk about how Germans grew up surrounded by good design, yet that design culture never permeated many German products. I own a Bosch in-unit washer/dryer and it's frustratingly unintuitive and has a "my (the engineer's) way or the highway" philosophy.<p>I went to a BMW talk once about the infotainment system (it was built on the latest Azure tech), but came away feeling that the work was not deep. It was skin deep.<p>I wonder what has happened to the German builder/tinkerer culture that made German manufacturing great. In the 1980s and 1990s, Germany was synonymous with excellence. But in the 2000s-present, not so much (except maybe in very narrow mittelstand verticals, e.g. Zeiss).</p>
]]></description><pubDate>Thu, 05 Mar 2026 01:11:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47256196</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47256196</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47256196</guid></item><item><title><![CDATA[New comment by wenc in "Notes on Lagrange Interpolating Polynomials"]]></title><description><![CDATA[
<p>I used to solve differential algebraic equations using Lagrange polynomials.<p>Essentially you convert the differential equations into an algebraic system by discretizing the solution. The method is called Orthogonal Collocation on Finite Elements (OCFE), and it was developed by chemical engineers.<p>The Lagrange polynomials were calculated at special knots that corresponded to Radau interior points, which work great for stiff systems.<p>It’s great for solving differential algebraic equations through purely sparse matrix operations, no explicit integration like Runge-Kutta. (Well, it’s implicit Runge-Kutta.)</p>
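The Lagrange basis that OCFE builds on fits in a few lines. A minimal sketch (collocation additionally needs the basis derivatives evaluated at the Radau points, which this omits):

```python
def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolating polynomial through (x_nodes, y_nodes) at x."""
    total = 0.0
    n = len(x_nodes)
    for j in range(n):
        basis = 1.0
        for m in range(n):
            if m != j:
                # the j-th cardinal basis: 1 at x_nodes[j], 0 at every other node
                basis *= (x - x_nodes[m]) / (x_nodes[j] - x_nodes[m])
        total += y_nodes[j] * basis
    return total

# A degree-2 interpolant through 3 nodes reproduces y = x^2 exactly
nodes = [0.0, 0.5, 1.0]
vals = [t * t for t in nodes]
print(lagrange_eval(nodes, vals, 0.25))  # 0.0625
```

In OCFE, the unknowns are the `y_nodes` at the Radau knots on each element, and the ODE becomes algebraic equations relating them -- hence the purely sparse-matrix solve.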
]]></description><pubDate>Mon, 02 Mar 2026 18:20:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47221855</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47221855</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47221855</guid></item><item><title><![CDATA[New comment by wenc in "When does MCP make sense vs CLI?"]]></title><description><![CDATA[
<p>> RE: duckdb. I have a wonderful time with ChatGPT talking to duckdb but I have kept it to inmemory db only. Do you set up some system prompt that tell it to keep a duckdb database locally on disk in the current folder?<p>No, I don't use DuckDB's database format at all. DuckDB for me is more like an engine to work with CSV/Parquet (similar to `jq` for JSON, and `grep` for strings).<p>Also, I don't use web-based chat (you mentioned ChatGPT) -- all these interactions are through agents like Kiro or Claude Code.<p>I often have CSVs that are 100s of MBs and there's no way they fit in context, so I tell Opus to use DuckDB to sample data from the CSV. DuckDB works way better than any dedicated CSV tool because it packs a full database engine that can return aggregates, explore the limits of your data (max/min), figure out categorical data levels, etc.<p>For Parquet, I just point DuckDB to the 100s of GBs of Parquet files in S3 (our data lake), and it's blazing fast at introspecting that data. DuckDB is one of the best Parquet query engines on the planet (imo better than Apache Spark) despite being just a tiny little CLI tool.<p>One of the use cases is debugging results from an ML model artifact (which is more difficult than debugging code).<p>For instance, let's say a customer points out a weird result in a particular model prediction. I highlight that weird result and tell Opus to work backwards to trace how the ML model (I provide the training code and inference code) arrived at that number. Surprisingly, Opus 4.6 does a great job using DuckDB to figure out how the input data produced that one weird output. If necessary, Opus will even write temporary Python code to call the inference part of the ML model on a sample to verify assumptions. If the assumptions turn out to be wrong, Opus will change strategies. It's like watching a really smart junior work through the problem systematically.
Even if Opus doesn't end up nailing the actual cause, it gets into the proximity of the real cause and I can figure out the rest. (Usually it's not the ML model itself, but some anomaly in the input.) This has saved me so much time in deep-diving weird results. Not only that, I can have confidence in the deep-dive because I can just run the exact DuckDB SQL to convince myself (and others) of the source of the error, and that it's not something Opus hallucinated. CLI tools are deterministic and transparent that way, unlike MCPs, which are black boxes.</p>
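The kind of "probing" queries the agent runs are ordinary DuckDB SQL, so each one can be re-run by hand to verify. A few illustrative examples against a hypothetical `big.csv` (column names are made up; DuckDB reads the file directly without importing it):

```sql
-- Learn the schema without loading the file into context
DESCRIBE SELECT * FROM 'big.csv';

-- Pull a small random sample to see what rows look like
SELECT * FROM 'big.csv' USING SAMPLE 20 ROWS;

-- Explore the limits of a column and the cardinality of a categorical
SELECT min(price), max(price), count(DISTINCT category) FROM 'big.csv';

-- Quick distribution check for anomalies
SELECT category, count(*) AS n
FROM 'big.csv'
GROUP BY category
ORDER BY n DESC
LIMIT 10;
```

Because these run in milliseconds-to-seconds even on large files, the agent can afford to iterate through many of them while narrowing down an anomaly.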
]]></description><pubDate>Sun, 01 Mar 2026 21:19:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47210776</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47210776</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47210776</guid></item><item><title><![CDATA[New comment by wenc in "When does MCP make sense vs CLI?"]]></title><description><![CDATA[
<p>MCPs (especially remote MCPs) are like a black-box API -- you don't have to install anything, provision any resources, etc. You just call it and get an answer. There's a place for that, but an MCP is ultimately a blunt instrument.<p>CLI tools, on the other hand, are like precision instruments. Yes, you have to install them locally once, but after that, they have access to your local environment and can discover things on their own. Two CLIs are particularly powerful for working with large structured data: `jq` and the `duckdb` CLI. I tell the agent to never load large JSON, CSV or Parquet files into context -- instead, introspect them intelligently by sampling the data with said CLI tools. And Opus 4.6 is amazing at this! It figures out the shape of the data on its own within seconds by writing "probing" queries in DuckDB and jq. When it hits a bottleneck, Opus 4.6 figures out what's wrong and tries other query strategies. It's amazing to watch it go down rabbit holes and then recover automatically. This is especially useful for doing exploratory data analysis in ML work. The agent uses these tools to quickly check data edge cases, and does a way more thorough job than me.<p>CLIs also feel "snappier" than MCPs. MCPs often have latency, whereas you can see CLIs do things in real time. There's a certain ergonomic niceness to this.<p>p.s. other CLIs I use often in conjunction with agents:<p>`showboat` (Simon Willison) to do linear walkthroughs of code.<p>`br` (Rust port of Beads) to create epics/stories/tasks to direct Opus in implementing a plan.<p>`psql` to probe Postgres databases.<p>`roborev` (Wes McKinney) to do automatic code reviews and fixes.</p>
]]></description><pubDate>Sun, 01 Mar 2026 19:45:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47209977</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47209977</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47209977</guid></item><item><title><![CDATA[New comment by wenc in "Warren Buffett dumps $1.7B of Amazon stock"]]></title><description><![CDATA[
<p>I have two Alexas but never get ads.<p>How is it that you’re getting ads?</p>
]]></description><pubDate>Wed, 18 Feb 2026 19:41:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47065324</link><dc:creator>wenc</dc:creator><comments>https://news.ycombinator.com/item?id=47065324</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47065324</guid></item></channel></rss>