<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mccoyb</title><link>https://news.ycombinator.com/user?id=mccoyb</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 01:26:39 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mccoyb" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mccoyb in "Scoring Show HN submissions for AI design patterns"]]></title><description><![CDATA[
<p>The problem is not vibe coding itself. The problem is that certain untrained people do not have, or perhaps do not care to learn, the skills necessary to refine the result into something novel, clear, and precise -- something which communicates the idea they are trying to convey to others (who are hoping to learn something new).<p>In a climate where VCs seem <i>woefully</i> bereft of the same skills, there's an impetus to just slop garbage up for any vague idea, without taking the care or time to polish it into something with that intangibly human sense of greatness and clarity.<p>I see, you've done something -- but why? If you keep asking this question, you will arrive at good science ... but many submissions are not aimed at that level of communication, or stop far short of the point at which the question becomes interesting.<p>There's that phrase: "better to remain silent and be thought a fool than to speak and remove all doubt", which strikes me as poignant, except it seems like <i>the audience</i> today are also fools ... the inmates are running the asylum.</p>
]]></description><pubDate>Wed, 22 Apr 2026 15:24:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47864950</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47864950</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47864950</guid></item><item><title><![CDATA[New comment by mccoyb in "Multi-Agentic Software Development Is a Distributed Systems Problem"]]></title><description><![CDATA[
<p>Re: totally fine with hand-waving for intuition.<p>I just came away from the read thinking that this post was pointing to something very strong, and was a bit irked to find that the state of the results is more subtle than the post conveys.</p>
]]></description><pubDate>Tue, 14 Apr 2026 20:28:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47771071</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47771071</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47771071</guid></item><item><title><![CDATA[New comment by mccoyb in "Multi-Agentic Software Development Is a Distributed Systems Problem"]]></title><description><![CDATA[
<p>LLMs sample from categorical distributions defined by the logits the matrix multiplies compute, and <i>many</i> sampling strategies are employed. This is one of the core mechanisms of token generation.<p>There's no peculiarity to discuss; that's how they work. That's how they are trained (the loss is defined by probabilistic density computations), that's how inference works, etc.<p>> I guess my central claim is that there hasn't been a salient argument made as to why the randomness here is relevant for consensus. Maybe the models exhibit some variability in their output, but in practice does this substantially change how they approach consensus? Can we model this as artefacts of how they are initialised rather than some inherent stochasticity? Why not? It feels like randomness is being introduced here as a sort of magic "get out of jail" free card here.<p>I'm really surprised to hear this given the content of the post. The claims in the post are <i>quite strong</i>, yet here I need to give a counterargument to why the claim about consensus applying to pseudorandom processes is relevant?<p>I don't think it's necessary to furnish a counterexample when pointing out that a formal claim is overreaching. It's not clear what the results are in this case! So it feels premature to claim that the results cover a wider array of things than shown.<p>For instance, this is a strong claim:<p>> it means that in any multi-agentic system, irrespective of how smart the agents are, they will never be able to guarantee that they are able to do both at the same time:
>
>    Be Safe - i.e produce well formed software satisfying the user's specification.
>    Be Live - i.e always reach consensus on the final software module.<p>I'm confused about the stance: we're either hand-waving or we're not -- so which is it?</p>
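For concreteness, here is a minimal sketch of what one token-generation step looks like under plain temperature sampling. This is illustrative only (the function name and numbers are mine); real inference stacks layer top-k / top-p truncation and other strategies on top of this.

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Draw a token index from the categorical distribution the logits define.

    Minimal sketch: temperature-scaled, numerically stabilized softmax,
    then an inverse-CDF draw from the resulting categorical distribution.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]      # softmax(logits / T)
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if cum > r:                        # categorical draw via the CDF
            return i
    return len(probs) - 1
```

Lowering the temperature concentrates mass on the argmax logit; raising it flattens the distribution. Either way, the output of each step is a sample, not a deterministic function value.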
]]></description><pubDate>Tue, 14 Apr 2026 20:20:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47770958</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47770958</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47770958</guid></item><item><title><![CDATA[New comment by mccoyb in "Multi-Agentic Software Development Is a Distributed Systems Problem"]]></title><description><![CDATA[
<p>There is not a single mention of probability in this post.<p>The post acts like agents are a highly complex but well-specified deterministic function. Perhaps, under certain temperature limits, this is approximately true ... but that's a <i>serious</i> restriction, and it's glossed over.<p>For instance, perhaps the most striking constraint on FLP is that it is about <i>deterministic consensus</i> ... the post glosses over this:<p>> establishes a fundamental impossibility result dictating consensus in any asynchronous distributed system (yes! that includes us).<p>No, not <i>any</i> asynchronous distributed system, and it might not include us. For instance, Ben-Or (1983, <a href="https://dl.acm.org/doi/10.1145/800221.806707" rel="nofollow">https://dl.acm.org/doi/10.1145/800221.806707</a>) (as a counterexample to the adversary in FLP) essentially says "if you're stuck, flip a coin". There's significant work studying randomized consensus (and yes, multi-agent systems are randomized consensus algorithms): <a href="https://www.sciencedirect.com/science/article/abs/pii/S0196677483710229" rel="nofollow">https://www.sciencedirect.com/science/article/abs/pii/S01966...</a><p>Now, in Ben-Or, the coins have to be independent sources of randomness, and that's obviously not true in the multi-agent case.<p>But it's very clear that <i>the language</i> in this post argues that these results apply without acknowledging possibly the most fundamental fact about agents: they are probability distributions -- inherently, they are stochastic creatures.<p>Difficult to take seriously without a more rigorous justification.</p>
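To make the "flip a coin" idea concrete, here is a toy sketch of one decision step in the spirit of Ben-Or's protocol. The function name, thresholds, and the flattening of messages into a simple vote list are all mine for illustration; the actual protocol has two message exchanges per round and a careful fault model, all of which this elides.

```python
import random

def ben_or_round(votes, n, f):
    """One simplified decision step in the spirit of Ben-Or (1983).

    votes: 0/1 proposals heard this round (a toy stand-in for the
    protocol's messages); n: total processes; f: max faulty processes.
    Returns (new_proposal, decided).
    """
    zeros = votes.count(0)
    ones = votes.count(1)
    # Strong majority: adopt the value and decide.
    if ones > (n + f) / 2:
        return 1, True
    if zeros > (n + f) / 2:
        return 0, True
    # Weak majority: adopt the value but keep going.
    if ones > n / 2:
        return 1, False
    if zeros > n / 2:
        return 0, False
    # Deadlocked: flip a local coin. Independent randomness is what lets
    # the protocol escape the FLP adversary with probability 1.
    return random.randint(0, 1), False
```

The key point for the argument above: the coin flips must be independent across processes for the probabilistic termination guarantee, which is exactly the assumption that is doubtful for correlated LLM agents.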
]]></description><pubDate>Tue, 14 Apr 2026 15:04:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47766588</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47766588</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47766588</guid></item><item><title><![CDATA[New comment by mccoyb in "Claude Managed Agents"]]></title><description><![CDATA[
<p>Another way to think about it:<p>For Anthropic to have the best version of this software, they'd have to simultaneously ... well, have the best version of the software, but also <i>beat every other AI company at all subtasks</i> (like: technical writing, diagramming, bug finding -- they'd need to have the unequivocal "best model" in all categories).<p>Surely their version is not going to allow you to e.g. invoke Codex or what have you as part of their stack.</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:56:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695468</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47695468</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695468</guid></item><item><title><![CDATA[New comment by mccoyb in "Claude Managed Agents"]]></title><description><![CDATA[
<p>I'm skeptical that this is going to lead to optimal orchestration ... or rather, I suspect open source will produce a far better alternative in time.<p>The best performance I've gotten is <i>by mixing agents</i> from different companies. Unless there is a "winner take all" agent (I seriously doubt it, based on the dynamics and cost of collecting high-quality RL data), I think the best orchestration systems are going to involve mixing agents.<p>Here, it's not about the planner; it's about the workers. Some agents are just better at certain things than others.<p>For instance, Opus 4.6 <i>on max</i> does not hold a candle to GPT 5.4 xhigh in terms of bug finding. It's not even a comparison, iykyk.<p>It's almost analogous to how diversity of thought can improve the robustness of outcomes in real-world teams. The same thing seems to be true in mixture-of-agent-distributions space.</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:45:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695318</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47695318</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695318</guid></item><item><title><![CDATA[New comment by mccoyb in "Show HN: TUI-use: Let AI agents control interactive terminal programs"]]></title><description><![CDATA[
<p>See the related sibling: the use cases are compelling!<p>My complaint is that tmux handles them perfectly. Exactly what OP claims for their software is already served by robust, 18-year-old software.<p>In 2026, it costs nearly nothing to thoroughly and autonomously investigate related software — so yes, I am going to be purposefully abrasive about it.</p>
]]></description><pubDate>Wed, 08 Apr 2026 17:59:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47693894</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47693894</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47693894</guid></item><item><title><![CDATA[New comment by mccoyb in "Show HN: TUI-use: Let AI agents control interactive terminal programs"]]></title><description><![CDATA[
<p>Something something medical researcher reinvents calculus.<p>In 2026: frontend web developer reinvents tmux.<p>Guys, please do us the service of pre-filtering your crack token dreams by investigating the tool stack already available in the terminal ... or at least give us the courtesy of explaining why your vibecoded Greenspun's 10th something is a significant leg up on what already exists, and has perhaps existed for many years (and is therefore in the training set, and therefore probably going to work <i>perfectly</i> out of the box).</p>
]]></description><pubDate>Wed, 08 Apr 2026 17:13:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47693182</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47693182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47693182</guid></item><item><title><![CDATA[New comment by mccoyb in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p>I see -- and AI is just like all technologies that came before it ...<p>It is not <i>a class of labor</i> ... it is all digital labor. Do you or do you not understand this?<p>It is digital knowledge itself, and then all communication labor, and then all <i>physical</i> labor with robotics.<p>Is this clear to you?</p>
]]></description><pubDate>Sat, 04 Apr 2026 01:57:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47634841</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47634841</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47634841</guid></item><item><title><![CDATA[New comment by mccoyb in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p><a href="https://code.claude.com/docs/en/overview#what-you-can-do" rel="nofollow">https://code.claude.com/docs/en/overview#what-you-can-do</a><p>Try this one: <a href="https://code.claude.com/docs/en/overview#run-agent-teams-and-build-custom-agents" rel="nofollow">https://code.claude.com/docs/en/overview#run-agent-teams-and...</a><p>Or perhaps: <a href="https://code.claude.com/docs/en/overview#pipe-script-and-automate-with-the-cli" rel="nofollow">https://code.claude.com/docs/en/overview#pipe-script-and-aut...</a><p>You know what they say about looking and quacking.</p>
]]></description><pubDate>Sat, 04 Apr 2026 01:30:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47634644</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47634644</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47634644</guid></item><item><title><![CDATA[New comment by mccoyb in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p>Contempt for users is not silly when the CEO of said company has repeatedly claimed they will replace SWEs "end-to-end" by next year.<p>I'm not sure what to say. You're either listening to the actions of these companies, or you're not in a place where you feel the need to be concerned by their actions.<p>I'm in a place where I am concerned by their actions, and by the impact that their claims and behavior have on the working environment around me.</p>
]]></description><pubDate>Sat, 04 Apr 2026 01:16:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47634543</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47634543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47634543</guid></item><item><title><![CDATA[New comment by mccoyb in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p>Why not use datacenter of geniuses to increase capacity? Grug confused.</p>
]]></description><pubDate>Sat, 04 Apr 2026 00:31:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47634216</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47634216</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47634216</guid></item><item><title><![CDATA[New comment by mccoyb in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p>It is confusing for a company to sell you the subscription service, say "Claude Code is covered", ship Claude Code with `claude -p`, and then say "oh right, actually, not _all of Claude Code_ -- don't try to use it as an executable ... sorry, right, the subscription only works as long as you're looking at that juicy little Claude Code logo in the TUI".<p>The disrespect Anthropic has for their user base is constant and palpable.</p>
]]></description><pubDate>Sat, 04 Apr 2026 00:21:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47634134</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47634134</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47634134</guid></item><item><title><![CDATA[New comment by mccoyb in "Emacs-libgterm: Terminal emulator for Emacs using libghostty-vt"]]></title><description><![CDATA[
<p>The vast majority of your complaints are handled by libghostty-vt itself, not by this person's Emacs wrapper software over libghostty.<p>Ghostty is a great piece of software, with a stellar maintainer who has a very pragmatic and measured take on using AI to develop software.</p>
]]></description><pubDate>Thu, 02 Apr 2026 15:32:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47615843</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47615843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47615843</guid></item><item><title><![CDATA[New comment by mccoyb in "Show HN: Axe – A 12MB binary that replaces your AI framework"]]></title><description><![CDATA[
<p>It is large compared to a stripped Zig ReleaseSmall binary with no runtime. With agents, one can take this repo and create an <i>extremely</i> small binary.<p>To your point: why even advertise the number? If that particular number is completely irrelevant in practical usage, why mention it? It seems like the point is to impress, hence my response.</p>
]]></description><pubDate>Thu, 12 Mar 2026 22:49:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47358313</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47358313</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47358313</guid></item><item><title><![CDATA[New comment by mccoyb in "Show HN: Axe – A 12MB binary that replaces your AI framework"]]></title><description><![CDATA[
<p>I know this is off topic, but is that mostly coming from the Go runtime (and roughly how large is that)?</p>
]]></description><pubDate>Thu, 12 Mar 2026 19:48:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47356118</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47356118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47356118</guid></item><item><title><![CDATA[New comment by mccoyb in "Show HN: Axe – A 12MB binary that replaces your AI framework"]]></title><description><![CDATA[
<p>Cool work!<p>As an aside, 12 MB is ... large ... for such a thing. For reference, an entire HTTP stack (including crypto and TLS) with LLM API calls in Zig would net you a ~400 KB binary on ReleaseSmall (statically linked).<p>You can implement an entire language, compiler, and VM in another 500 KB (or less!).<p>I don't think 12 MB is an impressive badge here?</p>
]]></description><pubDate>Thu, 12 Mar 2026 19:41:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47356014</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47356014</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47356014</guid></item><item><title><![CDATA[New comment by mccoyb in "Claude's Cycles [pdf]"]]></title><description><![CDATA[
<p>My recommendation: call a neurologist.</p>
]]></description><pubDate>Thu, 05 Mar 2026 01:43:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47256406</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47256406</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47256406</guid></item><item><title><![CDATA[New comment by mccoyb in "Claude's Cycles [pdf]"]]></title><description><![CDATA[
<p>Tune your bot detector, I'm a real person and I think about my comments before posting them.</p>
]]></description><pubDate>Wed, 04 Mar 2026 14:21:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47247708</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47247708</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47247708</guid></item><item><title><![CDATA[New comment by mccoyb in "Claude's Cycles [pdf]"]]></title><description><![CDATA[
<p>Sorry, are you familiar with what a <i>next token distribution</i> is, mathematically speaking?<p>If you are not, let me introduce you to the term: a probability distribution.<p>Just because it has profound properties ... doesn't make it <i>different</i>.<p>> has all the trappings of the stochastic parrot-style HN-discourse that has been consistently wrong for almost a decade now<p>Perhaps respond to my actual comment rather than to whatever meta-level grouping you wish to interpret it as part of?<p>> It contains a number of premises that we have no business being confident in. We are potentially witnessing the obviation of human cognitive labor.<p>What premises? Be clear.</p>
]]></description><pubDate>Tue, 03 Mar 2026 22:06:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47239738</link><dc:creator>mccoyb</dc:creator><comments>https://news.ycombinator.com/item?id=47239738</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47239738</guid></item></channel></rss>