<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Denzel</title><link>https://news.ycombinator.com/user?id=Denzel</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 21:20:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Denzel" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Denzel in "AI got the blame for the Iran school bombing. The truth is more worrying"]]></title><description><![CDATA[
<p>Thanks for the point-by-point.<p>Your first two quotes are about targeting in the Iraq War; specifically, how the breakdown in careful analysis, precipitated by the new systems, led to the exact mis-targeting they were trying to solve. That’s what the entire article is about.<p>And your third quote is from an ex-official commenting on the event <i>after</i> the school strike happened.<p>These quotes contradict your original point, i.e., they show how careful analysis has been designed out of the system.<p>> We killed young kids, but not on purpose. We targeted a building and intent matters. I refuse to believe anyone in the decision chain would move forward if they believed kids were going to be killed. If you do - how can you? Why would they?<p>This sounds incredibly naive. For starters, plausible deniability due to diffuse responsibility is a thing.<p>“Of course we don’t target schools and kill children, this was a system error.” But the message gets sent regardless, and meanwhile we have people arguing back-and-forth over grains of sand because they took an action with deliberate plausible deniability.<p>For a historical analog that involved killing US children “unintentionally”, you can read up on the Ludlow Massacre - <a href="https://www.pbs.org/wgbh/americanexperience/features/rockefellers-ludlow/#:~:text=April%2021%2C%201914:%20Rockefeller%20to,been%20committed%20under%20your%20authority." rel="nofollow">https://www.pbs.org/wgbh/americanexperience/features/rockefe...</a><p>Of course they didn’t intend to kill the children; they only intended to disperse the strikers by setting their tents on fire. It was simply a mistake.</p>
]]></description><pubDate>Sat, 28 Mar 2026 03:44:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47551400</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=47551400</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47551400</guid></item><item><title><![CDATA[New comment by Denzel in "AI got the blame for the Iran school bombing. The truth is more worrying"]]></title><description><![CDATA[
<p>Friend, TFA commonly refers to the effing article that’s posted for discussion.<p>EDIT: The irony that GP then goes on to quote TFA and not NYT.</p>
]]></description><pubDate>Sat, 28 Mar 2026 03:10:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47551194</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=47551194</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47551194</guid></item><item><title><![CDATA[New comment by Denzel in "AI got the blame for the Iran school bombing. The truth is more worrying"]]></title><description><![CDATA[
<p>You’re acting like the U.S. government is a monolithic good faith actor right now. The current administration’s behavior is qualitatively different than past administrations.<p>Do you also believe this administration will ever officially confirm Renee Good and Alex Pretti <i>were not</i> domestic terrorists?<p>It’s hard to interpret your points charitably here.</p>
]]></description><pubDate>Sat, 28 Mar 2026 03:07:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47551183</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=47551183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47551183</guid></item><item><title><![CDATA[New comment by Denzel in "AI got the blame for the Iran school bombing. The truth is more worrying"]]></title><description><![CDATA[
<p><a href="https://www.nytimes.com/2026/03/11/us/politics/iran-school-missile-strike.html" rel="nofollow">https://www.nytimes.com/2026/03/11/us/politics/iran-school-m...</a><p>> An ongoing [United States] military investigation has determined that the United States is responsible for a deadly Tomahawk missile strike on an Iranian elementary school, according to U.S. officials and others familiar with the preliminary findings.</p>
]]></description><pubDate>Fri, 27 Mar 2026 18:45:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47546677</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=47546677</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47546677</guid></item><item><title><![CDATA[New comment by Denzel in "AI got the blame for the Iran school bombing. The truth is more worrying"]]></title><description><![CDATA[
<p>So I read the entire TFA; where do you see “quotes [from] those in the know who believe this should have been eliminated as a target”? I saw no such quotes about the school in TFA. Maybe I missed it.<p>> there was precisely one mis-strike in 1000s of sorties<p>How did you verify this? Because I’ll remind you, the U.S. administration denied responsibility for some time before owning up to this due to public pressure. Absent public pressure, I guess we would’ve had zero mis-strikes.<p>> so this already is a low error rate<p>As a father of similarly aged daughters, I can’t express enough how grotesque and disturbing the term “error rate” is here.<p>We targeted and killed young children. Plain and simple.<p>> However, you have made a very, very strong assumption that these targets were not carefully evaluated.<p>Let’s take the opposing assumption that this target was carefully evaluated, then. Please reason through the implications now.</p>
]]></description><pubDate>Fri, 27 Mar 2026 18:28:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47546445</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=47546445</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47546445</guid></item><item><title><![CDATA[New comment by Denzel in "Autoresearch on an old research idea"]]></title><description><![CDATA[
<p>How much did this cost? Has there ever been an engineering focus on performance for liquid?<p>It’s certainly cool, but the optimizations are so basic that I’d expect a performance engineer to find these within a day or two with some flame graphs and profiling.</p>
]]></description><pubDate>Mon, 23 Mar 2026 19:50:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47494272</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=47494272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47494272</guid></item><item><title><![CDATA[New comment by Denzel in "Agentic Engineering Patterns"]]></title><description><![CDATA[
<p>Mind linking the project so we can see the PRs?</p>
]]></description><pubDate>Thu, 05 Mar 2026 00:03:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47255751</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=47255751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47255751</guid></item><item><title><![CDATA[New comment by Denzel in "Claude Code is being dumbed down?"]]></title><description><![CDATA[
<p>Apologies, I may have misinterpreted the passage below from your repo:<p>> This crate was developed with the assistance of Claude Opus 4.5 initially to answer the shower thought "would the Braille Unicode trick work to visually simulate complex ball physics in a terminal?" Opus 4.5 one-shot the problem, so I decided to further experiment to make it more fun and colorful.<p>Also, yes, I don’t dispute that human-written software takes iteration as well. My point is that the significance of autonomous agentic coding feels exaggerated if I’m holding the LLM’s hand more than I have to hold a senior engineer’s hand.<p>That doesn’t mean the tech isn’t valuable. The claims just feel overstated.</p>
]]></description><pubDate>Thu, 12 Feb 2026 13:59:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46988913</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46988913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46988913</guid></item><item><title><![CDATA[New comment by Denzel in "Claude Code is being dumbed down?"]]></title><description><![CDATA[
<p>First, very cool! Thank you for sharing some actual projects with the prompts logged.<p>I think you and I have different definitions of “one-shotting”. If the model has to be steered, I don’t consider that a one-shot.<p>And you clearly “broke” the model a few times based on your prompt log where the model was unable to solve the problem given with the spec.<p>Honestly, your experience in these repos matches my daily experience with these models almost exactly.<p>I want to see good/interesting work where the model is going off and doing its thing for <i>multiple hours</i> without supervision.</p>
]]></description><pubDate>Thu, 12 Feb 2026 01:15:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46983679</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46983679</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46983679</guid></item><item><title><![CDATA[New comment by Denzel in "Claude Code is being dumbed down?"]]></title><description><![CDATA[
<p>Weird, I broke Opus 4.5 pretty easily by giving some code, a build system, and integration tests that demonstrate the bug.<p>CC confidently iterated until it discovered the issue. CC confidently communicated exactly what the bug was, a detailed step-by-step deep dive into all the sections of the code that contributed to it. CC confidently suggested a fix that it then implemented. CC declared victory after 10 minutes!<p>The bug was still there.<p>I’m willing to admit I might be “holding it wrong”. I’ve had some successes and failures.<p>It’s all very impressive, but I still have yet to see how people are consistently getting CC to work for hours on end to produce good work. That still feels far fetched to me.</p>
]]></description><pubDate>Wed, 11 Feb 2026 22:42:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46982281</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46982281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46982281</guid></item><item><title><![CDATA[New comment by Denzel in "The tech market is fundamentally fucked up and AI is just a scapegoat"]]></title><description><![CDATA[
<p>Good points - admittedly, I didn’t put enough effort into building connections through different pipelines back when I was contracting. Upwork and a few personal connections were my sole sources.<p>It just felt really difficult to do both the engineering work while trying to do customer development at the same time.<p>The fact that OP has been able to do this for so long, while supporting a family, piqued my interest.</p>
]]></description><pubDate>Thu, 29 Jan 2026 17:06:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46812986</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46812986</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46812986</guid></item><item><title><![CDATA[New comment by Denzel in "The tech market is fundamentally fucked up and AI is just a scapegoat"]]></title><description><![CDATA[
<p>Are you a one-person shop? How do you find clients?</p>
]]></description><pubDate>Thu, 29 Jan 2026 13:39:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46810050</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46810050</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46810050</guid></item><item><title><![CDATA[New comment by Denzel in "Ideas are cheap, execution is cheaper"]]></title><description><![CDATA[
<p>Very interesting, thanks for sharing! Looks like you have considerable experience with vibe coding to be able to produce that in 2 hours.</p>
]]></description><pubDate>Sun, 18 Jan 2026 19:23:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46671217</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46671217</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46671217</guid></item><item><title><![CDATA[New comment by Denzel in "AI coding assistants are getting worse?"]]></title><description><![CDATA[
<p>I’m well aware of what Google does and their AI strategy ;)</p>
]]></description><pubDate>Thu, 15 Jan 2026 14:46:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46633256</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46633256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46633256</guid></item><item><title><![CDATA[New comment by Denzel in "Ideas are cheap, execution is cheaper"]]></title><description><![CDATA[
<p>Would you mind sharing the repo?</p>
]]></description><pubDate>Thu, 15 Jan 2026 14:44:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46633218</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46633218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46633218</guid></item><item><title><![CDATA[New comment by Denzel in "AI coding assistants are getting worse?"]]></title><description><![CDATA[
<p>Cool, that potential 5x cost improvement just got delivered this year. A company can continue running the previous generation until EOL, or take a hit by writing off the residual value - either way they’ll have a mixed cost model that puts their token cost somewhere in the middle between previous and current gens.<p>Also, you’re missing material capex and opex costs from a DC perspective. Certain inputs exhibit diseconomies of scale when your demand outstrips market capacity. You do notice electricity cost is rising and companies are chomping at the bit to build out more power plants, right?<p>Again, I ran the numbers for simplicity’s sake to show it’s not clear cut that these models are profitable. “I can sort of see how you can get this to work” agrees with exactly what I said: it’s unclear, certainly not a slam dunk.<p>Especially when you factor in all the other real-world costs.<p>We’ll find out soon enough.</p>
]]></description><pubDate>Fri, 09 Jan 2026 08:48:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46551506</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46551506</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46551506</guid></item><item><title><![CDATA[New comment by Denzel in "Logistics Is Dying; Or – Dude, Where's My Mail?"]]></title><description><![CDATA[
<p>> It's that we're paying more for objectively worse service than we had a decade ago.<p>> I'm not asking for magic, I'm asking where went the reliability we already had, at the prices we're already paying.<p>My god, thank you! My partner and I have been talking about this for the past 2 years in the context of the food service and delivery service industry.<p>Greater than 50% of all our restaurant orders are straight up wrong or missing items, whether it’s from local places, chains, or fast food restaurants.<p>The unreliability is staggering, especially because we’re paying so much more!<p>It’s gotten so bad that we’re done with certain services and establishments for good now, or we make sure to QC before leaving the restaurant to ensure everything is in the bag.<p>Even more ironic, this happened a couple weeks ago at Texas Roadhouse — the same restaurant I worked in decades ago as a teenager, so I remember the process we had to go through for to-go orders.<p>First, we’d take the order over the phone. We’d repeat the order back to the customer to confirm everything (1st QC). When the food came up in the window, we’d pack the food in bags, crossing off every item on the receipt before stapling it to the bag (2nd QC). When the customer came to pick up their food, we’d have to take every box out of the bag, show the customer the food, and confirm that everything they expected in their order was there (3rd QC).<p>No customer. Ever left. With an incorrect order. Simple.<p>That process is gone now. We paid more and came home missing my partner’s meal. Wtf.</p>
]]></description><pubDate>Fri, 09 Jan 2026 05:02:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46550233</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46550233</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46550233</guid></item><item><title><![CDATA[New comment by Denzel in "2025: The Year in LLMs"]]></title><description><![CDATA[
<p>Great response, we’re like 98% aligned at a high-level. :) These next few years will be interesting.</p>
]]></description><pubDate>Fri, 09 Jan 2026 02:49:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46549548</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46549548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46549548</guid></item><item><title><![CDATA[New comment by Denzel in "AI coding assistants are getting worse?"]]></title><description><![CDATA[
<p>Uhm, you actually just proved their point if you run the numbers.<p>For simplicity’s sake we’ll assume DeepSeek 671B on 2 RTX 5090s running at 2 kW full utilization.<p>In 3 years you’ve paid $30k total: $20k for the system + $10k in electricity @ $0.20/kWh.<p>The model generates 500M-1B tokens total over 3 years @ 5-10 tokens/sec. Understand that’s total throughput for reasoning and output tokens.<p>You’re paying $30-$60/Mtok - more than both Opus 4.5 and GPT-5.2, for less performance and fewer features.<p>And like the other commenters point out, this doesn’t even factor in the extra DC costs when scaling it up for consumers, nor the costs to train the model.<p>Of course, you can play around with the parameters of the cost model, but this serves to illustrate it’s not so clear cut whether the current AI service providers are profitable.</p>
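<p>As a sanity check, the arithmetic in this comment can be reproduced in a few lines. Every input below is the comment’s own stated assumption (2 kW draw, $0.20/kWh, $20k hardware, 5-10 tokens/sec over 3 years), not a measurement:</p>

```python
# Sketch of the self-hosting cost model from the comment above.
# All inputs are the comment's assumptions, not benchmarks.

HOURS_PER_YEAR = 24 * 365
SECONDS_PER_YEAR = 3600 * HOURS_PER_YEAR
YEARS = 3

hardware_usd = 20_000
power_kw = 2.0
rate_usd_per_kwh = 0.20

# kW * hours * $/kWh over the full 3 years at full utilization
electricity_usd = power_kw * HOURS_PER_YEAR * YEARS * rate_usd_per_kwh
total_usd = hardware_usd + electricity_usd  # roughly $30k

for tok_per_sec in (5, 10):
    total_tokens = tok_per_sec * SECONDS_PER_YEAR * YEARS
    usd_per_mtok = total_usd * 1e6 / total_tokens
    print(f"{tok_per_sec} tok/s: {total_tokens / 1e6:.0f}M tokens, "
          f"${usd_per_mtok:.2f}/Mtok")
```

<p>Varying throughput between 5 and 10 tokens/sec lands in the roughly $30-$60/Mtok range the comment cites.</p>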
]]></description><pubDate>Fri, 09 Jan 2026 02:42:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46549513</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46549513</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46549513</guid></item><item><title><![CDATA[New comment by Denzel in "2025: The Year in LLMs"]]></title><description><![CDATA[
<p>We probably work at the same company, given you used MAANG instead of FAANG.<p>As one of the WAU (really DAU) you’re talking about, I want to call out a couple things: 1) the LOC metrics are flawed, and anyone using the agents knows this - eg, ask CC to rewrite the 1 commit you wrote into 5 different commits, now you have 5 100% AI-written commits; 2) total speed up across the entire dev lifecycle is far below 10x, most likely below 2x, but I don’t see any evidence of anyone measuring the counterfactuals to prove speed up anyways, so there’s no clear data; 3) look at token spend for power users, you might be surprised by how many SWE-years they’re spending.<p>Overall it’s unclear whether LLM-assisted coding is ROI-positive.</p>
]]></description><pubDate>Fri, 02 Jan 2026 02:30:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46460744</link><dc:creator>Denzel</dc:creator><comments>https://news.ycombinator.com/item?id=46460744</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46460744</guid></item></channel></rss>