<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: woah</title><link>https://news.ycombinator.com/user?id=woah</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 09:08:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=woah" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by woah in "App Store sees 84% surge in new apps as AI coding tools take off"]]></title><description><![CDATA[
<p>Who cares?</p>
]]></description><pubDate>Thu, 09 Apr 2026 04:43:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47699346</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47699346</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47699346</guid></item><item><title><![CDATA[New comment by woah in "Newly created Polymarket accounts win big on well-timed Iran ceasefire bets"]]></title><description><![CDATA[
<p>Can someone articulate what the harm of this is?</p>
]]></description><pubDate>Thu, 09 Apr 2026 02:44:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698753</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47698753</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698753</guid></item><item><title><![CDATA[New comment by woah in "Claude Managed Agents"]]></title><description><![CDATA[
<p>agentic software services</p>
]]></description><pubDate>Wed, 08 Apr 2026 20:03:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695560</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47695560</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695560</guid></item><item><title><![CDATA[New comment by woah in "Claude Managed Agents"]]></title><description><![CDATA[
<p>Are they entering their OpenAI throw-shit-at-the-wall phase?</p>
]]></description><pubDate>Wed, 08 Apr 2026 20:02:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695550</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47695550</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695550</guid></item><item><title><![CDATA[New comment by woah in "US and Iran agree to provisional ceasefire"]]></title><description><![CDATA[
<p>> much younger and more formidable Khameini<p>Formidable?</p>
]]></description><pubDate>Wed, 08 Apr 2026 04:41:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685336</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47685336</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685336</guid></item><item><title><![CDATA[New comment by woah in "Issue: Claude Code is unusable for complex engineering tasks with Feb updates"]]></title><description><![CDATA[
<p>I haven't noticed any issues on well-specified tasks, even ones requiring large amounts of thinking.<p>One thing I have noticed is that codebase quality influences the quality of Claude's new contributions. A messy codebase both makes it harder for Claude to do good work (obviously) and seems to engender almost a "screw it" sort of attitude, which makes sense since Claude is emulating human behavior. Seeing the state of everything, Claude may just go in and look for the simplest hacky solution that finishes the task at hand, since that is the only feasible option (fixing everything would be a far greater task).<p>Is it possible that this high-functioning senior dev team's practice of having 50+ concurrent agents commit 100k+ LOC per weekend produced a godawful pile of spaghetti code that is now literally impossible to maintain, even with superhuman AI?<p>It's amusing that the OP had Claude dump out a huge rigorous-sounding report without considering the huge confounding variable staring him in the face.</p>
]]></description><pubDate>Mon, 06 Apr 2026 21:12:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47667200</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47667200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47667200</guid></item><item><title><![CDATA[New comment by woah in "F-15E jet shot down over Iran"]]></title><description><![CDATA[
<p>Only one of the combatants is shooting at civilian shipping</p>
]]></description><pubDate>Sat, 04 Apr 2026 03:44:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47635486</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47635486</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47635486</guid></item><item><title><![CDATA[New comment by woah in "We replaced RAG with a virtual filesystem for our AI documentation assistant"]]></title><description><![CDATA[
<p>My intuition is that, since AI assistants are fictional characters in a story being autocompleted by an LLM, mechanisms that can be interpreted as human interactions with language, and that appear in the pretraining data, have a surprising advantage over mechanisms that amount to speculation about how the brain works, or to abstract concepts.</p>
]]></description><pubDate>Fri, 03 Apr 2026 21:28:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47632534</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47632534</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47632534</guid></item><item><title><![CDATA[New comment by woah in "What Is Copilot Exactly?"]]></title><description><![CDATA[
<p>Why do they all start with C?</p>
]]></description><pubDate>Wed, 01 Apr 2026 19:04:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47605085</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47605085</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47605085</guid></item><item><title><![CDATA[New comment by woah in "AI for American-produced cement and concrete"]]></title><description><![CDATA[
<p>Somebody needs to coin a new term for the scattershot zero-thought AI griping that is pervasive in online comments these days. Meatslop?<p>Obviously it's going to be more productive for a manufacturer to do a years-long curing test on 100 likely candidates instead of 100 random mixes. They obviously already screen candidates through traditional methods, but if this AI technique improves accuracy, all the better.</p>
]]></description><pubDate>Wed, 01 Apr 2026 18:10:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47604417</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47604417</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47604417</guid></item><item><title><![CDATA[New comment by woah in "The OpenAI graveyard: All the deals and products that haven't happened"]]></title><description><![CDATA[
<p>My guess is Sam Altman is a better VC than CEO: better at hype, networking, fundraising, and back-room political hijinks than at shipping a focused product.<p>He seems to be taking almost a "venture studio" approach by throwing shit at the wall, but the problem with these things is always that the "internal startups" are "founded" by people who don't have enough incentive or control over their product to perform as well as an actual startup, and who are distracted by internal politics. And frankly, the really good founders may just do their own startup rather than work on a quasi-startup inside a large org, so there's some selection bias as well.</p>
]]></description><pubDate>Wed, 01 Apr 2026 17:04:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47603561</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47603561</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47603561</guid></item><item><title><![CDATA[New comment by woah in "OpenAI closes funding round at an $852B valuation"]]></title><description><![CDATA[
<p>OpenAI is making $24B a year, so that works out to roughly a 35x revenue multiple. High, but not insane. Spinning this as a story of overinvestment doesn't make sense.</p>
]]></description><pubDate>Wed, 01 Apr 2026 03:43:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47596557</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47596557</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47596557</guid></item><item><title><![CDATA[New comment by woah in "We haven't seen the worst of what gambling and prediction markets will do"]]></title><description><![CDATA[
<p>> the more liquidity that gets dumped on a particular outcome, the more motivation there is to tamper with the real-world outcome<p>This only works if there are enough people betting on the other side. It's not some kind of magical money-multiplication machine. As more stuff like the one-day Iran bet or the 64:56 minute press conference happens, people will avoid taking bets on highly specific outcomes.<p>On more reasonable bets like "war with Iran in the next 6 months", if some shadowy cabal puts billions of dollars on the yes side, they just aren't going to have much upside unless a comparable amount sits on the no side.</p>
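<p>To make the payout math concrete, here's a minimal sketch (a simplified parimutuel view, not Polymarket's actual order-book mechanics; function names are made up): in a binary market where winning shares pay out $1, everything the yes side can win comes from money staked on no, so a huge stake against thin no-side liquidity caps the upside no matter how big the bet.</p>

```python
def max_yes_side_profit(yes_pool: float, no_pool: float) -> float:
    """Simplified parimutuel view of a binary prediction market.

    If YES resolves true, YES bettors get their stakes back plus the
    NO pool split pro rata. So the *total* profit available to the
    YES side is the NO pool, regardless of how much was bet on YES.
    """
    return no_pool

# A "cabal" dumping $1B on YES against only $10M of NO liquidity
# still splits at most $10M of profit across the whole YES side:
print(max_yes_side_profit(yes_pool=1_000_000_000, no_pool=10_000_000))
```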
]]></description><pubDate>Thu, 26 Mar 2026 22:55:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47536900</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47536900</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47536900</guid></item><item><title><![CDATA[New comment by woah in "Meta and YouTube found negligent in landmark social media addiction case"]]></title><description><![CDATA[
<p>Are there any takeaways here for builders of social media applications who are not Facebook or Google? Is this a warning to not make your newsfeed algorithm "too engaging" or is it only really relevant for big companies?</p>
]]></description><pubDate>Wed, 25 Mar 2026 18:31:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47521340</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47521340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47521340</guid></item><item><title><![CDATA[New comment by woah in "North Korean's 100k fake IT workers net $500M a year for Kim"]]></title><description><![CDATA[
<p>How are these IT workers fake? Sounds like they are really doing the job.</p>
]]></description><pubDate>Wed, 18 Mar 2026 16:44:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47428016</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47428016</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47428016</guid></item><item><title><![CDATA[New comment by woah in "Language Model Teams as Distributed Systems"]]></title><description><![CDATA[
<p>Steelmanning the other side of this question:<p>LLMs mostly do useful work by writing stories about AI assistants who issue various commands and reply to a user's prompts. These do work, but they are fundamentally like a screenplay that the LLM is continuing.<p>An "agent" is a great abstraction, since the LLM is used to continuing stories about characters going through narrative arcs. Scoping the work assigned to a particular agent also keeps its context clean and distraction-free.<p>So agent parallelism could be useful even if execution is completely sequential: separate characters and narrative arcs that intersect like real people acting independently and simultaneously are exactly what LLMs are good at writing about.<p>Seems like the important thing would be to avoid getting caught up on actual "wall time" parallelism.</p>
]]></description><pubDate>Mon, 16 Mar 2026 21:40:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47405316</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47405316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47405316</guid></item><item><title><![CDATA[New comment by woah in "Language model teams as distributed systems"]]></title><description><![CDATA[
<p>The current fad for "agent swarms" or "model teams" seems misguided, although it definitely makes for great paper fodder (especially if you combine it with distributed systems!) and gets the VCs hot.<p>An LLM running one query at a time can already generate a huge amount of text in a few hours, and drain your bank account too.<p>A "different agent" is just different context supplied in the query to the LLM. There is nothing more to it than that. Maybe some of them use a different model, but again, that's just a setting in OpenRouter or whatever.<p>Agent parallelism just doesn't seem necessary, and it makes everything harder. I'm not an expert, though; tell me where I'm wrong.</p>
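<p>To illustrate the "a different agent is just different context" point, here's a minimal sketch. Everything here is hypothetical: `make_agent` is a made-up helper, and `call_llm` stands in for whatever chat-completion client you already use.</p>

```python
# Hypothetical sketch: two "agents" are just two context strings sent to
# the same model through the same client function.
def make_agent(system_prompt, model="some-model"):
    def agent(user_message, call_llm):
        # call_llm is whatever chat client you already have; the "agent"
        # adds nothing beyond this system prompt and model setting.
        return call_llm(model=model, messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ])
    return agent

reviewer = make_agent("You are a strict code reviewer.")
planner = make_agent("You break tasks into small numbered steps.")
```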
]]></description><pubDate>Mon, 16 Mar 2026 21:13:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47404960</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47404960</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47404960</guid></item><item><title><![CDATA[New comment by woah in "Agents that run while I sleep"]]></title><description><![CDATA[
<p>Why does understanding computer science principles and software architecture, and instructing a person or an AI on how to fix them, require typing every line yourself?</p>
]]></description><pubDate>Thu, 12 Mar 2026 03:29:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47346090</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47346090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47346090</guid></item><item><title><![CDATA[New comment by woah in "Agents that run while I sleep"]]></title><description><![CDATA[
<p>Build features faster. Granted, this exposes the difference between people who like to finish projects and people who like to get paid a lot of money for typing on a keyboard.</p>
]]></description><pubDate>Wed, 11 Mar 2026 00:12:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47330399</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47330399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47330399</guid></item><item><title><![CDATA[New comment by woah in "LLM Writing Tropes.md"]]></title><description><![CDATA[
<p>Reading through this, I feel like I'm on Substack</p>
]]></description><pubDate>Sun, 08 Mar 2026 20:54:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47301345</link><dc:creator>woah</dc:creator><comments>https://news.ycombinator.com/item?id=47301345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47301345</guid></item></channel></rss>