<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: MrOrelliOReilly</title><link>https://news.ycombinator.com/user?id=MrOrelliOReilly</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 22 Apr 2026 08:40:49 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=MrOrelliOReilly" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by MrOrelliOReilly in "Ben Lerner's Big Feelings"]]></title><description><![CDATA[
<p>Yeah, good comparison. To Lerner's credit, he is always a poet at heart, which leads to concise, lyrical prose. DFW is voluminous in comparison; when it lands, it's great, but it can feel overinflated/overdone when it doesn't.</p>
]]></description><pubDate>Mon, 20 Apr 2026 11:21:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47832739</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=47832739</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47832739</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "Ben Lerner's Big Feelings"]]></title><description><![CDATA[
<p>I am a huge fan of Ben Lerner and have a copy of “Transcription” at home, waiting to be read. Autofiction is in many ways _the_ dominant mode of contemporary American literature, particularly among the literati of NYC/London (cf. Ocean Vuong, Tao Lin, Patricia Lockwood, etc., etc.). It can, for this reason, feel overdone and out of touch. But Lerner comes to the topic with such skill and intelligence that he really defines the genre for me, in a positive light.</p>
]]></description><pubDate>Mon, 20 Apr 2026 07:14:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47831246</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=47831246</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47831246</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "AI Will Be Met with Violence, and Nothing Good Will Come of It"]]></title><description><![CDATA[
<p>> People hate AI so much that they are prone to attribute to it everything that’s going wrong in their lives, regardless of the truth. That’s why they mix real arguments, like data theft, with fake ones, like the water stuff. Employers do it, too. Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.<p>Pertinent quote. A lot of AI discourse goes in circles trying to evaluate the truthiness of every individual complaint about AI. Obviously it's good to ensure claims are factual! But I believe it misses a broader point that people are resistant to AI, often out of fear, and are grasping for strategies to exert control. Or at least that's my read of it.<p>Refuting individual claims won't make a difference if the underlying anxieties aren't addressed (e.g., if I lose my job will I be compensated, will we protect ourselves against x-risk, etc).</p>
]]></description><pubDate>Sun, 12 Apr 2026 09:55:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47737823</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=47737823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47737823</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "Anthropic expands partnership with Google and Broadcom for next-gen compute"]]></title><description><![CDATA[
<p>That's a fair response. But I'm not aware of any metrics supporting the point that the lag time is decreasing. The discourse I've seen has focused more on the ways Claude/OpenAI/Google have pulled away from the rest of the pack.<p>To be clear, I accept you might be right, but I think the crux is whether lag time is shrinking, steady, or growing.</p>
]]></description><pubDate>Wed, 08 Apr 2026 08:28:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47687084</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=47687084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47687084</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "Anthropic expands partnership with Google and Broadcom for next-gen compute"]]></title><description><![CDATA[
<p>How so? Opus and Sonnet are frontier models which cannot easily be replicated. Compute has real physical constraints which require appropriate procurement at this scale. At least those two points seem like pretty strong moats against the majority of companies.</p>
]]></description><pubDate>Tue, 07 Apr 2026 09:34:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47672666</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=47672666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47672666</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "POSSE – Publish on your Own Site, Syndicate Elsewhere"]]></title><description><![CDATA[
<p>I like the principle, but I also find that we software folk commonly mistake the creation of a website for the goal, rather than the production of "content" (e.g. blog posts). I spent years trying to publish a blog and continually getting derailed building the ultimate static website. Recently I switched to a Substack hosted on my own subdomain, and now I'm finally writing. At least I still own the subdomain.</p>
]]></description><pubDate>Mon, 23 Mar 2026 10:21:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47487474</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=47487474</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47487474</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "Bus travel from Lima to Rio de Janeiro"]]></title><description><![CDATA[
<p>I’m not convinced you read the post. I believe the author makes it quite explicit that their goal was to actually visit these cities, noting this is far from the most efficient bus route. Their itinerary also shows long stays in several spots.</p>
]]></description><pubDate>Sun, 15 Mar 2026 22:27:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47392655</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=47392655</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47392655</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "I'm Getting a Whiff of Iain Banks' Culture"]]></title><description><![CDATA[
<p>I have had the same hypothesis around the recent operational success of US military interventions, but would agree with other comments here that this is more "vibes" than data. It's been reported that Maven (integrated with Claude) has been used extensively for Iran, but I haven't seen any hard evidence this is directly contributing to greater US military efficiency. I do buy the general thesis that AI would support operational excellence and solve attention problems across concurrent actions. Would be good to see some more reporting or combat analysis to try to measure the contributions of AI (e.g., how many more concurrent aerial sorties are taking place vs equivalent interventions, how many more strikes are "successful" vs past, etc).<p>EDIT: I see this post has been flagged. Why? I understand it’s political but it seems very much within the site’s ethos. I didn’t get the impression it was AI-written either.</p>
]]></description><pubDate>Mon, 09 Mar 2026 16:50:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47311592</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=47311592</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47311592</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "AI is not a coworker, it's an exoskeleton"]]></title><description><![CDATA[
<p>Correct, I am stating that the stochastic parrot hypothesis is a fallacy.</p>
]]></description><pubDate>Tue, 24 Feb 2026 07:15:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47133879</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=47133879</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47133879</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "Terence Tao, at 8 years old (1984) [pdf]"]]></title><description><![CDATA[
<p>I think you might be underrating the value of even that enabling work. Some parents would not have the financial resources to provide those learning materials. And some parents would take a normative stance on how an 8 year old ought to behave.<p>More importantly, it's not as though individuals like Clements or Erdos were corresponding with Terence directly to arrange a meeting. His parents clearly played an important role in facilitating and allowing these encounters. That deserves a lot of credit!</p>
]]></description><pubDate>Tue, 24 Feb 2026 07:13:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47133861</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=47133861</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47133861</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "AI is not a coworker, it's an exoskeleton"]]></title><description><![CDATA[
<p>This all sounds like the stochastic parrot fallacy. Total determinism is not the goal, and it is not a prerequisite for general intelligence. As you allude to above, humans are also not fully deterministic. I don't see what hard theoretical barriers you've presented toward AGI or future ASI.</p>
]]></description><pubDate>Fri, 20 Feb 2026 10:47:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47086287</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=47086287</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47086287</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "How does misalignment scale with model intelligence and task complexity?"]]></title><description><![CDATA[
<p>I don’t believe the article makes any claims on the infeasibility of a future ASI. It just explores likely failure modes.<p>It is fine to be worried about both alignment risks and economic inequality. The world is complex, there are many problems all at once, we don’t have to promote one at the cost of the other.</p>
]]></description><pubDate>Tue, 03 Feb 2026 06:04:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46867117</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=46867117</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46867117</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "Claude Code's new hidden feature: Swarms"]]></title><description><![CDATA[
<p>Totally agreed. Most of the weird concepts of Gas Town are just workarounds for bad behavior in Claude or the underlying models. Anthropic is in the best position to get their own model to adhere to orchestration steps, obviating the need for these extra layers. Beyond that, there shouldn’t actually be much to orchestration beyond a solid messaging and task management implementation.</p>
]]></description><pubDate>Sat, 24 Jan 2026 21:31:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46747893</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=46747893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46747893</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "Gas Town's agent patterns, design bottlenecks, and vibecoding at scale"]]></title><description><![CDATA[
<p>The author's high-value flowcharts vs Steve Yegge's AI art is enough of a case in point for how confusing his posts and repos are. However, this is a pervasive problem with AI coding tools. Unsurprisingly, the creators of these tools are also the most bullish about agentic coding, and the source code shows the consequences. Even Claude Code itself seems to experience an unusually high number of regressions or undocumented changes for such a widely used product. I had the same problem when recently trying to understand the details of spec-kit or sprites from their docs. Still, I agree that Gas Town is a very instructive example of what the future of AI coding will look like. I'm confident mature orchestration workflows will arrive in 2026.</p>
]]></description><pubDate>Fri, 23 Jan 2026 17:18:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46735041</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=46735041</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46735041</guid></item><item><title><![CDATA[Claude Cowboys]]></title><description><![CDATA[
<p>Article URL: <a href="https://write.ianwsperber.com/p/claude-cowboys">https://write.ianwsperber.com/p/claude-cowboys</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46719230">https://news.ycombinator.com/item?id=46719230</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 22 Jan 2026 13:51:28 +0000</pubDate><link>https://write.ianwsperber.com/p/claude-cowboys</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=46719230</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46719230</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "Comic-Con Bans AI Art After Artist Pushback"]]></title><description><![CDATA[
<p>For me, the killer feature would more be _autocomplete_ for art. I love to cartoon and doodle, but don’t have the time/patience/skillset to build professional digital assets. If I could go from my pencil drawn sketch to a flashy png, that would be awesome! I think it’d be a nice use of AI, since it just allows me to do more with my own creativity.<p>Unfortunately whenever I’ve tried uploading a sketch to ChatGPT or Gemini, it seems to fixate on details of my sketch, and recreates my mistakes in high fidelity. It fails to take a creative leap toward a good result. I’ve heard some professionals have gotten good results building custom workflows in ComfyUI.</p>
]]></description><pubDate>Wed, 21 Jan 2026 15:18:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46706868</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=46706868</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46706868</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "The Agentic AI Handbook: Production-Ready Patterns"]]></title><description><![CDATA[
<p>I will ruefully admit that I had also planned a similar blog post! I am hoping I can still add some value to the conversation, but it does seem like _everyone_ is writing about agentic development right now.</p>
]]></description><pubDate>Wed, 21 Jan 2026 10:10:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46703457</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=46703457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46703457</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "The Agentic AI Handbook: Production-Ready Patterns"]]></title><description><![CDATA[
<p>This is a great consolidation of various techniques and patterns for agentic coding. It’s valuable just to standardize our vocabulary in this new world of AI led or assisted programming. I’ve seen a lot of developers all converging toward similar patterns. Having clear terms and definitions for various strategies can help a lot in articulating the best way to solve a given problem. Not so different from approaching a problem and saying “hey, I think we’d benefit from TDD here.”</p>
]]></description><pubDate>Wed, 21 Jan 2026 07:52:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46702445</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=46702445</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46702445</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "Nvidia Stock Crash Prediction"]]></title><description><![CDATA[
<p>And I just predict how you’ll predict</p>
]]></description><pubDate>Tue, 20 Jan 2026 21:48:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46698195</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=46698195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46698195</guid></item><item><title><![CDATA[New comment by MrOrelliOReilly in "Prediction markets are ushering in a world in which news becomes about gambling"]]></title><description><![CDATA[
<p>I believe there are valid critiques to be made of prediction markets, particularly on the morality of allowing bets on events with serious real-world outcomes (the market could create an incentive for an insider to actualize that bad outcome, hence we should ban the market as it increases the odds of a bad outcome occurring) or on the negative repercussions of gambling addiction. Instead of making either of these valid arguments, the article decides to critique the epistemic value of prediction markets. It comes off to me as ill-informed handwringing and tribal signaling rather than a critical engagement with the topic that offers a meaningful critique.</p>
]]></description><pubDate>Tue, 20 Jan 2026 15:26:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46692772</link><dc:creator>MrOrelliOReilly</dc:creator><comments>https://news.ycombinator.com/item?id=46692772</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46692772</guid></item></channel></rss>