<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: prostheticrazor</title><link>https://news.ycombinator.com/user?id=prostheticrazor</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 10 Apr 2026 04:44:08 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=prostheticrazor" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by prostheticrazor in "Improving 15 LLMs at Coding in One Afternoon. Only the Harness Changed"]]></title><description><![CDATA[
<p>We build MCP servers that wrap fund APIs. The biggest performance variable we’ve found isn’t the model, it’s how much domain context the harness provides before the model has to reason. Same model, generic prompt versus one loaded with our procedural docs: the gap is wider than switching between model generations. Which surprised me.<p>The post’s framing is right but undersells what the harness actually does in production. It’s your trust layer: what can the model touch, what can’t it touch, and how cheaply can you recover when it gets something wrong. We spend something like 70% of engineering time on the recovery path, not the inference. Whether that ratio is right I’m not sure, but it’s where we’ve ended up.<p>On MCP overhead downthread: yes, it’s real. In regulated environments you need the audit trail and the kill switch, and a tool boundary is how you get both. The unsolved part is keeping the protocol thin enough that you’re not burning tokens on ceremony.</p>
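<p>To make the trust-layer idea concrete, here’s a minimal sketch of a tool boundary with an allowlist, an audit trail, and a kill switch. All the names here (the tool names, <code>call_tool</code>, the in-memory log) are hypothetical illustrations, not our actual implementation or any specific MCP SDK API:</p>
<pre><code>
```python
import time

KILL_SWITCH = False  # flipped by ops to halt all tool calls at once

# Hypothetical read-only fund-API tools the model is allowed to touch.
ALLOWED_TOOLS = {"get_fund_nav", "list_holdings"}

AUDIT_LOG = []  # in production this would be an append-only store


def call_tool(name, args, handler):
    """Dispatch a model-requested tool call through the trust boundary."""
    if KILL_SWITCH:
        raise RuntimeError("tool calls disabled by kill switch")
    if name not in ALLOWED_TOOLS:
        # Denials are audited too: the trail must show what the model tried.
        AUDIT_LOG.append({"t": time.time(), "tool": name, "status": "denied"})
        raise PermissionError(f"model may not touch {name}")
    record = {"t": time.time(), "tool": name, "args": args, "status": "ok"}
    try:
        return handler(args)
    except Exception as exc:
        # The cheap recovery path: record the failure and surface it,
        # so the harness (not the model) decides what happens next.
        record["status"] = f"error: {exc}"
        raise
    finally:
        AUDIT_LOG.append(record)
```
</code></pre>
<p>The point is that every decision about what the model may do lives in this one choke point, which is what makes the audit and the kill switch cheap.</p>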
]]></description><pubDate>Sun, 22 Feb 2026 19:26:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47113821</link><dc:creator>prostheticrazor</dc:creator><comments>https://news.ycombinator.com/item?id=47113821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47113821</guid></item><item><title><![CDATA[New comment by prostheticrazor in "The AI coding trap"]]></title><description><![CDATA[
<p>The article is pretty interesting, perhaps some marmite takes, but the bit that chimed with me is the distinction between vibe coding and AI-driven engineering. Senior management at my work is obsessed with vibe coding and is constantly pushing engineers to promote vibe code to PROD. It’s dispiriting to see parts of our code base begin to fill with manager+LLM slop …</p>
]]></description><pubDate>Sun, 28 Sep 2025 19:06:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45406952</link><dc:creator>prostheticrazor</dc:creator><comments>https://news.ycombinator.com/item?id=45406952</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45406952</guid></item><item><title><![CDATA[Canal Chase]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/UoB-COMSM0166/2025-group-14">https://github.com/UoB-COMSM0166/2025-group-14</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44384235">https://news.ycombinator.com/item?id=44384235</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 26 Jun 2025 04:39:53 +0000</pubDate><link>https://github.com/UoB-COMSM0166/2025-group-14</link><dc:creator>prostheticrazor</dc:creator><comments>https://news.ycombinator.com/item?id=44384235</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44384235</guid></item></channel></rss>