<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: stuxf</title><link>https://news.ycombinator.com/user?id=stuxf</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 01:08:42 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=stuxf" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by stuxf in "Hardening Firefox with Anthropic's Red Team"]]></title><description><![CDATA[
<p>Makes sense, thank you!</p>
]]></description><pubDate>Fri, 06 Mar 2026 12:47:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47274310</link><dc:creator>stuxf</dc:creator><comments>https://news.ycombinator.com/item?id=47274310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47274310</guid></item><item><title><![CDATA[New comment by stuxf in "Hardening Firefox with Anthropic's Red Team"]]></title><description><![CDATA[
<p>It's interesting that they counted these as security vulnerabilities (from the linked Anthropic article):<p>> “Crude” is an important caveat here. The exploits Claude wrote only worked on our testing environment, which intentionally removed some of the security features found in modern browsers. This includes, most importantly, the sandbox, the purpose of which is to reduce the impact of these types of vulnerabilities. Thus, Firefox’s “defense in depth” would have been effective at mitigating these particular exploits.</p>
]]></description><pubDate>Fri, 06 Mar 2026 12:39:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47274253</link><dc:creator>stuxf</dc:creator><comments>https://news.ycombinator.com/item?id=47274253</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47274253</guid></item><item><title><![CDATA[New comment by stuxf in "The path to ubiquitous AI (17k tokens/sec)"]]></title><description><![CDATA[
<p>I totally buy the thesis on specialization here; it makes total sense.<p>Aside from the obvious concern that this is a tiny 8B model, I'm also a bit skeptical of the power draw. 2.4 kW feels a little high; someone should do the napkin math on the throughput-to-power ratio against the H200 and other chips.</p>
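For a rough version of that napkin math: the 17k tokens/sec and 2.4 kW figures come from the article, but the H200 numbers below are placeholder assumptions (per-GPU throughput for an 8B model varies a lot with batch size and serving stack), so treat this as a sketch, not a measurement.

```python
# Napkin math: tokens/sec per watt for the specialized chip vs. an
# H200 baseline. The H200 throughput figure is an assumption, not a
# measured number.

specialized_tps = 17_000    # tokens/sec, from the article
specialized_watts = 2_400   # 2.4 kW power draw, from the article

h200_tps = 1_000            # ASSUMED per-GPU throughput for an 8B model
h200_watts = 700            # H200 TDP, approximate

specialized_eff = specialized_tps / specialized_watts  # tok/s per watt
h200_eff = h200_tps / h200_watts

print(f"specialized: {specialized_eff:.2f} tok/s/W")  # ~7.08
print(f"H200:        {h200_eff:.2f} tok/s/W")         # ~1.43
```

Under these assumptions the specialized chip comes out roughly 5x more efficient per watt, but the conclusion is only as good as the assumed H200 throughput.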
]]></description><pubDate>Fri, 20 Feb 2026 11:35:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47086706</link><dc:creator>stuxf</dc:creator><comments>https://news.ycombinator.com/item?id=47086706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47086706</guid></item><item><title><![CDATA[New comment by stuxf in "Show HN: Echo, an iOS SSH+mosh client built on Ghostty"]]></title><description><![CDATA[
<p>Finally I can move off Termius; this looks great!</p>
]]></description><pubDate>Wed, 18 Feb 2026 19:52:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47065476</link><dc:creator>stuxf</dc:creator><comments>https://news.ycombinator.com/item?id=47065476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47065476</guid></item><item><title><![CDATA[New comment by stuxf in "Expensively Quadratic: The LLM Agent Cost Curve"]]></title><description><![CDATA[
<p>> Some coding agents (Shelley included!) refuse to return a large tool output back to the agent after some threshold. This is a mistake: it's going to read the whole file, and it may as well do it in one call rather than five.<p>Disagree with this: IMO the primary reason these limits still need to exist is for when the agent messes up (e.g. reads a file that is too large, like a bundle file), or when a grep command in a large codebase hits way too many files and overloads the context.<p>Otherwise, lots of interesting stuff in this article! Having a precise calculator is very useful for figuring out how much we should put into an agent loop to reach a cost optimum (and not just a performance optimum) for our tasks, which is something that's been pretty underserved.</p>
]]></description><pubDate>Mon, 16 Feb 2026 07:42:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47032078</link><dc:creator>stuxf</dc:creator><comments>https://news.ycombinator.com/item?id=47032078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47032078</guid></item><item><title><![CDATA[New comment by stuxf in "AI Slop vs. OSS Security"]]></title><description><![CDATA[
<p>I agree with a lot of what's said in this article, and I also think some sort of centralized trust system for OSS bug bounties would be a really good solution to this problem.<p>> The downside is that it makes it harder for new researchers to enter the field, and it risks creating an insider club.<p>I also think this concern can largely be reduced to a nonissue. New researchers would start with a trust score of zero, for example, while people who consistently submit AI slop would end up with a very low score and could be filtered out fairly easily.</p>
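As a minimal sketch of that filtering idea (names, the scoring formula, and the threshold are all hypothetical, just to show that newcomers at zero pass while consistent slop submitters don't):

```python
# Hypothetical trust-score filter: newcomers start at 0 and get full
# review; only submitters with a consistently bad track record are
# routed away from maintainer review. Threshold is illustrative.

from dataclasses import dataclass

@dataclass
class Submitter:
    name: str
    accepted: int = 0   # reports judged valid
    rejected: int = 0   # reports judged slop/invalid

    @property
    def score(self) -> float:
        total = self.accepted + self.rejected
        # New researchers have no history, so they start at 0, not negative.
        return 0.0 if total == 0 else (self.accepted - self.rejected) / total

def needs_full_review(s: Submitter, threshold: float = -0.5) -> bool:
    # Only filter out submitters well below zero; newcomers (score 0) pass.
    return s.score > threshold

newcomer = Submitter("new_researcher")                  # score 0.0
slop_bot = Submitter("slop_bot", accepted=1, rejected=19)  # score -0.9

print(needs_full_review(newcomer))   # True: full review
print(needs_full_review(slop_bot))   # False: filtered out
```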
]]></description><pubDate>Thu, 06 Nov 2025 17:01:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45837385</link><dc:creator>stuxf</dc:creator><comments>https://news.ycombinator.com/item?id=45837385</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45837385</guid></item><item><title><![CDATA[From MCP to shell: MCP auth flaws enable RCE in Claude Code, Gemini CLI and more]]></title><description><![CDATA[
<p>Article URL: <a href="https://verialabs.com/blog/from-mcp-to-shell/">https://verialabs.com/blog/from-mcp-to-shell/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45348183">https://news.ycombinator.com/item?id=45348183</a></p>
<p>Points: 148</p>
<p># Comments: 40</p>
]]></description><pubDate>Tue, 23 Sep 2025 15:09:50 +0000</pubDate><link>https://verialabs.com/blog/from-mcp-to-shell/</link><dc:creator>stuxf</dc:creator><comments>https://news.ycombinator.com/item?id=45348183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45348183</guid></item></channel></rss>