<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mozdeco</title><link>https://news.ycombinator.com/user?id=mozdeco</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 13:58:35 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mozdeco" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mozdeco in "Hardening Firefox with Claude Mythos Preview"]]></title><description><![CDATA[
<p>> But report [1] says that "Some of these bugs showed evidence of memory corruption...", which implies that majority of these (which includes 271 bugs from Mythos) don't have evidence at all. Do I not understand something?<p>This is just the standard sentence we've been using for years; it has nothing to do with Mythos. For Mythos, almost all bugs show evidence of memory corruption (we do have a handful of bugs in JS IPC / JS Actors; one is in the blog post).<p>> Mythos is supposed to be pretty good at writing actual exploits, so (as I understand) there shouldn't be any serious problems with checking if bug is vulnerability or not.<p>Yes, but if we have a choice between writing exploits and scanning more source, potentially finding more bugs, then of course we prioritize the latter.</p>
]]></description><pubDate>Fri, 08 May 2026 12:36:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=48062173</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=48062173</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48062173</guid></item><item><title><![CDATA[New comment by mozdeco in "Hardening Firefox with Claude Mythos Preview"]]></title><description><![CDATA[
<p>No, it's a new post, see also<p><a href="https://hacks.mozilla.org/2026/05/behind-the-scenes-hardening-firefox/" rel="nofollow">https://hacks.mozilla.org/2026/05/behind-the-scenes-hardenin...</a></p>
]]></description><pubDate>Fri, 08 May 2026 00:33:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=48056963</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=48056963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48056963</guid></item><item><title><![CDATA[New comment by mozdeco in "Mozilla says 271 vulnerabilities found by Mythos and "almost no false positives""]]></title><description><![CDATA[
<p>No, we actually just posted a follow-up story with more details and opened several bugs, see also:<p><a href="https://hacks.mozilla.org/2026/05/behind-the-scenes-hardening-firefox/" rel="nofollow">https://hacks.mozilla.org/2026/05/behind-the-scenes-hardenin...</a></p>
]]></description><pubDate>Fri, 08 May 2026 00:32:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=48056950</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=48056950</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48056950</guid></item><item><title><![CDATA[New comment by mozdeco in "Hardening Firefox with Claude Mythos Preview"]]></title><description><![CDATA[
<p>Mythos did in fact write PoCs for all bugs that crash with a demonstration of memory-unsafe behavior (use-after-free, out-of-bounds reads/writes, etc.).<p>For us, that is substantial enough evidence to consider a bug a security vulnerability, unless shown otherwise. It has always been this way, also for fuzzing bugs.</p>
]]></description><pubDate>Thu, 07 May 2026 23:19:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=48056382</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=48056382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48056382</guid></item><item><title><![CDATA[New comment by mozdeco in "Hardening Firefox with Anthropic's Red Team"]]></title><description><![CDATA[
<p>The bugs are at least of the same quality as our internal fuzzing bugs. They are either crashes or assertion failures, both of which we consider bugs. Their value of course varies: not every assertion failure is ultimately a high-impact bug, and some have no impact on the user at all. The same applies to fuzzing bugs, though; there is really no difference here. And ultimately we want to fix all of these, because assertions have the potential to find very complex bugs, but only if you keep your software "clean" with respect to assertion failures.<p>The curl situation was completely different because, as far as I know, those bugs were not filed with actual testcases. They were purely static reports, and those kinds of reports eat up a lot of valuable resources to validate.</p>
]]></description><pubDate>Fri, 06 Mar 2026 17:04:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47277746</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=47277746</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47277746</guid></item><item><title><![CDATA[New comment by mozdeco in "Hardening Firefox with Anthropic's Red Team"]]></title><description><![CDATA[
<p>[working for Mozilla]<p>That's because there were none. All bugs came with verifiable testcases (crash tests) that crashed the browser or the JS shell.<p>For the JS shell, similar to fuzzing, a small fraction of these bugs were bugs in the shell itself (i.e. testing only) - but according to our fuzzing guidelines, these are not false positives and they will also be fixed.</p>
]]></description><pubDate>Fri, 06 Mar 2026 14:26:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47275272</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=47275272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47275272</guid></item><item><title><![CDATA[New comment by mozdeco in "Hardening Firefox with Anthropic's Red Team"]]></title><description><![CDATA[
<p>[work at Mozilla]<p>I agree that LLMs are sometimes wrong, which is why this new method is so valuable: it provides us with easily verifiable testcases rather than just some analysis that could be right or wrong. Purely triaging static vulnerability reports (i.e. no actual PoC) is very time-consuming and false-positive prone (the same issue as with pure static analysis).<p>I can't really confirm the part about "local" bugs anymore though, but that might also be a model thing. When I ran experiments longer ago, it was certainly true, especially for "one shot" approaches where you basically prompt the model once with source code and want some analysis back. But this changed with agentic SDKs, where more context can be pulled together automatically.</p>
]]></description><pubDate>Fri, 06 Mar 2026 13:18:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47274563</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=47274563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47274563</guid></item><item><title><![CDATA[New comment by mozdeco in "Partnering with Mozilla to improve Firefox's security"]]></title><description><![CDATA[
<p>And the Firefox side of the story: <a href="https://blog.mozilla.org/en/firefox/hardening-firefox-anthropic-red-team/" rel="nofollow">https://blog.mozilla.org/en/firefox/hardening-firefox-anthro...</a></p>
]]></description><pubDate>Fri, 06 Mar 2026 11:52:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47273847</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=47273847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47273847</guid></item><item><title><![CDATA[New comment by mozdeco in "Retrospective and technical details on the recent Firefox outage"]]></title><description><![CDATA[
<p>The infinite busy loop in this case was not in the tab (neither visible nor invisible). The loop was directly in the network stack, as stated in the post, not in the caller.</p>
]]></description><pubDate>Wed, 02 Feb 2022 13:11:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=30176980</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=30176980</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30176980</guid></item><item><title><![CDATA[New comment by mozdeco in "Retrospective and technical details on the recent Firefox outage"]]></title><description><![CDATA[
<p>> code that can end up blocking forever should have a timeout and recover from that timeout happening.<p>There was no way for the calling code to do this. This was literally an infinite loop inside the network stack. Imagine the network stack itself going `while(1) {}` on you, without checking if the request was canceled.<p>Even if you detect that this happens, there is nothing you can do as the caller. You can't even properly stop the thread, as it is not cooperating. So recovering from this type of failure is hard.</p>
]]></description><pubDate>Wed, 02 Feb 2022 12:39:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=30176680</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=30176680</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30176680</guid></item><item><title><![CDATA[New comment by mozdeco in "Retrospective and technical details on the recent Firefox outage"]]></title><description><![CDATA[
<p>All requests go through one socket thread, no matter which HTTP version. I am not a Necko engineer, but since requests can be upgraded, an HTTP/1 request could switch to HTTP/2 and if there was a separation by protocol, the request would have to be "moved" to a different thread. So I'm not sure that would work easily.</p>
]]></description><pubDate>Wed, 02 Feb 2022 12:23:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=30176561</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=30176561</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30176561</guid></item><item><title><![CDATA[New comment by mozdeco in "Retrospective and technical details on the recent Firefox outage"]]></title><description><![CDATA[
<p>> the fix has to be in the code that communicates back, it should fail gracefully.<p>The bug that caused the hang was in the network stack itself. There was no way the calling code could have prevented this in any way. You can see this by taking a look at the linked HTTP3 code. It's not that the higher-level code kept retrying over and over and thereby caused the hang; that was not the problem here.<p>Under "Lessons learned" you can also read "investigating action points both to make the browser more resilient towards such problems". I agree that this is broadly worded, but it covers ideas that would have made this technically recoverable (e.g. can network requests be compartmentalized to not block on a single network thread?).</p>
]]></description><pubDate>Wed, 02 Feb 2022 11:47:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=30176321</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=30176321</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30176321</guid></item><item><title><![CDATA[New comment by mozdeco in "Retrospective and technical details on the recent Firefox outage"]]></title><description><![CDATA[
<p>At this point, the code relied on the Content-Length header being present because the higher-level API was supposed to add it. The field that is supposed to be populated by Content-Length (mRequestBodyLenRemaining) is pre-initialized to 0.</p>
]]></description><pubDate>Wed, 02 Feb 2022 10:36:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=30175895</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=30175895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30175895</guid></item><item><title><![CDATA[New comment by mozdeco in "Retrospective and technical details on the recent Firefox outage"]]></title><description><![CDATA[
<p>Firefox generally does not block if a remote connection does not work. As explained in the post, the infinite loop was a bug in the network stack itself.<p>So yes, you can use Firefox in any offline environment.</p>
]]></description><pubDate>Wed, 02 Feb 2022 10:16:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=30175768</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=30175768</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30175768</guid></item><item><title><![CDATA[New comment by mozdeco in "Eliminating Data Races in Firefox"]]></title><description><![CDATA[
<p>This is absolutely true, and hence we run not only our tests under TSan but also fuzzing, to explore even more corner cases.<p>On the static vs. dynamic side, I would always opt for the dynamic tool when it can guarantee no false positives, even if the results are incomplete. It is pretty much impossible to deploy a tool that produces lots of false positives: developers will usually reject it at some point and question every result.</p>
]]></description><pubDate>Wed, 07 Apr 2021 07:56:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=26721657</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=26721657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26721657</guid></item><item><title><![CDATA[New comment by mozdeco in "Eliminating Data Races in Firefox"]]></title><description><![CDATA[
<p>It would probably be fairly easy to change Qt's Mutex implementation to be TSan-compatible and only do so for TSan builds (by swapping out the fences for atomics when building with TSan). This is what we did in Firefox.</p>
]]></description><pubDate>Wed, 07 Apr 2021 07:53:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=26721623</link><dc:creator>mozdeco</dc:creator><comments>https://news.ycombinator.com/item?id=26721623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26721623</guid></item></channel></rss>