<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: IainIreland</title><link>https://news.ycombinator.com/user?id=IainIreland</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 14:07:27 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=IainIreland" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by IainIreland in "Hardening Firefox with Claude Mythos Preview"]]></title><description><![CDATA[
<p>If I had to guess, I'd say that AI is better at finding TOCTOU bugs than fuzzing because it starts by looking at the code and trying to find problems with it, which naturally leads it to experiment with questions like "is there any way to make this assumption false?", whereas fuzzing is more brute force. Fuzzing can explore way more possible states, but AI is better at picking good ones.<p>In this particular sense, AI tends to find bugs that are closer to what we'd see from a human researcher reading the code. Fuzz bugs are often more "here's a seemingly innocuous sequence of statements that randomly happen to collide three corner cases in an unexpected way".<p>Outside of SpiderMonkey, my understanding is that many of the best vulnerabilities were in code that is difficult to fuzz effectively for whatever reason.</p>
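For a toy illustration of the TOCTOU shape in plain JavaScript (an invented example, not an actual engine bug — `getAt` and `evil` are made up): check an invariant, run user code that can silently break it, then act on the stale result.

```javascript
const arr = [1, 2, 3];
const evil = {
  // valueOf runs in the middle of the "checked" operation...
  valueOf() { arr.length = 0; return 1; } // ...and invalidates the check
};

// Naive "check, then act" sequence:
function getAt(a, idx) {
  if (a.length > 0) {        // time of check
    const i = Number(idx);   // may call back into user code (evil.valueOf)
    return a[i];             // time of use: acts on a now-stale assumption
  }
  return undefined;
}

console.log(getAt(arr, evil)); // undefined: the check was invalidated in between
```

In an engine's C++ internals the analogous stale assumption can be a dangling pointer or wrong object layout rather than a harmless `undefined`, which is why these bugs are security-relevant.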
]]></description><pubDate>Fri, 08 May 2026 00:21:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=48056874</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=48056874</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48056874</guid></item><item><title><![CDATA[New comment by IainIreland in "Hardening Firefox with Claude Mythos Preview"]]></title><description><![CDATA[
<p>The same is also true of a good security researcher, and has been for a long time. The question is mostly whether it takes long enough to come up with a testcase that we've managed to ship the fix to all affected releases, and given people some time to update. (And maybe LLMs do change the calculus there! We'll have to wait and see.)</p>
]]></description><pubDate>Thu, 07 May 2026 23:57:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=48056690</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=48056690</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48056690</guid></item><item><title><![CDATA[New comment by IainIreland in "Hardening Firefox with Claude Mythos Preview"]]></title><description><![CDATA[
<p>I work on SpiderMonkey, so I mostly looked at the JS bugs. It was a smorgasbord. Broadly speaking I'd say the most impressive bugs were TOCTOU issues, where we checked something and later acted on it, and the testcase found a clever way to invalidate the result of the check in between.<p>If you look closely at, say, this patch, you might get a sense of what I mean (although the real cleverness is in the testcase, which we have not made public): <a href="https://hg-edge.mozilla.org/integration/autoland/rev/c29515d5f859" rel="nofollow">https://hg-edge.mozilla.org/integration/autoland/rev/c29515d...</a></p>
]]></description><pubDate>Thu, 07 May 2026 23:12:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=48056320</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=48056320</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48056320</guid></item><item><title><![CDATA[New comment by IainIreland in "Hardening Firefox with Claude Mythos Preview"]]></title><description><![CDATA[
<p>Yeah, fuzzing, sanitizers, and bug bounties were our main pre-AI tools for finding bugs.</p>
]]></description><pubDate>Thu, 07 May 2026 23:02:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=48056230</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=48056230</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48056230</guid></item><item><title><![CDATA[New comment by IainIreland in "Hardening Firefox with Claude Mythos Preview"]]></title><description><![CDATA[
<p>There doesn't have to be a huge qualitative discontinuity between Opus and Mythos. It's just that Mythos has reached a threshold where it's finally smart enough that putting it in a loop and asking it to find bugs is suddenly <i>really</i> effective. Especially at the beginning, Mozilla wasn't doing anything particularly clever with prompts. Mythos is just smart enough that the hit rate on obvious prompts is high enough to matter. (Maybe you can get similar performance out of Opus 4.6 with really smart prompts, but AFAICT nobody had managed it until Mythos.)</p>
]]></description><pubDate>Thu, 07 May 2026 23:00:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=48056218</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=48056218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48056218</guid></item><item><title><![CDATA[New comment by IainIreland in "Hardening Firefox with Claude Mythos Preview"]]></title><description><![CDATA[
<p>I work at Mozilla; I fixed a bunch of these bugs.<p>In general, I would say that our use of "vulnerability" lines up with what jerrythegerbil calls "potential vulnerability". (In cases with a POC, we would likely use the word "exploit".) Our goal is to keep Firefox secure. Once it's clear that a particular bug <i>might</i> be exploitable, it's usually not worth a lot of engineering effort to investigate further; we just fix it. We spend a little while eyeballing things for the purpose of sorting into sec-high, sec-moderate, etc, and to help triage incoming bugs, but if there's any real question, we assume the worst and move on.<p>So were all 271 bugs exploitable? Absolutely not. But they were all security bugs according to the normal standards that we've been applying for years.<p>(Partial exception: there were some bugs that might normally have been opened up, but were kept hidden because Mythos wasn't public information yet. But those bugs would have been marked sec-other, and not included in the count.)<p>So if you think we're guilty of inflating the number of "real" vulnerabilities found by Mythos, bear in mind that we've also been consistently inflating the baseline. The spike in the Firefox Security Fixes by Month graph is very, very real:
<a href="https://hacks.mozilla.org/2026/05/behind-the-scenes-hardening-firefox/" rel="nofollow">https://hacks.mozilla.org/2026/05/behind-the-scenes-hardenin...</a></p>
]]></description><pubDate>Thu, 07 May 2026 22:47:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=48056110</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=48056110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48056110</guid></item><item><title><![CDATA[New comment by IainIreland in "The acyclic e-graph: Cranelift's mid-end optimizer"]]></title><description><![CDATA[
<p>This is really cool. Thanks for the write-up, Chris!<p>I kept waiting for "sea of nodes with CFG" to be shortened to SeaFG, and it never happened. I guess maybe it's ambiguous out loud.</p>
]]></description><pubDate>Tue, 14 Apr 2026 19:57:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47770652</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=47770652</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47770652</guid></item><item><title><![CDATA[New comment by IainIreland in "Garbage collection for Rust: The finalizer frontier"]]></title><description><![CDATA[
<p>One clear use case for GC in Rust is for implementing other languages (eg writing a JS engine). When people ask why SpiderMonkey hasn't been rewritten in Rust, one of the main technical blockers I generally bring up is that safe, ergonomic, performant GC in Rust still appears to be a major research project. ("It would be a whole lot of work" is another, less technical problem.)<p>For a variety of reasons I don't think this particular approach is a good fit for a JS engine, but it's still very good to see people chipping away at the design space.</p>
]]></description><pubDate>Wed, 15 Oct 2025 15:46:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45594345</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=45594345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45594345</guid></item><item><title><![CDATA[New comment by IainIreland in "Copy-and-Patch: A Copy-and-Patch Tutorial"]]></title><description><![CDATA[
<p>Cranelift does not use copy-and-patch. Consider, for example, this file, which implements part of the instruction generation logic for x64: <a href="https://github.com/bytecodealliance/wasmtime/blob/main/cranelift/codegen/src/isa/x64/inst/emit.rs" rel="nofollow">https://github.com/bytecodealliance/wasmtime/blob/main/crane...</a><p>Copy-and-patch is a technique for reducing the amount of effort it takes to write a JIT by leaning on an existing AOT compiler's code generator. Instead of generating machine code yourself, you can get LLVM (or another compiler) to generate a small snippet of code for each operation in your internal IR. Then codegen is simply a matter of copying the precompiled snippet and patching up the references.<p>The more resources are poured into a JIT, the less likely it is to use copy-and-patch. You get more control/flexibility doing codegen yourself.<p>But see also Deegen for a pretty cool example of trying to push this approach as far as possible: <a href="https://aha.stanford.edu/deegen-meta-compiler-approach-high-performance-vms-low-engineering-costb" rel="nofollow">https://aha.stanford.edu/deegen-meta-compiler-approach-high-...</a></p>
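A minimal sketch of the copy-and-patch idea (the byte values and offsets here are invented for illustration; a real stencil would be emitted by an AOT compiler like LLVM): each IR operation gets a precompiled machine-code "stencil" with holes, and codegen just copies the stencil and writes the operands into the holes.

```javascript
// Hypothetical stencil: precompiled bytes with a 4-byte hole for an immediate.
// This particular pattern happens to be x86-64 "mov eax, imm32; ret".
const stencil = {
  bytes: Uint8Array.from([0xb8, 0x00, 0x00, 0x00, 0x00, 0xc3]),
  patchOffset: 1, // where the 32-bit immediate goes
};

// "Codegen": copy the stencil, then patch the operand into the hole.
function emit(stencil, imm32) {
  const code = stencil.bytes.slice(); // copy
  new DataView(code.buffer).setUint32(stencil.patchOffset, imm32, true); // patch (little-endian)
  return code;
}

const code = emit(stencil, 42);
console.log(code[1], code[2], code[3], code[4]); // 42 0 0 0
```

A real implementation also has to handle relocations between snippets (jump targets, calls into the runtime), which is where most of the remaining complexity lives.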
]]></description><pubDate>Tue, 14 Oct 2025 17:02:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=45582360</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=45582360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45582360</guid></item><item><title><![CDATA[New comment by IainIreland in "Tracing JITs in the Real World CPython Core Dev Sprint"]]></title><description><![CDATA[
<p>Yeah, SM will compile functions with try/catch/finally, but we don't support unwinding directly into optimized code, so the catch block itself will not be optimized.</p>
]]></description><pubDate>Thu, 25 Sep 2025 21:38:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45379467</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=45379467</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45379467</guid></item><item><title><![CDATA[New comment by IainIreland in "Tracing JITs in the Real World CPython Core Dev Sprint"]]></title><description><![CDATA[
<p>I don't know how JSC handles it, but in SM `eval` has significant negative effects on surrounding code. (We also decline to optimize functions containing `with` statements, but that's less because it's impossible and more because nobody uses them.)</p>
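As a concrete illustration of why direct `eval` is so hostile to optimization: it can read and write any binding in the enclosing scope by name, so the engine has to keep the whole environment materialized and stay pessimistic about the surrounding code.

```javascript
function f(x) {
  let secret = 1;
  // Direct eval sees (and can mutate) every local by name, so the engine
  // can't keep `x` or `secret` in registers or optimize around this call.
  eval("secret = x + 1;");
  return secret;
}
console.log(f(41)); // 42
```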
]]></description><pubDate>Thu, 25 Sep 2025 21:18:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45379227</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=45379227</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45379227</guid></item><item><title><![CDATA[New comment by IainIreland in "The "high-level CPU" challenge (2008)"]]></title><description><![CDATA[
<p>This isn't about languages; it's about hardware. Should hardware be "higher-level" to support higher level languages? The author says no (and I am inclined to agree with him).</p>
]]></description><pubDate>Tue, 12 Aug 2025 18:04:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44879797</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=44879797</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44879797</guid></item><item><title><![CDATA[New comment by IainIreland in "Oodle 2.9.14 and Intel 13th/14th gen CPUs"]]></title><description><![CDATA[
<p>This is really impressive analysis.</p>
]]></description><pubDate>Thu, 22 May 2025 16:54:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44063970</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=44063970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44063970</guid></item><item><title><![CDATA[New comment by IainIreland in "SDB Scans the Ruby Stack Without the GVL"]]></title><description><![CDATA[
<p>This doesn't read as AI-generated to me at all.<p>The prose isn't polished enough to be AI. AI generation is unlikely to produce missing spaces like "...which are not readable to humans.SDB uses eBPF ...", or grammatical inaccuracies like "Ensuring Fully Correctness".<p>As for the data race thing, it seems to me that there's a pretty clear distinction between rbspy's approach (as described in reference 1) and this blog post. rbspy is walking the native stack, which occasionally fails. SDB seems to be looking at Ruby's internals instead, and has some sort of generation-number design to identify cases where there was a data race.<p>Beyond that, this post just absolutely sounds like what somebody would write if they were trying to describe in prose why they think their multi-threaded code is correct, especially the "Scanning Stacks without the GVL" section.</p>
]]></description><pubDate>Mon, 19 May 2025 19:06:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44033574</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=44033574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44033574</guid></item><item><title><![CDATA[New comment by IainIreland in "A Perplexing JavaScript Parsing Puzzle"]]></title><description><![CDATA[
<p>Copy and paste all four lines at once.</p>
]]></description><pubDate>Wed, 12 Mar 2025 17:50:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=43345852</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=43345852</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43345852</guid></item><item><title><![CDATA[New comment by IainIreland in "An Attempt to Catch Up with JIT Compilers"]]></title><description><![CDATA[
<p>Taking a quick look at the JSC code, the main difference is that CacheIR is more pervasive and load-bearing. Even monomorphic cases go through CacheIR.<p>The main justification for CacheIR isn't that it enables us to do optimizations that can't be done in other ways. It's just a convenient unifying framework.</p>
]]></description><pubDate>Tue, 04 Mar 2025 06:10:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43250915</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=43250915</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43250915</guid></item><item><title><![CDATA[New comment by IainIreland in "An Attempt to Catch Up with JIT Compilers"]]></title><description><![CDATA[
<p>asm.js solves this in the specific case where somebody has compiled their C/C++ code to target asm.js. It doesn't solve it for arbitrary JS code.<p>asm.js is more like a weird frontend to wasm than a dialect of JS.</p>
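For reference, the "weird frontend" shape looks like this: ordinary JS that also validates as asm.js, with type annotations spelled as coercions (`|0` marks int32).

```javascript
function AsmModule(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;          // parameter type annotation: int32
    b = b | 0;
    return (a + b) | 0; // result type annotation: int32
  }
  return { add: add };
}
// Runs as plain JS in any engine; an asm.js-aware engine can AOT-compile it.
console.log(AsmModule().add(2, 3)); // 5
```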
]]></description><pubDate>Tue, 04 Mar 2025 00:05:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=43248439</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=43248439</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43248439</guid></item><item><title><![CDATA[New comment by IainIreland in "An Attempt to Catch Up with JIT Compilers"]]></title><description><![CDATA[
<p>The main thing we're doing differently in SM is that all of our ICs are generated using a simple linear IR (CacheIR), instead of generating machine code directly. For example, a simple monomorphic property access (obj.prop) would be GuardIsObject / GuardShape / LoadSlot. We can then lower that IR directly to MIR for the optimizing compiler.<p>It gives us a lot of flexibility in choosing what to guard, without having to worry as much about getting out of sync between the baseline ICs and the optimizer's frontend. To a first approximation, our CacheIR generators are the single source of truth for speculative optimization in SpiderMonkey, and the rest of the engine just mechanically follows their lead.<p>There are also some cool tricks you can do when your ICs have associated IR. For example, when calling a method on a superclass, with receivers of a variety of different subclasses, you often end up with a set of ICs that all 1. Guard the different shapes of the receiver objects, 2. Guard the shared shape of the holder object, then 3. Do the call. When we detect that, we can mechanically walk the IR, collect the different receiver shapes, and generate a single stub-folded IC that instead guards against a list of shapes. The cool thing is that stub folding doesn't care whether it's looking at a call IC, or a GetProp IC, or anything else: so long as the only thing that differs is a single GuardShape, you can make the transformation.</p>
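A rough sketch of the stub-folding idea (the op names echo CacheIR, but the data structures and `foldStubs` function here are invented for illustration): each IC stub is a linear list of ops, and two stubs that are identical except for one GuardShape fold into a single stub guarding a list of shapes.

```javascript
// Two IC stubs for the same call site; only the receiver GuardShape differs.
const stub1 = [
  { op: "GuardIsObject" },
  { op: "GuardShape", shape: "shapeA" }, // receiver shape differs per stub
  { op: "GuardShape", shape: "holder" }, // holder shape is shared
  { op: "CallMethod" },
];
const stub2 = [
  { op: "GuardIsObject" },
  { op: "GuardShape", shape: "shapeB" },
  { op: "GuardShape", shape: "holder" },
  { op: "CallMethod" },
];

// Fold two stubs that differ in exactly one GuardShape. Note that this
// never inspects what kind of IC it is: it only walks the ops.
function foldStubs(a, b) {
  if (a.length !== b.length) return null;
  let diff = -1;
  for (let i = 0; i < a.length; i++) {
    if (a[i].op === b[i].op && a[i].shape === b[i].shape) continue;
    if (a[i].op !== "GuardShape" || b[i].op !== "GuardShape" || diff !== -1)
      return null; // differs somewhere other than a single GuardShape
    diff = i;
  }
  if (diff === -1) return a; // identical stubs
  const folded = a.slice();
  folded[diff] = { op: "GuardMultipleShapes", shapes: [a[diff].shape, b[diff].shape] };
  return folded;
}

console.log(foldStubs(stub1, stub2)[1]); // → { op: "GuardMultipleShapes", shapes: ["shapeA", "shapeB"] }
```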
]]></description><pubDate>Tue, 04 Mar 2025 00:01:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=43248405</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=43248405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43248405</guid></item><item><title><![CDATA[New comment by IainIreland in "An Attempt to Catch Up with JIT Compilers"]]></title><description><![CDATA[
<p>We talk about this a bit in our CacheIR paper. Search for "IonBuilder".<p><a href="https://www.mgaudet.ca/s/mplr23main-preprint.pdf" rel="nofollow">https://www.mgaudet.ca/s/mplr23main-preprint.pdf</a></p>
]]></description><pubDate>Mon, 03 Mar 2025 23:31:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=43248140</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=43248140</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43248140</guid></item><item><title><![CDATA[New comment by IainIreland in "JavaScript garbage collection and closures"]]></title><description><![CDATA[
<p>I believe the technical term for the property that existing JS engines lack here is "safe for space". The V8 bug (<a href="https://issues.chromium.org/issues/41070945" rel="nofollow">https://issues.chromium.org/issues/41070945</a>) has already been linked elsewhere.<p>Here's a long-standing SpiderMonkey bug: <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=894971" rel="nofollow">https://bugzilla.mozilla.org/show_bug.cgi?id=894971</a>.<p>Here's a JSC equivalent: <a href="https://bugs.webkit.org/show_bug.cgi?id=224077" rel="nofollow">https://bugs.webkit.org/show_bug.cgi?id=224077</a>.<p>Both of those bugs (especially the JSC one) sketch out possible solutions and give some insight into why this is hard to implement efficiently. In general, it adds a lot of complexity to an already complicated (and performance-sensitive!) chunk of code.</p>
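The classic repro for the missing safe-for-space property is a pair of closures sharing one environment (a toy sketch; actual retention behavior varies by engine and optimization level):

```javascript
function outer() {
  const big = new Array(1_000_000).fill(0); // large allocation
  const usesBig = () => big.length;         // captures `big`
  const small = () => 42;                   // never mentions `big`...
  // ...but both arrows may share one environment record, so in an engine
  // that is not "safe for space", keeping `small` alive keeps `big` alive too.
  return small;
}
const leakySmall = outer();
console.log(leakySmall()); // 42, while `big` may still be retained
```

A safe-for-space implementation would only retain, for each closure, the bindings that closure actually references.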
]]></description><pubDate>Tue, 30 Jul 2024 23:17:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=41115053</link><dc:creator>IainIreland</dc:creator><comments>https://news.ycombinator.com/item?id=41115053</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41115053</guid></item></channel></rss>