<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Diggsey</title><link>https://news.ycombinator.com/user?id=Diggsey</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 09:09:23 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Diggsey" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Diggsey in "Reverse engineering Gemini's SynthID detection"]]></title><description><![CDATA[
<p>Which works for about 5 minutes until someone leaks a manufacturer's private key or extracts it from a device...</p>
]]></description><pubDate>Thu, 09 Apr 2026 22:22:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711035</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=47711035</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711035</guid></item><item><title><![CDATA[New comment by Diggsey in "Jepsen: MariaDB Galera Cluster 12.1.2"]]></title><description><![CDATA[
<p>Yes this stood out to me as well...</p>
]]></description><pubDate>Tue, 17 Mar 2026 13:17:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47412236</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=47412236</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47412236</guid></item><item><title><![CDATA[New comment by Diggsey in "Testing Postgres race conditions with synchronization barriers"]]></title><description><![CDATA[
<p>Stored procedures don't eliminate serialization anomalies unless they are run inside a transaction that is itself SERIALIZABLE.<p>There's essentially no difference between putting the logic in the app vs. a stored procedure (other than round-trip time).</p>
]]></description><pubDate>Mon, 16 Feb 2026 23:01:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47041542</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=47041542</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47041542</guid></item><item><title><![CDATA[New comment by Diggsey in "Unconventional PostgreSQL Optimizations"]]></title><description><![CDATA[
<p>This is completely untrue. While the index only stores the hashes, the table itself stores the full value, and Postgres requires both the hash and the full value to match before rejecting the new row. I.e. duplicate hashes are fine.</p>
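As a toy illustration of why duplicate hashes are harmless (this is my own sketch, not Postgres's actual code; the struct names and the deliberately tiny hash are invented for the example), a uniqueness check can keep colliding values in the same bucket and only reject when the full value also matches:

```rust
use std::collections::HashMap;

// Deliberately tiny hash so collisions are easy to produce.
fn tiny_hash(s: &str) -> u8 {
    s.bytes().fold(0u8, |acc, b| acc.wrapping_add(b))
}

#[derive(Default)]
struct UniqueIndex {
    buckets: HashMap<u8, Vec<String>>, // hash -> full stored values
}

impl UniqueIndex {
    // Err only when an existing row has the same hash AND the same value.
    fn insert(&mut self, value: &str) -> Result<(), String> {
        let bucket = self.buckets.entry(tiny_hash(value)).or_default();
        if bucket.iter().any(|v| v == value) {
            return Err(format!("duplicate value: {value}"));
        }
        bucket.push(value.to_string());
        Ok(())
    }
}
```

Here "ab" and "ba" collide under `tiny_hash` (same byte sum) yet both insert fine; only a second "ab" is rejected.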
]]></description><pubDate>Tue, 20 Jan 2026 23:06:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46698892</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=46698892</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46698892</guid></item><item><title><![CDATA[New comment by Diggsey in "It's hard to justify Tahoe icons"]]></title><description><![CDATA[
<p>It's also `position: fixed` which breaks all scroll optimizations and makes scrolling feel terrible.</p>
]]></description><pubDate>Mon, 05 Jan 2026 12:21:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46497952</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=46497952</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46497952</guid></item><item><title><![CDATA[New comment by Diggsey in "Cloudflare outage on November 18, 2025 post mortem"]]></title><description><![CDATA[
<p>There were two things I think went extremely poorly here:<p>1) Lack of validation of the configuration file.<p>Rolling out a config file across the global network every 5 minutes is extremely high risk. Even without hindsight, surely one would see the need for very careful validation of this file before taking on that risk?<p>There were several things "obviously" wrong with the file that validation should have caught:<p>- It was much bigger than expected.<p>- It had duplicate entries.<p>- Most importantly, when loaded into the FL2 proxy, the proxy would panic on every request. At the very least, part of the validation should involve loading the file into the proxy and serving a request.<p>2) Very long time to identify and then fix such a critical issue.<p>I can't understand the complete lack of monitoring and reporting. A panic in Rust code, especially from an unwrap, is the application screaming that there's a logic error! I don't understand how that can be conflated with a DDoS attack. How are your logs not filled with backtraces pointing to the exact "unwrap" in question?<p>Then, once identified, why was it so hard to revert to a known good version of the configuration file? How did no one foresee the need to roll back this file when designing a feature that deploys a new one globally every 5 minutes?</p>
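A sketch of the kind of pre-rollout validation being described (entirely hypothetical names and limits, not Cloudflare's code): bound the file's size and reject duplicate entries before the file ever reaches a proxy.

```rust
use std::collections::HashSet;

// Invented limit for illustration: the expected entry count, with headroom.
const MAX_ENTRIES: usize = 200;

// Reject a config file that is unexpectedly large or contains duplicates.
fn validate_config(entries: &[&str]) -> Result<(), String> {
    if entries.len() > MAX_ENTRIES {
        return Err(format!("file too big: {} entries > {}", entries.len(), MAX_ENTRIES));
    }
    let mut seen = HashSet::new();
    for entry in entries {
        if !seen.insert(*entry) {
            return Err(format!("duplicate entry: {entry}"));
        }
    }
    Ok(())
}
```

A third check, loading the file into the actual proxy binary and serving one request, would have caught the panic directly; that part is omitted here since it depends on the proxy itself.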
]]></description><pubDate>Wed, 19 Nov 2025 20:51:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45984972</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=45984972</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45984972</guid></item><item><title><![CDATA[New comment by Diggsey in "Automatically Translating C to Rust"]]></title><description><![CDATA[
<p>IMO, safety and "idiomatic-ness" of Rust code are two separate concerns, with the former being easier to automate.<p>In most C code I've read, the lifetimes of pointers are not that complicated. They <i>can't</i> be that complicated, because complex lifetimes are too error-prone without automated checking. That means those lifetimes can be easily expressed.<p>In that sense, a fairly direct C-to-Rust translation that doesn't try to generate <i>idiomatic</i> Rust, but does accurately encode the lifetimes into the type system (ie. replacing pointers with references and Box) is already a huge safety win, since you gain automatic checking of the rules you were already implicitly following.<p>Here's an example of the kind of unidiomatic-but-safe Rust code I mean: <a href="https://play.rust-lang.org/?version=stable&mode=debug&edition=2024&gist=1d586d4c46d598d9b00c7b7c8991250e" rel="nofollow">https://play.rust-lang.org/?version=stable&mode=debug&editio...</a><p>If that can be automated (which seems increasingly plausible) then the need to do such a translation incrementally also goes away.<p>Making it idiomatic would be a case of recognising higher-level patterns that couldn't be abstracted away in C, but can be turned into abstractions in Rust, and creating those abstractions. That is a more creative process that would require something like an LLM to drive, but it <i>can</i> be done incrementally, and provides a different kind of value from the basic safety checks.</p>
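As a self-contained illustration of the same idea (my own toy example, not taken from the article or the playground link): the C pattern `struct node { int value; struct node *next; }` with NULL-terminated owning pointers translates almost mechanically to `Option<Box<Node>>`, after which the compiler checks the ownership rules the C code was following by convention.

```rust
// Direct, unidiomatic translation: the owning pointer `struct node *next`
// becomes Option<Box<Node>>, and NULL becomes None.
struct Node {
    value: i32,
    next: Option<Box<Node>>,
}

// C: void push(struct node **head, int value)
fn push(head: &mut Option<Box<Node>>, value: i32) {
    let next = head.take(); // move the old head out, leaving None behind
    *head = Some(Box::new(Node { value, next }));
}

// C: int sum(const struct node *head)
fn sum(head: &Option<Box<Node>>) -> i32 {
    let mut total = 0;
    let mut cur = head;
    while let Some(node) = cur {
        total += node.value;
        cur = &node.next;
    }
    total
}
```

Nothing here is idiomatic Rust (a `Vec` would be), but the shape mirrors the C source, which is exactly what makes the translation mechanical.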
]]></description><pubDate>Sun, 02 Nov 2025 15:37:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45791070</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=45791070</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45791070</guid></item><item><title><![CDATA[New comment by Diggsey in "Gemini 3.0 spotted in the wild through A/B testing"]]></title><description><![CDATA[
<p>I've found Gemini to be <i>much</i> better at completing tasks and following instructions. For example, let's say I want to extract all the questions from a word document and output them as a CSV.<p>If I ask ChatGPT to do this, it will do one of two things:<p>1) Extract the first ~10-20 questions perfectly, and then either just give up, or else hallucinate a bunch of stuff.<p>2) Write code that tries to use regex to extract the questions, which then fails because the questions are too free-form to be reliably matched by a regex.<p>If I ask Gemini to do the same thing, it will just do it and output a perfectly formed and most importantly <i>complete</i> CSV.</p>
]]></description><pubDate>Thu, 16 Oct 2025 21:06:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45610665</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=45610665</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45610665</guid></item><item><title><![CDATA[New comment by Diggsey in "Why it took 4 years to get a lock files specification"]]></title><description><![CDATA[
<p>That would be because package version flexibility is an entirely orthogonal concept to lock files, and to conflate them shows a lack of understanding.<p>pyproject.toml describes the supported dependency versions. Those dependencies are then resolved to some specific versions, and the output of that resolution is the lock file. This allows someone else to install the same dependencies in a reproducible way. It doesn't prevent someone resolving pyproject.toml to a different set of dependency versions.<p>If you are building a library, downstream users of your library won't use your lockfile. Lockfiles can still be useful for a library: one can use multiple lockfiles to try to validate its dependency specifications. For example you might generate a lockfile using minimum-supported-versions of all dependencies and then run your test suite against that, in addition to running the test suite against the default set of resolved dependencies.</p>
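As a concrete sketch of the distinction (package names and versions are made up for the example):

```toml
# pyproject.toml - the range the project claims to support
[project]
name = "example-app"
dependencies = ["requests>=2.28,<3"]

# A lockfile then pins ONE concrete resolution of that range - conceptually,
# an entry recording e.g. requests 2.31.0 plus its hashes - so another
# machine can reproduce the exact same install. The range above stays
# flexible; the lockfile does not change it.
```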
]]></description><pubDate>Sun, 12 Oct 2025 13:05:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45557972</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=45557972</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45557972</guid></item><item><title><![CDATA[New comment by Diggsey in "We found a bug in Go's ARM64 compiler"]]></title><description><![CDATA[
<p>Agree, but I think there <i>is</i> a point to be made here: Go as a language has more subtle runtime invariants that must be upheld compared to other languages, and this has led to a relatively large number of really nasty bugs (eg. there have also been several bugs relating to native function calling due to stack space issues and calling convention differences). By "nasty" I mean ones that are <i>really</i> hard to track down if you don't have the resources that a company like CF does.<p>To me this points to a lack of verification, testing, and most importantly <i>awareness</i> of the invariants that are relied on. If the GC relies on the stack pointer being valid at all times, then the IR needs a way to guarantee that modifications to it are not split into multiple instructions during lowering. It means that there should be explicit testing of each kind of stack layout, and tests that look at the real generated code and step through it instruction by instruction to verify that these invariants are never broken...</p>
]]></description><pubDate>Thu, 09 Oct 2025 14:08:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45527943</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=45527943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45527943</guid></item><item><title><![CDATA[New comment by Diggsey in "AI won't use as much electricity as we are told (2024)"]]></title><description><![CDATA[
<p>Essentially zero as a fraction of global concrete usage...</p>
]]></description><pubDate>Tue, 23 Sep 2025 15:06:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45348119</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=45348119</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45348119</guid></item><item><title><![CDATA[New comment by Diggsey in "RustGPT: A pure-Rust transformer LLM built from scratch"]]></title><description><![CDATA[
<p>They are different:<p><pre><code>    "0": ">=0.0.0, <1.0.0"
    "0.9": ">=0.9.0, <0.10.0"
    "0.9.3": ">=0.9.3, <0.10.0"
</code></pre>
Notice how the minimum bound changes, and how for 0.x crates the upper bound is set by the minor version: the leftmost non-zero component marks the compatibility boundary.<p>The reason for this is that unless otherwise specified, the ^ operator is used, so "0.9" is actually "^0.9", which then gets translated into the kind of range specifier I showed above.<p>There are other operators you can use; these are the common ones:<p><pre><code>    (default) ^ Semver compatible, as described above
    >= Inclusive lower bound only
    < Exclusive upper bound only
    = Exact bound
</code></pre>
Note that while an exact bound will force that exact version to be used, it still doesn't allow two semver-compatible versions of a crate to exist together. For example, if cargo can't find a single version that satisfies all constraints, it will just error.<p>For this reason, if you are writing a library, you should <i>in almost all cases</i> stick to regular semver-compatible dependency specifications.<p>For binaries, it is more common to want exact control over versions, and you don't have downstream consumers for whom your exact constraints would be a nightmare.</p>
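A hand-rolled sketch of how a caret requirement expands into such a range (illustrative only; cargo's real logic lives in the `semver` crate, and pre-release/build metadata are ignored here):

```rust
// Expand a caret requirement like "0.9" into (lower, upper) version bounds.
// The upper bound bumps the leftmost non-zero component, which is why
// 0.x versions behave as if the minor version were the major one.
fn caret_range(req: &str) -> ((u64, u64, u64), (u64, u64, u64)) {
    let mut parts = req.split('.').map(|p| p.parse::<u64>().unwrap());
    let major = parts.next().unwrap_or(0);
    let minor = parts.next().unwrap_or(0);
    let patch = parts.next().unwrap_or(0);
    let upper = if major > 0 {
        (major + 1, 0, 0)
    } else if minor > 0 {
        (0, minor + 1, 0)
    } else if patch > 0 {
        (0, 0, patch + 1) // ^0.0.3 := >=0.0.3, <0.0.4
    } else {
        (1, 0, 0) // bare "0" means >=0.0.0, <1.0.0
    };
    ((major, minor, patch), upper)
}
```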
]]></description><pubDate>Mon, 15 Sep 2025 23:35:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45256251</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=45256251</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45256251</guid></item><item><title><![CDATA[New comment by Diggsey in "RustGPT: A pure-Rust transformer LLM built from scratch"]]></title><description><![CDATA[
<p>Within a crate graph, for any given major version of a crate (eg. D v1) only a single minor version can exist. So if B depends on D v1.x, and C depends on D v2.x, then two versions of D will exist. If B depends on D v1.2 and C depends on D v1.3, then only D v1.3 will exist.<p>I'm over-simplifying a few things here:<p>1. Semver has special treatment of 0.x versions. For these crates the minor version behaves like the major version and the patch version behaves like the minor version. So technically you could have v0.1 and v0.2 of a crate in the same crate graph.<p>2. I'm assuming all dependencies are specified "the default way", ie. as just a number. When a dependency looks like "1.3", cargo actually treats this as "^1.3", ie. the version must be at least 1.3, but can be any semver-compatible version (eg. 1.4). When you specify an exact dependency like "=1.3" instead, the rules above still apply (you still can't have 1.3 and 1.4 in the same crate graph) but cargo will error if no version can be found that satisfies all constraints, instead of just picking a version that's compatible with all dependents.</p>
]]></description><pubDate>Mon, 15 Sep 2025 23:23:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45256167</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=45256167</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45256167</guid></item><item><title><![CDATA[New comment by Diggsey in "RustGPT: A pure-Rust transformer LLM built from scratch"]]></title><description><![CDATA[
<p>It doesn't link two versions of `rand-core`. That's not even possible in Rust (you can only link two semver-incompatible versions of the same crate). And dependency specifications in Rust don't work like that - unless you explicitly override it, all dependencies are semver constraints, so "0.9.0" will happily match "0.9.3".</p>
]]></description><pubDate>Mon, 15 Sep 2025 15:38:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=45251005</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=45251005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45251005</guid></item><item><title><![CDATA[New comment by Diggsey in "What are OKLCH colors?"]]></title><description><![CDATA[
<p>No, sRGB refers to both a colour space and an encoding of that colour space. The encoding is non-linear to make best use of the 256 levels available per channel, but you were never supposed to interpolate sRGB by linearly interpolating the encoded components: you're supposed to apply the transfer function, perform the linear interpolation at higher precision, and then convert back down into the non-linear encoding.<p>Failure to do this conversion is what leads to the bad results when interpolating: going from red to green will still go through grey but it should go through a much lighter grey compared to what happens if you get the interpolation wrong.</p>
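A minimal sketch of the right procedure (my own example; the constants are the standard sRGB transfer function): decode each 8-bit component to linear light, interpolate there, then re-encode.

```rust
// Decode an 8-bit sRGB component to linear light.
fn srgb_to_linear(c: u8) -> f64 {
    let c = c as f64 / 255.0;
    if c <= 0.04045 { c / 12.92 } else { ((c + 0.055) / 1.055).powf(2.4) }
}

// Re-encode linear light back to an 8-bit sRGB component.
fn linear_to_srgb(l: f64) -> u8 {
    let c = if l <= 0.0031308 { l * 12.92 } else { 1.055 * l.powf(1.0 / 2.4) - 0.055 };
    (c * 255.0).round().clamp(0.0, 255.0) as u8
}

// Interpolate two encoded components the intended way: mix in linear light.
fn mix_srgb(a: u8, b: u8, t: f64) -> u8 {
    let l = srgb_to_linear(a) * (1.0 - t) + srgb_to_linear(b) * t;
    linear_to_srgb(l)
}
```

The midpoint of black (0) and white (255) comes out around 188 rather than the 128 naive component averaging gives: that's the "much lighter grey" described above.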
]]></description><pubDate>Mon, 25 Aug 2025 16:19:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=45015527</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=45015527</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45015527</guid></item><item><title><![CDATA[New comment by Diggsey in "What are OKLCH colors?"]]></title><description><![CDATA[
<p>Also, isn't the way browsers interpolate colors in sRGB just a bug that I assume is retained for backwards compatibility? sRGB is a non-linear encoding; you were never supposed to interpolate between colors directly in that encoding - the spec says you're supposed to convert to linear RGB first and do the interpolation there...</p>
]]></description><pubDate>Mon, 25 Aug 2025 10:36:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45012378</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=45012378</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45012378</guid></item><item><title><![CDATA[New comment by Diggsey in "Which colors are primary?"]]></title><description><![CDATA[
<p>> There will always be 2 sets of "primary" colors for a given eye: Additive and Subtractive.<p>If your eye only has two types of cone cells then your additive and subtractive primaries are the same ;)</p>
]]></description><pubDate>Sat, 09 Aug 2025 19:37:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44849439</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=44849439</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44849439</guid></item><item><title><![CDATA[New comment by Diggsey in "Don’t use “click here” as link text (2001)"]]></title><description><![CDATA[
<p>Documents don't contain calls to action like "Download X" or "Tell me more about Y", so your argument falls down in relation to the examples presented by W3C.</p>
]]></description><pubDate>Wed, 02 Jul 2025 19:20:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44447753</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=44447753</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44447753</guid></item><item><title><![CDATA[New comment by Diggsey in "The provenance memory model for C"]]></title><description><![CDATA[
<p>It's standardizing the contract between the programmer and the compiler.<p>Previously a lot of C code was non-portable because it relied on behaviour that wasn't defined as part of the standard. If you compiled it with the wrong compiler or the wrong flags you might get miscompilations.<p>The provenance memory model draws a line in the sand and says "all C code on this side of the line should behave in this well defined way". Any optimizations implemented by compiler authors which would miscompile code on that side of the line would need to be disabled.<p>Assuming the authors of the model have done a good job, the impact on compiler optimizations should be minimized whilst making as much existing C code fall on the "right" side of the line as possible.<p>For new C code it provides programmers a way to write useful code that is also portable, since we now have a line that we can all hopefully agree on.</p>
]]></description><pubDate>Mon, 30 Jun 2025 16:50:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44425399</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=44425399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44425399</guid></item><item><title><![CDATA[New comment by Diggsey in "BusyBeaver(6) Is Quite Large"]]></title><description><![CDATA[
<p>> This doesn't sound right to me.<p>Which bit?<p>> You can prove that ZFC is consistent. You could do it today, with or without the magic number, using a stronger axiom system.<p>Right, but then just replace ZFC with that stronger system and you're back where you started - the point is that whatever the "strongest" system is that we've yet considered, BB(N) for sufficiently large N is stronger than that - and in all likelihood N can be much smaller than 748 for all such systems we've yet conceived, since we are not great at efficiently encoding things in Turing machines.<p>> If an Oracle told you that BB(748) = 100 or whatever, that would constitute proof that ZFC is consistent.<p>The number alone is not the proof - you'd still need to actually run the corresponding Turing machine to finish the proof.<p>> But it wouldn't negate the fact that BB(748) is independent of ZFC, because you haven't proved within the axioms of ZFC that ZFC is consistent, which is what makes it independent.<p>Normally when we say some predicate P is independent of some axiomatic system, it means we could add a new axiom to the system (P or !P) that would produce a new system that is still consistent.<p>BB(N) being "independent of ZFC" is a very different statement - it doesn't mean we are free to pick different values of BB(N). It's easy to prove this:<p>1. Suppose there are two possible values of BB(748) - V1 and V2, with V2 > V1, both consistent with ZFC.<p>2. Simulate every possible 748-state Turing machine for V2 steps.<p>3. See if any one terminated after more than V1 steps.<p>4. If one did, then V1 is inconsistent with ZFC - contradiction. If none did, then V2 is inconsistent with ZFC - contradiction (since at least one Turing machine must terminate after exactly V2 steps).<p>This entire process takes finite time, since there are finitely many 748-state Turing machines and V1 and V2 are also finite.<p>So what does it even mean to say that BB(748) is independent of ZFC? BB(N) is not even a predicate, so it definitely feels like a category error to say it's independent.<p>We certainly can't prove that a candidate value is correct within ZFC, but given any "overestimate" of BB(748) we can prove that it's wrong:<p>1. Suppose we have VC - an estimate of BB(748) that's too large.<p>2. Simulate every possible 748-state Turing machine for VC steps.<p>3. If no Turing machine terminated after exactly VC steps, then VC is wrong.</p>
]]></description><pubDate>Sun, 29 Jun 2025 13:56:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44413201</link><dc:creator>Diggsey</dc:creator><comments>https://news.ycombinator.com/item?id=44413201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44413201</guid></item></channel></rss>