<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: kmill</title><link>https://news.ycombinator.com/user?id=kmill</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 16:51:52 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=kmill" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by kmill in "Mathematicians disagree on the essential structure of the complex numbers (2024)"]]></title><description><![CDATA[
<p>Would you mind sharing your representation? :-)</p>
]]></description><pubDate>Tue, 10 Feb 2026 22:27:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46967855</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=46967855</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46967855</guid></item><item><title><![CDATA[New comment by kmill in "Mathematicians disagree on the essential structure of the complex numbers (2024)"]]></title><description><![CDATA[
<p>1. Algebra: Let's say we have a linear operator T on a real vector space V. When trying to analyze a linear operator, a key technique is to determine the T-invariant subspaces (these are subspaces W such that TW is a subset of W). The smallest non-trivial T-invariant subspaces are always 1- or 2-dimensional(!). The first case corresponds to eigenvectors, and T acts by scaling by a real number. In the second case, there's always a basis where T acts by scaling and rotation. The set of all such 2D scaling/rotation transformations is closed under addition and multiplication, and the nonzero ones are invertible. This is the complex numbers! (Correspondence: use C with 1 and i as the basis vectors, then T:C->C is determined by the value of T(1).)<p>2. Topology: The fact that the complex numbers are 2D is essential to their fundamentality. One way I think about it is that, from the perspective of the real numbers, multiplication by -1 is a reflection through 0. But, from an "outside" perspective, you can <i>rotate</i> the real line by 180 degrees, through some ambient space. Having a 2D ambient space is sufficient. (And rotating through an ambient space feels more physically "real" than reflecting through 0.) Adding or multiplying by nonzero complex numbers can always be performed as a continuous transformation <i>inside</i> the complex numbers. And, given a number system that's 2D, you get a key topological invariant of closed paths that avoid the origin: winding number. This gives a 2D version of the Intermediate Value Theorem: If you have a continuous deformation between two closed loops with different winding numbers, then one of the intermediate loops must pass through 0. 
A consequence of this is the fundamental theorem of algebra: for a degree-n polynomial f, when r is large enough, f(r*e^(i*t)) for 0<=t<=2*pi traces out a loop with winding number n, and when r=0, either f(0)=0 or the loop is constant with winding number 0; so if n>0 there's some intermediate r for which there's some t such that f(r*e^(i*t))=0.<p>So, I think the point is that 2D rotations and going around things are natural concepts, and very physical. Going around things lets you <i>ensnare</i> them. A side effect is that (complex) polynomials have (complex) roots.</p>
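<p>To make the correspondence in point 1 concrete, here's a small Python check (a sketch of my own; the function names are made up): the 2x2 scaling/rotation matrices [[a, -b], [b, a]] add and multiply exactly like the complex numbers a + b*i.</p>

```python
# Sketch: identify a + b*i with the 2x2 scaling/rotation matrix [[a, -b], [b, a]]
def as_matrix(z: complex):
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def matmul(m, n):
    # plain 2x2 matrix multiplication
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 2 + 3j, 1 - 1j
# Matrix product matches complex product...
assert matmul(as_matrix(z), as_matrix(w)) == as_matrix(z * w)
# ...and entrywise matrix sum matches complex sum.
assert [[x + y for x, y in zip(r, s)]
        for r, s in zip(as_matrix(z), as_matrix(w))] == as_matrix(z + w)
```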
]]></description><pubDate>Tue, 10 Feb 2026 22:22:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46967797</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=46967797</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46967797</guid></item><item><title><![CDATA[New comment by kmill in "Imagining a language without booleans"]]></title><description><![CDATA[
<p>That second operator is the <|> operator, from the Alternative typeclass.<p>The first one has some arbitrariness (do you take the left or right value if both are Just). But, thankfully the Applicative typeclass gives both <* and *>, which lets you choose which value you want:<p><pre><code>  Just A <* Just B = Just A
  Just A *> Just B = Just B
</code></pre>
(There's the possibility to merge values too, with f <$> Just A <*> Just B, which evaluates to Just (f A B). I feel like this is a "don't try to understand it, just get used to it" sort of syntax. It can be pretty convenient though.)</p>
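<p>For readers who don't speak Haskell, here's a rough Python analogue of these three combinators (a loose sketch of my own: None plays the role of Nothing, and any other value the role of Just):</p>

```python
# Rough Python analogues of the Maybe combinators discussed above.
def alt(a, b):          # <|> : the first non-Nothing value
    return a if a is not None else b

def keep_left(a, b):    # <*  : both must succeed; keep the left value
    return a if a is not None and b is not None else None

def keep_right(a, b):   # *>  : both must succeed; keep the right value
    return b if a is not None and b is not None else None

assert alt(None, "B") == "B"
assert keep_left("A", "B") == "A"
assert keep_right("A", "B") == "B"
assert keep_left("A", None) is None
```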
]]></description><pubDate>Tue, 23 Sep 2025 18:48:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45351192</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=45351192</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45351192</guid></item><item><title><![CDATA[New comment by kmill in "A simple way to measure knots has come unraveled"]]></title><description><![CDATA[
<p>An analogy might be how if you mix together water and alcohol, you get a solution with less volume than the sum of the volumes. That doesn't mean that there's "negative" volume, just that the volume turns out to be sub-additive due to an interaction of specific characteristics of the liquids. Somehow, some connect sums of particular knots enable possibilities that let the resulting knot be unknotted more easily.<p>I spent the better part of the summer during grad school trying to <i>prove</i> additivity of unknotting numbers. (I'll mention that it's sort of a relief to know that the reason I failed to prove it wasn't because I wasn't trying hard enough, but that it was impossible!)<p>One approach I looked into was to come up with some different analogues of unknotting number, ones that were conceptually related but which might or might not be additive, to at least serve as some partial progress. The general idea is to represent an unknotting using a certain kind of surface, which can be more restrictive than a general unknotting, and then maybe <i>that</i> version of unknotting can be proved to be additive. Maybe there's some classification of individual unknotting moves where, when you have multiple of them in the same unknotting surface, they can cancel out in certain ways (e.g. in the classification of surfaces, you can always transform two projective planes into a torus connect summand, in the presence of a third projective plane).<p>Connect summing mirror images of knots does have some interesting structure that other connect sums don't have — these are known as ribbon knots. It's possible that this structure is a good way to derive that the unknotting number is 5. I'm not sure that would explain any of the other examples they produced however — this is more speculation on how someone might have discovered this counterexample without a large-scale computer search.</p>
]]></description><pubDate>Mon, 22 Sep 2025 20:40:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45339177</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=45339177</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45339177</guid></item><item><title><![CDATA[New comment by kmill in "Ongoing Lean formalisation of the proof of Fermat's Last Theorem"]]></title><description><![CDATA[
<p>I see people on Zulip using Copilot to write Lean proofs, and they have some success, but the quality is really bad right now, creating long, unmaintainable proofs. New users get stuck, thinking they're 90% of the way to the end, but really the whole thing probably should be scrapped and they should start over.<p>It's a bit frustrating because, before Copilot, new users would come with proofs and you could spend some time helping them write better proofs and they'd learn things and gain skills, but now it's not clear that this is time well spent on my part. Copilot is not going to learn from my feedback.</p>
]]></description><pubDate>Sun, 03 Aug 2025 17:15:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44778077</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=44778077</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44778077</guid></item><item><title><![CDATA[New comment by kmill in "Ongoing Lean formalization of the proof for Fermat's Last Theorem"]]></title><description><![CDATA[
<p>My understanding is that the proof doesn't exist in written form in its entirety.<p>Plus, Kevin Buzzard is a world expert with some ideas for how to better organize the proof. In general, formalization leads to new understanding about mathematics.<p>Something people outside of mathematics don't tend to appreciate is that mathematicians are usually thinking deeply about what we already know, and that work reveals new structures and connections that clarify existing knowledge. The new understanding reveals new gaps in understanding, which are filled in, and the process continues. It's not just about collecting verifiably true things.<p>Even if somehow the OpenAI algorithm could apply here, we'd get less value out of this whole formalization exercise than from having researchers methodically go through our best understanding of our best proof of FLT again.</p>
]]></description><pubDate>Sun, 03 Aug 2025 17:10:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44778044</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=44778044</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44778044</guid></item><item><title><![CDATA[New comment by kmill in "Ongoing Lean formalization of the proof for Fermat's Last Theorem"]]></title><description><![CDATA[
<p>> I haven't worked with Lean so I don't know how much this crops up in practice<p>It really doesn't. I've been using Lean and Mathlib for about five years now, and Fermat's Last Theorem is definitely not going to depend on the reduction properties of quotient types at any large scale.<p>Mathematical reasoning in Lean is almost universally done with rewriting, not reduction. People have found reduction-based proofs (colloquially "heavy rfls") to be difficult to maintain. It exposes internal details of definitions. It's better to use the "public API" for mathematical definitions to be sure things can be refactored.<p>Really, quotients should almost never use the actual `Quot` type unless you have no better choice. In mathematics we like working with objects via universal properties ("public API"). A quotient type is any type that satisfies the universal property of a quotient. All `Quot` does is guarantee that quotients exist, with reasonable computation properties if we ever need them — which, in the kind of math that goes into FLT, we often don't. We don't even need `Quot` for Lean to have quotient types, since the classic construction of a set of equivalence classes works. (Though proving that this construction is correct surely uses functional extensionality, which is proved using `Quot` in some way, but that's an implementation detail of `funext`.)</p>
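<p>To illustrate the "public API" point with a sketch (Lean 4 syntax; `parityRel` and `Parity` are hypothetical names of my own), here's a quotient built with `Quot` once but consumed only via its universal property:</p>

```lean
-- Hypothetical sketch: the naturals mod 2 as a quotient,
-- constructed with `Quot` but used only through its universal property.
def parityRel (a b : Nat) : Prop := a % 2 = b % 2

def Parity : Type := Quot parityRel

-- The "public API": a function out of Parity is exactly a function on Nat
-- that respects parityRel.
def Parity.lift {α : Type} (f : Nat → α)
    (h : ∀ a b, parityRel a b → f a = f b) : Parity → α :=
  Quot.lift f h

-- Example use: evenness is well-defined on the quotient.
def Parity.isEven : Parity → Bool :=
  Parity.lift (fun n => n % 2 == 0) (by
    intro a b hab
    simp only [parityRel] at hab
    simp [hab])
```

Downstream code only ever touches `Parity.lift`, so the choice of `Quot` versus equivalence classes stays an implementation detail.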
]]></description><pubDate>Sun, 03 Aug 2025 16:55:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44777913</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=44777913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44777913</guid></item><item><title><![CDATA[New comment by kmill in "The Math Is Haunted"]]></title><description><![CDATA[
<p>We're working on a new rewrite tactic this summer at the Lean FRO (I don't know if I ever directly mentioned that to you yet on Zulip).<p>One interface I'm planning on is `rw [(pos := 1,3,2) thm]` to be able to navigate to the place where the rewrite should occur, with a widget interface to add these position strings by clicking on them in the Infoview. The whole occurrences interface will also be revamped, and maybe someone from the community could help make a widget interface for that too.<p>Of course there's already `conv => enter [1,3,2]; rw [thm]`, but putting it directly into `rw` is more convenient while also being more powerful (`conv` has intrinsic limitations for which positions it can access).<p>The interface is what people will notice, but the technical idea of the project is to separate the "what" you want to rewrite from "how" to get dependent type theory to accept it, and then make the "how" backend really good at getting rewrites to go through. No more "motive not type correct", but either success or an error message that gives precise explanations of what went wrong with the rewrite.<p>And, yeah, it's great that Lean lets you write your own tactics, since it lets people write domain-specific languages just for solving the sorts of things they run into themselves. There's no real difference between writing tactics as a user or as part of the Lean system itself. Anyone could make a new rewrite tactic as a user package.</p>
]]></description><pubDate>Thu, 31 Jul 2025 17:14:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44747776</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=44747776</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44747776</guid></item><item><title><![CDATA[New comment by kmill in "The Math Is Haunted"]]></title><description><![CDATA[
<p>The "olean" files are a binary format that contains everything that was added to the Lean environment. Among other things, it includes all of the declarations and their Lean.Expr [1] expressions. People have written tools to dump the data for inspection [2], or to independently check that the expressions are type correct and the environment is well-formed [3] (that is, check the correctness).<p>[1] <a href="https://github.com/leanprover/lean4/blob/3a3c816a27c0bd454711e2f9e4e20cbbde47ba85/src/Lean/Expr.lean#L302">https://github.com/leanprover/lean4/blob/3a3c816a27c0bd45471...</a>
[2] <a href="https://github.com/digama0/oleandump">https://github.com/digama0/oleandump</a>
[3] <a href="https://github.com/ammkrn/nanoda_lib">https://github.com/ammkrn/nanoda_lib</a></p>
]]></description><pubDate>Thu, 31 Jul 2025 00:19:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44741052</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=44741052</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44741052</guid></item><item><title><![CDATA[New comment by kmill in "The Math Is Haunted"]]></title><description><![CDATA[
<p>That's Lean 3, from eight years ago, and it's from before 'sorry' really existed in the way we know it now.<p>---<p>To answer the GP's question: Not only is there a verification mode, but Lean generates object files with the fully elaborated definitions and theorems. These can be rechecked by the kernel, or by external verifiers. There's no need to trust the Lean system itself, except to make sure that the theorem statements actually correspond to what we think they're supposed to be.</p>
]]></description><pubDate>Wed, 30 Jul 2025 23:41:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44740839</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=44740839</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44740839</guid></item><item><title><![CDATA[New comment by kmill in "The Math Is Haunted"]]></title><description><![CDATA[
<p>At least you can 'go to definition' on the tactics and see what they're doing. It's a lot to take in at the beginning, but it can all be inspected and understood. (At least until you get to the fundamental type theory; the reduction rules are a lot harder to get into.)<p>> the rewrite (rw) tactic syntax doesn't feel natural either.<p>Do you have any thoughts on what a natural rewrite syntax would be?</p>
]]></description><pubDate>Wed, 30 Jul 2025 23:34:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44740780</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=44740780</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44740780</guid></item><item><title><![CDATA[New comment by kmill in "Binding Application in Idris"]]></title><description><![CDATA[
<p>The system used in Lean 4 is explained in <a href="https://arxiv.org/abs/2001.10490v7" rel="nofollow">https://arxiv.org/abs/2001.10490v7</a> (Ullrich and Moura, "Beyond Notations: Hygienic Macro Expansion for Theorem Proving Languages").<p>There's still a set-of-scopes system, but it seems to be pretty different from <a href="https://users.cs.utah.edu/plt/scope-sets/" rel="nofollow">https://users.cs.utah.edu/plt/scope-sets/</a>. (And to clarify what I wrote previously, each identifier has a list of macro scopes.)<p>The set-of-scopes in Lean 4 is for correct expansion of macro-creating-macros. The scopes are associated to <i>macro expansions</i> rather than <i>binding forms</i>. Lean's syntax quotations add the current macro scope to every identifier that appears in quotations in the expansion. That way, if there are multiple expansions of the same macro for example, it's not possible for identifiers in the expansion to collide. There's an example in section 3.2 of a macro-creating macro that defines some top-level definitions.<p>(Section 8 of the paper, related work, suggests that set-of-scopes in Lean and Racket are closer than I'm understanding; I think what's going on is that Lean has a separate withFreshMacroScope primitive that macros can invoke, and syntax quotations participate in macro scopes, whereas Racket seems to give this responsibility to binding forms, and they're responsible for adding macro scopes to their binders and bodies. I'm a bit unclear on this.)</p>
]]></description><pubDate>Tue, 15 Jul 2025 01:49:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44567129</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=44567129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44567129</guid></item><item><title><![CDATA[New comment by kmill in "Binding Application in Idris"]]></title><description><![CDATA[
<p>In Lean's parsed `Syntax`, binders are plain identifiers. The way this works is that identifiers can be annotated with the module they were parsed in as well as a "macro scope", which is a number that's used to make identifiers created by macros be distinct from any previously created identifiers (the current macro scope is some global state that's incremented whenever a macro is being expanded) — an identifier with this annotation is called a <i>hygienic</i> identifier, and when identifiers are tested for equality the annotations are tested too. With this system in place, there's nothing special you need to do to elaborate binders (and it also lets you splice together syntaxes without any regard for hygiene!). For example, `fun x => b x` elaborates by (1) adding a variable `x` to the local scope, (2) elaborating `b x` in that scope, and then (3) abstracting `x` to make the lambda. The key here is that `x` is a hygienic identifier, so an `x` that's from a different module or macro scope won't be captured by the binder `x`.<p>Yes, you can define the syntax that's in the article in Lean. A version of this is the Mathlib `notation3` command, but it's for defining notation rather than re-using the function name (e.g. using a union symbol for `Set.iUnion`), and also the syntax is a bit odd:
<pre><code>  notation3 "⋃ "(...)", "r:60:(scoped f => iUnion f) => r
</code></pre><p>The ideas in the article are neat, and I'll have to think about whether it's something Lean could adopt in some way... Support for nested binders would be cool too. For example, I might be able to see something like `List.all (x in xs) (y in ys) => x + y < 10` for `List.all (fun x => List.all (fun y => x + y < 10) ys) xs`.</p>
]]></description><pubDate>Mon, 14 Jul 2025 15:30:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44561362</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=44561362</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44561362</guid></item><item><title><![CDATA[New comment by kmill in "Why the original Macintosh had a screen resolution of 512×324"]]></title><description><![CDATA[
<p>I don't know, but I can do some numerology: a 3:2 aspect ratio that's 512 pixels wide would need 341 and a third lines; round up and you get 512 by 342.<p>The later 384 number corresponds to an exact 4:3 aspect ratio.</p>
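<p>The arithmetic, spelled out as a trivial check:</p>

```python
# 3:2 aspect at 512 pixels wide needs a fractional line count,
# while 4:3 comes out exact.
width = 512
lines_3_2 = width * 2 / 3   # 341.333..., which rounds up to 342
lines_4_3 = width * 3 / 4   # exactly 384
assert 341 < lines_3_2 < 342
assert lines_4_3 == 384
```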
]]></description><pubDate>Tue, 27 May 2025 20:55:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=44110611</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=44110611</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44110611</guid></item><item><title><![CDATA[New comment by kmill in "Large Language Models for Mathematicians (2023)"]]></title><description><![CDATA[
<p>Kevin Buzzard did great work to popularize mathlib among mathematicians (that's how I got involved myself), but he didn't found mathlib!<p>There's a blurb about the history here: <a href="https://leanprover-community.github.io/papers/mathlib-paper.pdf" rel="nofollow">https://leanprover-community.github.io/papers/mathlib-paper....</a></p>
]]></description><pubDate>Sat, 01 Feb 2025 17:28:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=42900099</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=42900099</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42900099</guid></item><item><title><![CDATA[New comment by kmill in "Rosetta 2 creator leaves Apple to work on Lean full-time"]]></title><description><![CDATA[
<p>There's a completely new language reference in the process of being written: <a href="https://lean-lang.org/doc/reference/latest/" rel="nofollow">https://lean-lang.org/doc/reference/latest/</a> (by David Thrane Christiansen, co-author of The Little Typer, and Lean FRO member)<p>Some links here seem to be broken at the moment — and David's currently on vacation so they likely won't be fixed until January — but if you see for example <a href="https://lean-lang.org/basic-types/strings/" rel="nofollow">https://lean-lang.org/basic-types/strings/</a> it's supposed to be <a href="https://lean-lang.org/doc/reference/latest/basic-types/strings/" rel="nofollow">https://lean-lang.org/doc/reference/latest/basic-types/strin...</a></p>
]]></description><pubDate>Sun, 22 Dec 2024 07:32:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=42484912</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=42484912</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42484912</guid></item><item><title><![CDATA[New comment by kmill in "How to Study Mathematics (2017)"]]></title><description><![CDATA[
<p>Yeah, during department teas you can hear mutters of "interesting" as ideas are exchanged and evaluated.<p>But, in my last comment I was just trying to temper my previous comment's claim about how important definitions are. At some point you get so used to a definition that even if you don't know a particular formulation word for word, you could still write a textbook on the subject because you know how the theory is supposed to go.</p>
]]></description><pubDate>Tue, 03 Dec 2024 02:27:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=42302535</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=42302535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42302535</guid></item><item><title><![CDATA[New comment by kmill in "How to Study Mathematics (2017)"]]></title><description><![CDATA[
<p>I'm teaching discrete math in January — I'll try the analogy, wish me luck!<p>As someone who's gone through the mathematical wringer, the analogy doesn't ring true to me, but it does sound pedagogically useful still (my students will be CS majors, so the math will be for training rather than an end). Even at the highest levels the definitions are of prime importance, though I suppose once you get to "stage 3" in Terry Tao's classification (see elsewhere in the thread) definitions can start to feel inevitable, since you know what the theory is about, and the definitions need to be what they are to support the theory.<p>Personal aside: In my own math research, something that's really slowed me down was feeling like I needed everything to feel inevitable. It always bugged me reading papers that gave definitions where I'm wondering "why this definition, why not something else", but the paper never really answers it. Now I'm wondering if my standards have just been too high, and incremental progress means being OK with unsatisfactory definitions... After all, it's what the authors managed to discover.</p>
]]></description><pubDate>Tue, 03 Dec 2024 02:22:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=42302494</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=42302494</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42302494</guid></item><item><title><![CDATA[New comment by kmill in "How to Study Mathematics (2017)"]]></title><description><![CDATA[
<p>Things get a bit messier once you're doing research mathematics — definitions don't just come from nothing, and a good definition is one that serves its theorems. Definitions <i>can</i> be "wrong" (they might be generalizable, they might have unexpected pathological examples, etc.), and it's the result of lots of hard work by lots of mathematicians throughout history that we have the definitions we enjoy the use of today.<p>But yeah, while studying math, I think it's similar to learning programming — don't blame the compiler for your mistakes, it's a well-tested piece of software.</p>
]]></description><pubDate>Mon, 02 Dec 2024 17:47:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=42298477</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=42298477</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42298477</guid></item><item><title><![CDATA[New comment by kmill in "How to Study Mathematics (2017)"]]></title><description><![CDATA[
<p>The author really does mean memorize. To engage with pure mathematics, you <i>must</i> know the definitions, since the definitions are the bedrock of the subject. If you don't know the axioms of a topology, how can you check for yourself whether something forms a topological space? Or without knowing the exact definition of continuous, how can you know whether a proof of continuity is correct? Without knowing the definitions, you can't really <i>know</i> mathematics.<p>To be clear, this does not mean memorizing all the theorems. Getting to know the theorems (and solving problems) is what helps you internalize the subject. Math is the art of what's certain, and knowing exactly what the objects of the subject are is necessary for that. Theorems are derived from the definitions, but definitions can't be derived.<p>In my experience with math (undergrad and PhD), I realized I had to know definitions to feel competent at all. In my teaching, it's hard to convince students to actually memorize any definitions — so many times students carry around misconceptions (like that "linearly independent" just means that no vector is a scalar multiple of any other vector), but if they just had it memorized, they might realize that the misconception doesn't hold up. Math is weird in that the definitions are actually the exact truth (by definition! tautologically so), so it does take some time to get used to the fact that they're essential.</p>
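<p>That particular misconception is easy to refute with a three-vector example (my own illustration, not from any textbook): take (1,0), (0,1), and (1,1) in the plane. No one of them is a scalar multiple of another, yet together they're linearly dependent, since (1,1) = (1,0) + (0,1). A quick check:</p>

```python
# Illustration: no vector below is a scalar multiple of any other,
# yet the set is linearly dependent.
u, v, w = (1, 0), (0, 1), (1, 1)

def is_scalar_multiple(x, y):
    # two 2D vectors are parallel iff the cross term x0*y1 - x1*y0 vanishes
    return x[0] * y[1] - x[1] * y[0] == 0

assert not any(is_scalar_multiple(x, y)
               for x, y in [(u, v), (u, w), (v, w)])
# ...but 1*u + 1*v - 1*w = 0 is a non-trivial linear dependence:
assert all(u[i] + v[i] - w[i] == 0 for i in range(2))
```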
]]></description><pubDate>Mon, 02 Dec 2024 05:24:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=42293374</link><dc:creator>kmill</dc:creator><comments>https://news.ycombinator.com/item?id=42293374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42293374</guid></item></channel></rss>