<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: feoren</title><link>https://news.ycombinator.com/user?id=feoren</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 17:25:16 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=feoren" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by feoren in "Zohran Mamdani wins the New York mayoral race"]]></title><description><![CDATA[
<p>If this doesn't happen, are you going to accept that you were wrong, or are you going to ignore it and move on to spreading unfounded anger about some other imagined offense?</p>
]]></description><pubDate>Wed, 05 Nov 2025 03:29:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45818667</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45818667</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45818667</guid></item><item><title><![CDATA[New comment by feoren in "Dating: A mysterious constellation of facts"]]></title><description><![CDATA[
<p>Anyone can decide a statement is "ideological" just by disagreeing with it. The statement "the government can and should provide services to its people if those services are important to its future and the free market is incapable of providing them" should not be, and did not use to be, ideological. The fact that such a statement is far outside the current Overton Window is the result of a decades-long propaganda campaign to destroy everyone's faith in government so it can be looted, and everyone who parrots "government bad" is (knowingly or not) playing a part in this propaganda campaign. I assume the issue you have with my comment is that I knew 95% of readers would immediately regurgitate "but government bad!" -- and of course that is exactly what happened.<p>When the Overton Window shifts to the point where saying "people should be decent to one another" becomes a radical ideological statement, make sure you flag every comment that says that too. We can't have radical ideologues on HN, after all.</p>
]]></description><pubDate>Mon, 03 Nov 2025 20:17:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45803872</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45803872</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45803872</guid></item><item><title><![CDATA[New comment by feoren in "John Carmack on mutable variables"]]></title><description><![CDATA[
<p>> Why would you instantiate a class and call it result‽<p>Are you suggesting that the results of calculations should always be some sort of primitive value? It's not clear what you're getting hung up on here.</p>
]]></description><pubDate>Fri, 31 Oct 2025 19:10:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45775586</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45775586</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45775586</guid></item><item><title><![CDATA[New comment by feoren in "Tinkering is a way to acquire good taste"]]></title><description><![CDATA[
<p>> For example disassembling a microwave<p>Getting this wrong will not be a learning experience, because it will kill you. This is an incredibly dangerous thing to do and should only be done by people who already know what they're doing.<p>That's not just a tangential tidbit -- you don't learn well when you are completely out of your depth. You learn well when you are right at the edge of your ability and understanding. That involves <i>risk</i> of failure, but the failure isn't the important part; operating on the boundary is.</p>
]]></description><pubDate>Wed, 29 Oct 2025 19:54:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45752179</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45752179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45752179</guid></item><item><title><![CDATA[New comment by feoren in "Python 3.14 is here. How fast is it?"]]></title><description><![CDATA[
<p>> Should it be memory-efficient? Fast? Secure? Simple? Easy to formally prove? Easy for beginners? Work on old architecture? Work on embedded architecture?<p>What do any of these have to do with guarantees of long-term compatibility? I'm not arguing that there should be One Programming Language To Rule Them All, I'm asking about whether we can design better guarantees about long-term compatibility into new programming languages.</p>
]]></description><pubDate>Fri, 10 Oct 2025 19:19:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45542694</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45542694</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45542694</guid></item><item><title><![CDATA[New comment by feoren in "Python 3.14 is here. How fast is it?"]]></title><description><![CDATA[
<p>> > Code is just math.<p>> What?? No. If it was there'd never be any bugs.<p>Are you claiming there is no incorrect math out there? Go offer to grade some high-school algebra tests if you'd like to see buggy math. Or Google for amateur proofs of the Collatz Conjecture. Math just sits extremely far (if not all the way) toward the "if it compiles, it is correct" end of the spectrum, with the caveat that compilation can only happen in the brains of other mathematicians.</p>
]]></description><pubDate>Fri, 10 Oct 2025 18:28:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45542182</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45542182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45542182</guid></item><item><title><![CDATA[New comment by feoren in "Python 3.14 is here. How fast is it?"]]></title><description><![CDATA[
<p>No, weather forecasting <i>models</i> are "just math". The forecast itself is an output of the model. I sure hope our weather forecasting models are still useful next year!<p><pre><code>    weather forecasting models <=> code <=> math

    weather forecast <=> program output <=> calculation results
</code></pre>
So all you're saying is that we should not expect individual weather forecasts, program output, and calculation results to be useful long-term. Nobody is arguing that.</p>
]]></description><pubDate>Fri, 10 Oct 2025 18:22:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45542118</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45542118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45542118</guid></item><item><title><![CDATA[New comment by feoren in "Python 3.14 is here. How fast is it?"]]></title><description><![CDATA[
<p>> Mathematical notation evolved a lot in the last thousand years<p>That is not counter to what I'm saying.<p><pre><code>    Mathematical notation <=> Programming Languages.

    Proofs <=> Code.
</code></pre>
When mathematical notation evolves, old proofs do not become obsolete! There is no analogy to a "breaking change" in math. The closest we came to this was Gödel's Incompleteness Theorem and the Cambrian Explosion of new sets of axioms, but with a lot of work most of math was "re-founded" on a set of commonly accepted axioms. We can see how hostile the mathematical community is to "breaking changes" in the level of crisis the Incompleteness Theorem caused.<p>You are certainly free to use a different set of axioms than ZF(C), but you need to be very careful about which proofs you rely on; just as you are free to use a very different programming language or programming paradigm, but you may be limited in the libraries available to you. But if you wake up one morning and your code no longer compiles, that is the analogy to one day mathematicians waking up and realizing that a previously correct proof is now suddenly incorrect -- not that it was always wrong, but that changes in math forced it into incorrectness. It's rather unthinkable.<p>Of course programming languages should improve, diversify, and change over time as we learn more. Backward-compatible changes do not violate my principle at all. However, when we are faced with a possible breaking change to a programming language, we should think <i>very</i> hard about whether we're changing the original intent and paradigms of the programming language and whether we're better off basically making a new spinoff language or something similar. I understand why it's annoying that Python 2.7 is around, but I also understand why it'd be so much more annoying if it weren't.<p>Surely our industry could improve dramatically in this area if it cared to. Can we write a family of nested programming languages where core features are guaranteed not to change in breaking ways, and you take on progressively more risk as you use features more to the "outside" of the language? 
Can we get better at formalizing which language features we're relying on? Better at isolating and versioning our language changes? Better at time-hardening our code? I promise you there's a ton of fruitful work in this area, and my claim is that that would be very good for the long-term health and maturation of our discipline.</p>
]]></description><pubDate>Fri, 10 Oct 2025 18:06:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45541944</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45541944</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45541944</guid></item><item><title><![CDATA[New comment by feoren in "Python 3.14 is here. How fast is it?"]]></title><description><![CDATA[
<p>> Math is continually updated, clarified and rewritten<p>And yet math proofs from decades and centuries ago are still correct. Note that I said we write "code that lasts", not "programming languages that never change". Math notation is to programming languages as proofs are to code. I am <i>not</i> saying programming languages should never change or improve. I am saying that our entire industry would benefit if we stopped to think about how to write <i>code</i> that remains "correct" (compiling, running, correct behavior) for the next 100 years. Programming languages are free to change in backward-compatible ways, as long as once-correct code remains correct. And it doesn't have to be all code, but you know what they say: there is nothing as permanent as a temporary solution.</p>
]]></description><pubDate>Fri, 10 Oct 2025 17:59:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45541862</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45541862</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45541862</guid></item><item><title><![CDATA[New comment by feoren in "The government ate my name"]]></title><description><![CDATA[
<p>People's own opinions about their names are not a "non-issue", shitty-ass governments or not. Declaring a people's opinions about names stupid and irrelevant (or even illegal) is one of the many ways majorities oppress or even commit slow genocide against minorities.</p>
]]></description><pubDate>Fri, 10 Oct 2025 00:08:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45534298</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45534298</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45534298</guid></item><item><title><![CDATA[New comment by feoren in "Python 3.14 is here. How fast is it?"]]></title><description><![CDATA[
<p>> I'm pretty sure you didn't go through Στοιχεία (The Elements) in its original version<p>This is like saying "you haven't read the source code of the first version of Linux". The only reason to do that would be for historical interest. There is still something timeless about it, and I absolutely did learn Euclid's postulates, which he laid down in those books: all 5 are still foundational to most geometry calculations in the world today, and 4 are foundational even to non-Euclidean geometry. The Elements is a perfect example of math that has remained relevant and useful for thousands of years.</p>
]]></description><pubDate>Thu, 09 Oct 2025 20:37:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=45532789</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45532789</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45532789</guid></item><item><title><![CDATA[New comment by feoren in "Python 3.14 is here. How fast is it?"]]></title><description><![CDATA[
<p>I respect and understand the appeal of LISP. It is a great example of code not having to change all the time. I personally haven't had a compelling reason to use it (post college), but I'm glad I learned it and I wouldn't be averse to taking a job that required it.<p>While writing "timeless" code is certainly an ideal of mine, it also competes with the ideals of writing useful code that does useful things for my employer or the goals of my hobby project, and I'm not sure "getting actual useful things done" is necessarily LISP's strong suit, although I'm sure I'm ruffling feathers by saying so. I like more modern programming languages for other reasons, but their propensity to make backward-incompatible changes is definitely a point of frustration for me. Languages improving in backward-compatible ways is generally a good thing; your code can still be relatively "timeless" in such an environment. Some languages walk this line better than others.</p>
]]></description><pubDate>Thu, 09 Oct 2025 20:36:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45532785</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45532785</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45532785</guid></item><item><title><![CDATA[New comment by feoren in "Python 3.14 is here. How fast is it?"]]></title><description><![CDATA[
<p>> the real ones use Typst now<p>Are you intentionally leaning into the exact caricature I'm referring to? "<i>Real</i> programmers only use Typstly, because it's the <i>newest</i>!". The website title for Typst when I Googled it literally says "The new foundation for documents". Its entire appeal is that it's new? Thank you for giving me such a perfect example of the symptom I'm talking about.<p>> TeX and family are stagnant, difficult to use, difficult to integrate into modern workflows, and not written in Rust.<p>You've listed two real issues (difficult to use, difficult to integrate), and two rooted firmly in recency bias (stagnant, not written in Rust). If you can find a typesetting library that is demonstrably better in the ways you care about, great! That is <i>not</i> an argument that TeX itself should change. Healthy competition is great! Addiction to change and newness is not.<p>> nobody uses infinitesimals for derivatives anymore, they all use limits now<p>My point is not that math never changes -- it should, and does. However, math does not simply <i>rot</i> over time, like code seems to (or at least we simply assume it does). Math does not <i>age out</i>. If a math technique becomes obsolete, it's only ever because it was replaced with something better. More often, it forks into multiple different techniques that are useful for different purposes. This is all wonderful, and we can celebrate when this happens in software engineering too.<p>I also think your example is a bit more about math pedagogy than research -- infinitesimals are absolutely used all the time in math research (see Nonstandard Analysis), but it's true that Calculus 1 courses have moved toward placing limits as the central idea.</p>
]]></description><pubDate>Thu, 09 Oct 2025 19:47:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45532195</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45532195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45532195</guid></item><item><title><![CDATA[New comment by feoren in "What .NET 10 GC changes mean for developers"]]></title><description><![CDATA[
<p>"Just don't write bad code" means that you can easily avoid some of the anti-patterns that people list as weaknesses of C#. Yes, maybe you inherit code where people do those things, but how much of that is because of C#, and how much is due to it being popular? Any popular language is going to have bucketloads of bad code written in it. Alternatively: "you can write bad code in any language". I'm far more interested in languages that help you write great code than those that prevent you from writing bad code. (Note that I view static typing in the "help you write great code" category -- I am distinguishing "bad code" from "incorrect code" here.)<p>Yes, some programming languages have more landmines and footguns than others (looking at you, JS), and language designers should strive to avoid those as much as possible. But I actually think that C# <i>does</i> avoid those. That is: most of what people complain about are language features that are genuinely important and useful in a narrow scope, but are abused / applied too broadly. It would be impossible to design a language that knows whether you're using Reflection appropriately or not; the question is whether their inclusion of Reflection <i>at all</i> improves the language (it does). C# chose to be a general-purpose, multi-paradigmatic language, and I think they met that goal with flying colors.<p>> Newcomers won't know many DI framework implicit behaviors & conventions until either they shoot themself in their foot or get RTFM'd<p>The question is: does the DI framework reduce the overall complexity or not? Good DI frameworks are built on a very small number of (yes, "magic") conventions that are easy to learn. That being said, bad DI frameworks abound.<p>And can you imagine <i>any other industry</i> where having to read a few pages of documentation before you understood how to do <i>engineering</i> was looked upon with such derision? WTF is wrong with newcomers having to read a few pages of documentation!?</p>
]]></description><pubDate>Thu, 09 Oct 2025 19:14:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45531853</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45531853</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45531853</guid></item><item><title><![CDATA[New comment by feoren in "Python 3.14 is here. How fast is it?"]]></title><description><![CDATA[
<p>You hope it <i>doesn't</i>?<p>> [Donald Knuth] firmly believes that having an unchanged system that will produce the same output now and in the future is more important than introducing new features<p>This is such a breath of fresh air in a world where everything is considered obsolete after like 3 years. Our industry has a disease, an insatiable hunger for <i>newness</i> over <i>completeness</i> or <i>correctness</i>.<p>There's no reason we can't be writing code that lasts 100 years. Code is just math. Imagine having this attitude with math: "LOL loser you still use <i>polynomials</i>!? Weren't those invented like thousands of years ago? LOL dude get with the times, everyone uses Equately for their equations now. It was made by 3 interns at Facebook, so it's pretty much the new hotness." No, I don't think I will use "Equately", I think I'll stick to the tried-and-true idea that has been around for 3000 years.<p>Forget new versions of everything all the time. The people who can write code that doesn't need to change might be the only people who are really contributing to this industry.</p>
]]></description><pubDate>Thu, 09 Oct 2025 18:56:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=45531630</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45531630</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45531630</guid></item><item><title><![CDATA[New comment by feoren in "New nanotherapy clears amyloid-β, reversing symptoms of Alzheimer's in mice"]]></title><description><![CDATA[
<p>The latter is what happened, for upwards of ten years. And it wasn't a small fraction of the funding -- almost no funding was allocated to any research not looking at amyloid plaques, because the intellectual giants' (falsified) research was showing that that was by far the most promising avenue to explore.</p>
]]></description><pubDate>Thu, 09 Oct 2025 18:41:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=45531472</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45531472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45531472</guid></item><item><title><![CDATA[New comment by feoren in "New nanotherapy clears amyloid-β, reversing symptoms of Alzheimer's in mice"]]></title><description><![CDATA[
<p>> You can call it "evil" if you want<p>It was a concerted and intentional effort to fake data and falsify research into a pervasive deadly disease, specifically in order to hoard funds going to research they knew was, if not a dead end, at least not nearly as promising as they were claiming, preventing those funds from going to other research groups that might actually make progress, essentially <i>stealing donations from a charity</i>, and using their power and clout to attack the reputations of anyone who challenged them. They directly and knowingly added some X years to how long it will take to cure this disease, with X being at least 2 and possibly as much as 10. When Alzheimer's is finally cured, add up all the people who suffered and died from it in the X years before that point, and this research team is <i>directly and knowingly responsible</i> for all of that suffering. Yes, I think I will call it absolutely fucking evil.</p>
]]></description><pubDate>Thu, 09 Oct 2025 18:41:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45531465</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45531465</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45531465</guid></item><item><title><![CDATA[New comment by feoren in "What .NET 10 GC changes mean for developers"]]></title><description><![CDATA[
<p>Huh? 10 years ago was 2015. Entity Framework had been around for nearly 7 years by then, and as far back as I remember using it, it used source generators to subclass your domain models. Even with "Code First", it generated subclasses for automatic property tracking. The generated files had a whole other extension, like .g.cs or something (it's been a while), and Visual Studio regenerated them on build. I eventually figured out how to use it effectively without any of the silly code generation magic, but it took effort to get it to <i>not</i> generate code.<p>ASP.NET MVC came with T4 templates for scaffolding out the controllers and views, which I also came to view as an anti-pattern. This stuff was in the official Microsoft tutorials. I'm really not sure why you think these weren't around?</p>
]]></description><pubDate>Thu, 09 Oct 2025 18:31:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45531346</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45531346</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45531346</guid></item><item><title><![CDATA[New comment by feoren in "What .NET 10 GC changes mean for developers"]]></title><description><![CDATA[
<p>That is tedious with or without a source generator -- mainly because there's a <i>much</i> better way to do it:<p><pre><code>    public Optional<string> Name;
</code></pre>
With Optional being something like:<p><pre><code>    class Optional<T> {
      public T? Value;
      public bool IsSet;
    }
</code></pre>
I'm actually partial to using IEnumerable for this, and I'd reverse the boolean:<p><pre><code>    class Optional<T> {
      public IEnumerable<T> ValueOrEmpty;
      public bool IsExplicitNull;
    }
</code></pre>
With this approach (either one) you can easily define Map (or "Select", if you prefer LINQ verbiage) on Optional and go delete 80% of the "if" statements that check that boolean.<p>Why mess with source generators? They just make a fundamentally painful approach slightly less painful.<p>If you find yourself wanting Null to represent two different ideas, I'd strongly recommend representing those two ideas explicitly, e.g. with an Enum -- which you can still do with a basic wrapper like this. The user didn't say "Null"; they said "Unknown" or "Not Applicable" or something. Record that.<p><pre><code>    public OneOf<string, NotApplicable> Name
</code></pre>
A good OneOf implementation is here (I have nothing to do with this library, I just like it):<p><a href="https://github.com/mcintyre321/OneOf" rel="nofollow">https://github.com/mcintyre321/OneOf</a><p>I wrote a JsonConverter for OneOf and just pass those over the wire.</p>
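For concreteness, here is roughly what that Map looks like -- a minimal sketch, not a production library; the Of and Empty helper names are my own, not from any particular package:

```csharp
using System;

// Minimal Optional<T> with Map, mirroring the fields sketched above.
public sealed class Optional<T>
{
    public T? Value;
    public bool IsSet;

    public static Optional<T> Of(T value) =>
        new Optional<T> { Value = value, IsSet = true };

    public static Optional<T> Empty() =>
        new Optional<T> { IsSet = false };

    // Map applies f only when a value is present, so the IsSet check
    // lives in exactly one place instead of at every call site.
    public Optional<TResult> Map<TResult>(Func<T, TResult> f) =>
        IsSet ? Optional<TResult>.Of(f(Value!)) : Optional<TResult>.Empty();
}
```

Usage is then `Optional<string>.Of("Ada").Map(s => s.ToUpper())` for a set value, while `Optional<string>.Empty().Map(...)` stays empty without ever invoking the lambda.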
]]></description><pubDate>Wed, 08 Oct 2025 22:24:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45521298</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45521298</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45521298</guid></item><item><title><![CDATA[New comment by feoren in "Memory access is O(N^[1/3])"]]></title><description><![CDATA[
<p>Let's be careful about exactly what we mean by this. When we say an algorithm is O(f(N)), we need to be very clear about what N we're talking about. The whole point is to focus on a few variables of interest (often "number of elements" or "total input size") while recognizing that we are leaving out many constants: CPU speed, average CPU utilization, the speed of light, and (usually) the size of memory. If I run a task against some data and say "good job guys, this only took 1 second on 1000 data points! Looks like our work here is done", it would be quite unnerving to learn that the algorithm is actually O(3^N). 1000 data points better be pretty close to the max I'll ever run it on; 2000 data points and I might be waiting until the heat death of the universe.<p>I'm seeing some commenters happily adding 1/3 to the exponent of other algorithms. This insight does not make an O(N^2) algorithm O(N^(7/3)) or O(N^(8/3)) or anything else; those are different Ns. It <i>might</i> be O(N^2 + (N*M)^(1/3)) or O((N * M^(1/3))^2) or almost any other combination, depending on the details of the algorithm.<p>Early algorithm design was happy to treat "speed of memory access" as one of these constants that you don't worry about until you have a speed benchmark. If my algorithm takes 1 second on 1000 data points, I don't care if that's because of memory access speed, CPU speed, or the speed of light -- unless I have some control over those variables. The whole reason we like O(N) algorithms more than O(N^2) ones is because we can (usually) push them farther without having to buy better hardware.<p>More modern algorithm design <i>does</i> take memory access into account, often by trying to maximize usage of caches. The abstract model is a series of progressively larger and slower caches, and there are ways of designing algorithms that have provable bounds on their usage of these various caches. 
It <i>might</i> be useful for these algorithms to assume that the speed of a cache access is O(M^(1/3)), where M is the size of memory, but that actually lowers their generality: the same idea holds between L2 -> L3 cache as L3 -> RAM and even RAM -> Disk, and certainly RAM -> Disk does not follow the O(M^(1/3)) law. See <a href="https://en.wikipedia.org/wiki/Cache-oblivious_algorithm" rel="nofollow">https://en.wikipedia.org/wiki/Cache-oblivious_algorithm</a><p>So basically this matters for people who want some idea of how much faster (or slower) algorithms might run if they change the amount of memory available to the application, but even that depends so heavily on details that it's not likely to be "8x memory = 2x slower". I'd argue it's perfectly fine to keep M^(1/3) as one of your constants that you ignore in algorithm design, even as you develop algorithms that are more cache- and memory-access-aware. This may justify <i>why</i> cache-aware algorithms are important, but it probably doesn't change their design or analysis at all. It seems mainly just a useful insight for people responsible for provisioning resources who think more hardware is always better.</p>
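The "different Ns" point can be made concrete with a toy cost model (my own illustration, not from the article): suppose an algorithm performs N^2 accesses into a structure of size M, each access costing on the order of M^(1/3). The cube root then attaches to M, not to N:

```python
# Toy cost model: N^2 memory accesses, each costing ~M^(1/3).
# The constants dropped by big-O are irrelevant to the ratios below.
def cost(n: int, m: int) -> float:
    return n**2 * m ** (1 / 3)

# Doubling N quadruples the cost -- the memory term cancels out of the ratio,
# so the algorithm is still "O(N^2)" in N.
assert abs(cost(2000, 10**9) / cost(1000, 10**9) - 4.0) < 1e-9

# Multiplying M by 8 only doubles the per-access factor (within float error).
assert abs(cost(1000, 8 * 10**9) / cost(1000, 10**9) - 2.0) < 1e-9
```

In other words, under this (hypothetical) model the cost is O(N^2 * M^(1/3)): the exponent on N never changes, which is why tacking 1/3 onto an algorithm's N-exponent is a category error.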
]]></description><pubDate>Wed, 08 Oct 2025 22:16:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45521219</link><dc:creator>feoren</dc:creator><comments>https://news.ycombinator.com/item?id=45521219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45521219</guid></item></channel></rss>