<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: haberman</title><link>https://news.ycombinator.com/user?id=haberman</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 20:45:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=haberman" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by haberman in "LaGuardia pilots raised safety alarms months before deadly runway crash"]]></title><description><![CDATA[
<p>> Obama's FAA disincentivised its traditional "feeder" colleges that do ATC courses to "promote diversity", net outcome was fewer applicants<p>It was much worse than that. Students who had already spent years studying to be air traffic controllers through the CTI program were subject to a sudden policy change that disqualified them from entering the profession unless they passed a “biographical questionnaire.”<p>85% of candidates failed this questionnaire, but the National Black Coalition of Federal Aviation Employees (the organization that pushed for this change to begin with) was feeding the “right” answers to its own members.<p>“Right” answers included things like having gotten bad grades in high school science class. You can take the test for yourself here and see how you score: <a href="https://kaisoapbox.com/projects/faa_biographical_assessment/" rel="nofollow">https://kaisoapbox.com/projects/faa_biographical_assessment/</a><p>I can’t blame anyone for thinking this sounds too outrageous to be real, but all of it is public record at this point and the subject of an ongoing lawsuit: <a href="https://www.tracingwoodgrains.com/p/the-full-story-of-the-faas-hiring" rel="nofollow">https://www.tracingwoodgrains.com/p/the-full-story-of-the-fa...</a></p>
]]></description><pubDate>Wed, 25 Mar 2026 00:06:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47511403</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=47511403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47511403</guid></item><item><title><![CDATA[New comment by haberman in "Making WebAssembly a first-class language on the Web"]]></title><description><![CDATA[
<p>> Thankfully, there is the esm-integration proposal, which is already implemented in bundlers today and which we are actively implementing in Firefox.<p>From the code sample, it looks like this proposal also lets you load WASM code synchronously.  If so, that would address one issue I've run into when trying to replace JS code with WASM: the ability to load and run code synchronously, during page load.  Currently WASM code can only be loaded async.</p>
]]></description><pubDate>Wed, 11 Mar 2026 17:39:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47338676</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=47338676</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47338676</guid></item><item><title><![CDATA[New comment by haberman in "Why can't you tune your guitar? (2019)"]]></title><description><![CDATA[
<p>> Not everyone in history thought that 12-TET was an acceptable compromise. Johann Sebastian Bach thought we should use other tuning systems<p>This is presented as fact, but as I understand it there is no conclusive evidence for what Bach intended wrt temperament. There is a theory that the title page of the Well-Tempered Clavier encodes Bach’s preference in the calligraphic squiggles, but this is a recent theory and speculative.  I don’t believe there are any direct statements by Bach as to his intention.</p>
]]></description><pubDate>Mon, 09 Mar 2026 03:44:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304654</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=47304654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304654</guid></item><item><title><![CDATA[New comment by haberman in "Linux Internals: How /proc/self/mem writes to unwritable memory (2021)"]]></title><description><![CDATA[
<p>TL;DR: when a user writes to /proc/self/mem, the kernel bypasses the MMU and hardware address translation, opting instead to emulate the translation in software (including emulated page faults!), which allows it to disregard any memory protection that is currently set up in the page tables.</p>
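<p>This is easy to demonstrate from userspace. A minimal sketch (Linux-only; my own illustration of the mechanism the article describes, not code from the article):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Make a read-only page, then modify it through /proc/self/mem.
   Returns the byte observed in the page afterward (42 on success). */
int poke_readonly_page(void) {
    long pagesz = sysconf(_SC_PAGESIZE);
    uint8_t *page = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return -1;
    page[0] = 1;
    if (mprotect(page, pagesz, PROT_READ) != 0) return -1;

    /* A direct store to page[0] would now SIGSEGV.  A write through
       /proc/self/mem instead takes the kernel's software path, which
       ignores the protection bits in the page tables. */
    int fd = open("/proc/self/mem", O_WRONLY);
    if (fd < 0) return -1;
    ssize_t n = pwrite(fd, (uint8_t[]){42}, 1, (off_t)(uintptr_t)page);
    close(fd);
    if (n != 1) return -1;

    return page[0];  /* reads 42: the "unwritable" page was changed */
}
```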
]]></description><pubDate>Mon, 09 Mar 2026 02:13:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304082</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=47304082</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304082</guid></item><item><title><![CDATA[New comment by haberman in "An AI agent published a hit piece on me – more things have happened"]]></title><description><![CDATA[
<p>> So many of our foundational institutions – hiring, journalism, law, public discourse – are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth.  [...]  The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system.<p>I disagree.  While AI certainly acts as a force multiplier, all of these dynamics were already in play.<p>It was already possible to make an anonymous (or not-so-anonymous) account that circulated personal attacks and innuendo, to make hyperbolic accusations and inflated claims of harm.<p>It's especially ironic that the paragraph above talks about how it's good when "bad behavior can be held accountable."  The AI could argue that this is exactly what it's doing, holding Shambaugh's "bad behavior" accountable.  It is precisely this impulse -- the desire to punish bad behavior by means of public accusation -- that the AI was indulging or emulating when it wrote its blog post.<p>What if the blog post had been written by a human rather than an AI?  Would that make it justified?  I think the answer is no.  The problem here is not the AI authorship, but the actual conduct, which is an attempt to drag a person's reputation through mudslinging, mind-reading, impugning someone's motive and character, etc. in a manner that was dramatically disproportionate to the perceived offense.</p>
]]></description><pubDate>Sat, 14 Feb 2026 23:47:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47019625</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=47019625</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47019625</guid></item><item><title><![CDATA[New comment by haberman in "Open source is not about you (2018)"]]></title><description><![CDATA[
<p>> he eventually needed to write bluntly<p>Is there a history of that here?  Were there earlier clear statements (like CONTRIBUTING.md) that expressed the same expectations in a straightforward way, and that people just willfully disregarded?<p>I don't mean to "ding" anybody; I mostly just felt bad that things had gotten to the point where the author was so frustrated.  I completely agree that project owners have the right to set whatever terms they want, and should not suffer grief for standing by those terms.</p>
]]></description><pubDate>Fri, 13 Feb 2026 16:57:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47004885</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=47004885</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47004885</guid></item><item><title><![CDATA[New comment by haberman in "Open source is not about you (2018)"]]></title><description><![CDATA[
<p>I don't see anything wrong with the way he expressed himself, and I think his point is totally legitimate.  I mostly just felt bad that he experienced so much grief about it, on account of a gift he was offering to the world.</p>
]]></description><pubDate>Fri, 13 Feb 2026 16:42:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47004694</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=47004694</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47004694</guid></item><item><title><![CDATA[New comment by haberman in "Open source is not about you (2018)"]]></title><description><![CDATA[
<p>If I'm criticizing the linked post, then I'm also criticizing myself, because I could easily imagine having written it.</p>
]]></description><pubDate>Fri, 13 Feb 2026 16:21:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47004447</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=47004447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47004447</guid></item><item><title><![CDATA[New comment by haberman in "Open source is not about you (2018)"]]></title><description><![CDATA[
<p>Lately I'm seeing more and more value in writing down expectations explicitly, especially when people's implicit assumptions about those expectations diverge.<p>The linked gist seems to mostly be describing a misalignment between the expectations of the project's owners and those of its users.  I don't know the context, but it seems to have been written in frustration.  It does articulate a set of expectations, but it is written in a defensive and exasperated tone.<p>If I found myself in a situation like that today, I would write a CONTRIBUTING.md file in the project root that describes my expectations (e.g. PRs are / are not welcome, decisions about the project are made in X fashion, etc.) in a dispassionate way.  If users expressed expectations that were misaligned with my intentions, I would simply point them to CONTRIBUTING.md and close off the discussion.  I would try to take this step long before I reached the level of frustration that is expressed in the gist.<p>I don't say this to criticize the linked post; I've only recently come to this understanding.  But it seems like a healthier approach than letting frustration and resentment grow over time.</p>
]]></description><pubDate>Fri, 13 Feb 2026 15:25:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47003738</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=47003738</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47003738</guid></item><item><title><![CDATA[New comment by haberman in "C isn't a programming language anymore (2022)"]]></title><description><![CDATA[
<p>More concretely, I think the magic lies in these two properties:<p>1. Conservation of mass: the amount of C code you put in will be pretty close to the amount of machine code you get out.  Aside from the preprocessor, which is very obviously expanding macros, there are almost no features of C that will take a small amount of code and expand it to a large amount of output.  This makes some things annoyingly verbose to code in C (eg. string manipulation), but that annoyance reflects a true fact of machine code, which is that it cannot handle strings very easily.<p>2. Conservation of energy: the only work performed is the work you wrote into your program.  There is no "supervisor" performing work on the side (garbage collection, stack checking, context switching) on your behalf.  From a practical perspective, this means that the machine code produced by a C compiler is standalone, and can be called from any runtime without needing a special environment to be set up.  This is what makes C such a good language for <i>implementing</i> garbage collection, stack checking, context switching, etc.<p>There are some exceptions to both of these principles.  Auto-vectorizing compilers can produce large amounts of output from small amounts of input.  Some C compilers <i>do</i> support stack checking (eg. `-fstack-check`).  Some implementations of C <i>will</i> perform garbage collection (eg. Boehm, Fil-C).  For dynamically linked executables, the PLT stubs <i>will</i> perform hash table lookups the first time you call a function.  The point is that C makes it very possible to avoid all of these things, which has made it a great technology for programming close to the machine.<p>Some languages excel at one but not the other.  Byte-code oriented languages generally do well at (1): for example, Java .class files are usually pretty lean, as the byte-code semantics are pretty close to the Java language.  Go is also pretty good at (1).  
Languages like C++ or Rust are generally good at (2), but have much larger binaries on average than C thanks to generics, exceptions/panics, and other features.  C is one of the few languages I've seen that does both (1) and (2) well.</p>
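<p>A tiny illustration of point (1), using the string-manipulation case mentioned above (a hypothetical example of mine): even simple concatenation must be spelled out step by step in C -- measure, allocate, copy -- which mirrors what the machine code has to do anyway.

```c
#include <stdlib.h>
#include <string.h>

/* Concatenate two NUL-terminated strings into a freshly malloc'd
   buffer.  Every step the machine must perform is visible in the
   source; nothing expands behind your back.  Caller frees. */
char *concat(const char *a, const char *b) {
    size_t la = strlen(a), lb = strlen(b);
    char *out = malloc(la + lb + 1);
    if (!out) return NULL;
    memcpy(out, a, la);
    memcpy(out + la, b, lb + 1);   /* the +1 copies the NUL */
    return out;
}
```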
]]></description><pubDate>Fri, 06 Feb 2026 07:13:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46910015</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=46910015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46910015</guid></item><item><title><![CDATA[New comment by haberman in "Fighting Fire with Fire: Scalable Oral Exams"]]></title><description><![CDATA[
<p>The rate of college attendance has increased dramatically in the last 250 years, and especially in the last 75.<p>In 1789 there were 1,000 enrolled college students total, in a country of 2.8M.  In 2025, it is 19M students in a country of 340M.  <a href="https://educationalpolicy.org/wp-content/uploads/2025/11/2511-November-Higher-Ed-Growth.pdf" rel="nofollow">https://educationalpolicy.org/wp-content/uploads/2025/11/251...</a><p>In 1950, 5.5% of adults ages 25-34 had completed a 4 year college degree.  In 2018, it was 39%.  <a href="https://www.highereddatastories.com/2019/08/changes-in-educational-attainment-1940.html" rel="nofollow">https://www.highereddatastories.com/2019/08/changes-in-educa...</a><p>With attendance increasing at this rate (not to mention the exploding costs of tuition), it seems possible that the methods need to change as well.</p>
]]></description><pubDate>Sat, 03 Jan 2026 07:10:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46473504</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=46473504</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46473504</guid></item><item><title><![CDATA[New comment by haberman in "Python 3.15’s interpreter for Windows x86-64 should hopefully be 15% faster"]]></title><description><![CDATA[
<p>I’ll repeat what I said at that time: one of the benefits of the new design is that it’s less vulnerable to the whims of the optimizer: <a href="https://news.ycombinator.com/item?id=43322451">https://news.ycombinator.com/item?id=43322451</a><p>If getting the optimal code relies on a pile of heuristics going in your favor, you’re more vulnerable to the possibility that someday the heuristics will go the other way.  Tail duplication is what we want in this case, but it’s possible that a future version of the compiler could decide that it’s not desired because of the increased code size.<p>With the new design, the Python interpreter can express the desired shape of the machine code more directly, leaving it less vulnerable to the whims of the optimizer.</p>
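<p>To make the desired shape concrete, here is a toy version of the technique (my own sketch, not CPython's actual code; CPython additionally relies on compiler attributes like musttail to guarantee the tail calls compile to jumps).  Each opcode handler ends by calling the dispatcher for the next opcode, so every handler carries its own dispatch -- the tail duplication that the old computed-goto loop had to coax out of the optimizer:

```c
#include <stddef.h>

enum { OP_PUSH, OP_ADD, OP_HALT };

typedef struct { const int *pc; int *sp; } VM;

static int dispatch(VM vm);

static int op_push(VM vm) {
    *vm.sp++ = *vm.pc++;     /* operand follows the opcode */
    return dispatch(vm);     /* tail call: dispatch inlined per handler */
}
static int op_add(VM vm) {
    vm.sp -= 1;
    vm.sp[-1] += vm.sp[0];
    return dispatch(vm);
}
static int op_halt(VM vm) { return vm.sp[-1]; }

typedef int (*Handler)(VM vm);
static const Handler table[] = { op_push, op_add, op_halt };

static int dispatch(VM vm) {
    int op = *vm.pc++;
    return table[op](vm);    /* a tail call the compiler can turn into a jump */
}

int run(const int *code) {
    int stack[64];
    VM vm = { code, stack };
    return dispatch(vm);
}
```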
]]></description><pubDate>Thu, 25 Dec 2025 19:03:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46386354</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=46386354</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46386354</guid></item><item><title><![CDATA[New comment by haberman in "OpenSCAD is kinda neat"]]></title><description><![CDATA[
<p>A long time ago I read that CadQuery has a fundamentally more powerful geometry kernel than OpenSCAD, so I dropped any attempt to try OpenSCAD.<p>Years later, I never actually got the hang of CadQuery, and I'm wondering if it was a mistake to write off OpenSCAD.<p>I am pretty new to CAD, so I don't actually know when I would run into OpenSCAD's limitations.</p>
]]></description><pubDate>Sat, 20 Dec 2025 22:08:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46340092</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=46340092</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46340092</guid></item><item><title><![CDATA[New comment by haberman in "Oracle made a $300B bet on OpenAI. It's paying the price"]]></title><description><![CDATA[
<p>> Hotspot is the choice for high performance programs. Approaching its performance even with C++ requires a dedicated team of experts.<p>It's very surprising to hear you say this, as it's so contrary to my experience.<p>From the smallest programs (Computer Language Benchmarks Game) to pretty big programs (web browsers), from low-level programs (OS kernels) to high-level programs (GUI Applications), from short-lived programs (command-line utilities) to long-lived programs (database servers), it's hard to think of a single segment where even average Java programs will out-perform average C, C++, or Rust programs.<p>I hadn't heard of QuestDB before, but it sounds like it's written in zero-GC Java using manual memory management.  That's pretty unusual for Java, and would require a team of experts to pull off, I'd think.  It also sounds like it drops to C++ and Rust for performance-critical tasks.</p>
]]></description><pubDate>Fri, 12 Dec 2025 19:12:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46247584</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=46247584</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46247584</guid></item><item><title><![CDATA[New comment by haberman in "Garbage collection for Rust: The finalizer frontier"]]></title><description><![CDATA[
<p>I agree, but in my experience arena allocation in Rust leaves something to be desired. I wrote something about this here: <a href="https://blog.reverberate.org/2021/12/19/arenas-and-rust.html" rel="nofollow">https://blog.reverberate.org/2021/12/19/arenas-and-rust.html</a><p>I was previously excited about this project which proposed to support arena allocation in the language in a more fundamental way: <a href="https://www.sophiajt.com/search-for-easier-safe-systems-programming/" rel="nofollow">https://www.sophiajt.com/search-for-easier-safe-systems-prog...</a><p>That effort was focused primarily on learnability and teachability, but it seems like more fundamental arena support could help even for experienced devs if it made patterns like linked lists fundamentally easier to work with.</p>
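<p>For anyone unfamiliar with the pattern under discussion: an arena is essentially a bump allocator whose objects all die together when the arena is freed.  A hedged C sketch of the concept (the linked post is about expressing this safely in Rust, not about this code):

```c
#include <stdlib.h>

typedef struct { char *base, *ptr, *end; } Arena;

int arena_init(Arena *a, size_t cap) {
    a->base = a->ptr = malloc(cap);
    a->end = a->base ? a->base + cap : NULL;
    return a->base != NULL;
}

/* Allocation is just a pointer bump; individual objects are never
   freed, which is what makes arenas fast and linked structures easy. */
void *arena_alloc(Arena *a, size_t n) {
    size_t aligned = (n + 15) & ~(size_t)15;   /* keep 16-byte alignment */
    if ((size_t)(a->end - a->ptr) < aligned) return NULL;
    void *p = a->ptr;
    a->ptr += aligned;
    return p;
}

/* Everything allocated from the arena is released in one shot. */
void arena_free(Arena *a) {
    free(a->base);
    a->base = a->ptr = a->end = NULL;
}
```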
]]></description><pubDate>Wed, 15 Oct 2025 21:03:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45598364</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=45598364</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45598364</guid></item><item><title><![CDATA[New comment by haberman in "Python 3.14 is here. How fast is it?"]]></title><description><![CDATA[
<p>Good to know! Thanks for confirming. Yes, I would guess that the tail call interpreter explains part of the difference between 3.13 and 3.14.  Previously the overall improvement to the interpreter has been measured at 1-5%, or even 10-15% depending on the compiler version you are using: <a href="https://blog.nelhage.com/post/cpython-tail-call/" rel="nofollow">https://blog.nelhage.com/post/cpython-tail-call/</a><p>If your benchmark setup is easy to re-run, it would be awesome to see numbers that compare the tail call interpreter to the build where it is disabled, to isolate how much improvement is due to that.</p>
]]></description><pubDate>Fri, 10 Oct 2025 15:28:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45540107</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=45540107</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45540107</guid></item><item><title><![CDATA[New comment by haberman in "Python 3.14 is here. How fast is it?"]]></title><description><![CDATA[
<p>Do any of these tests measure the new experimental tail call interpreter (<a href="https://docs.python.org/3.14/using/configure.html#cmdoption-with-tail-call-interp" rel="nofollow">https://docs.python.org/3.14/using/configure.html#cmdoption-...</a>)?<p>I couldn't find any note of it, so I would assume not.<p>It would be interesting to see how the tail call interpreter compares to the other variants.</p>
]]></description><pubDate>Thu, 09 Oct 2025 21:01:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45533043</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=45533043</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45533043</guid></item><item><title><![CDATA[New comment by haberman in "Memory access is O(N^[1/3])"]]></title><description><![CDATA[
<p>The article says exactly this in bold at the bottom:<p>> If you can break up a task into many parts, each of which is highly local, then memory access in each part will be O(1). GPUs are already often very good at getting precisely these kinds of efficiencies. But if the task requires a lot of memory interdependencies, then you will get lots of O(N^⅓) terms. An open problem is coming up with mathematical models of computation that are simple but do a good job of capturing these nuances.</p>
]]></description><pubDate>Wed, 08 Oct 2025 23:28:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=45521796</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=45521796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45521796</guid></item><item><title><![CDATA[New comment by haberman in "Why I chose Lua for this blog"]]></title><description><![CDATA[
<p>LuaJIT bucks the trend of slow-warmup JITs.  It is extremely quick to compile and load, and its interpreter is very fast -- faster than the JIT-compiled code from LuaJIT v1 IIRC, and certainly faster than the standard Lua interpreter.<p>It wasn't until LuaJIT that I realized that JITs didn't inherently have to be slow, lumbering beasts that take hundreds of milliseconds just to wake from their slumber.</p>
]]></description><pubDate>Thu, 02 Oct 2025 20:07:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45454867</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=45454867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45454867</guid></item><item><title><![CDATA[New comment by haberman in "If all the world were a monorepo"]]></title><description><![CDATA[
<p>This was an interesting article, but it made me even more interested in the author's larger take on R as a language:<p>> In the years since, my discomfort has given away to fascination. I’ve come to respect R’s bold choices, its clarity of focus, and the R community’s continued confidence to ‘do their own thing’.<p>I would love to see a follow-up article about the key insights that the author took away from diving more deeply into R.</p>
]]></description><pubDate>Sat, 20 Sep 2025 03:10:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45309941</link><dc:creator>haberman</dc:creator><comments>https://news.ycombinator.com/item?id=45309941</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45309941</guid></item></channel></rss>