<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mlochbaum</title><link>https://news.ycombinator.com/user?id=mlochbaum</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 14:55:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mlochbaum" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mlochbaum in "Prefix sums at gigabytes per second with ARM NEON"]]></title><description><![CDATA[
<p>I don't think this really describes neon_prefixsum_fast as a whole? The algorithm does use a Hillis-Steele sum on sums of 4 values, but each of these is computed with a sequential sum, interleaving those with a transposed order. In terms of what's added to what, it's actually quite a bit like my "Sequential broadcasting" picture from [0]. The reference I'd use for a general form is "Parallel Scan as a Multidimensional Array Problem"[1], breaking 16 elements into a 4x4 array; the paper describes how the scan splits into a row-wise scan, plus values obtained from an <i>exclusive</i> scan on carries from the rows.<p>[0] <a href="https://mlochbaum.github.io/BQN/implementation/primitive/fold.html#scan-architecture" rel="nofollow">https://mlochbaum.github.io/BQN/implementation/primitive/fol...</a><p>[1] <a href="https://ashinkarov.github.io/pubs/2022-scan.html" rel="nofollow">https://ashinkarov.github.io/pubs/2022-scan.html</a></p>
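<p>A rough scalar sketch of that decomposition (plain Python, nothing like the article's NEON code; the function name is mine): scan each row of the 4x4 array independently, then add an exclusive scan of the row totals back into the rows.

```python
# Prefix sum of 16 elements as a 4x4 array: independent row-wise
# inclusive scans, plus an exclusive scan of the row carries.
from itertools import accumulate

def prefix_sum_4x4(xs):
    assert len(xs) == 16
    rows = [xs[i:i + 4] for i in range(0, 16, 4)]
    scans = [list(accumulate(r)) for r in rows]   # row-wise inclusive scans
    carries = [0]                                 # exclusive scan of row totals
    for s in scans[:-1]:
        carries.append(carries[-1] + s[-1])
    return [c + v for c, s in zip(carries, scans) for v in s]

data = list(range(1, 17))
assert prefix_sum_4x4(data) == list(accumulate(data))
```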
]]></description><pubDate>Fri, 13 Mar 2026 16:07:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47366300</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=47366300</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47366300</guid></item><item><title><![CDATA[New comment by mlochbaum in "Mathematicians disagree on the essential structure of the complex numbers (2024)"]]></title><description><![CDATA[
<p>More on not being able to find π, as I'm piecing it together: given only the field structure, you can't construct an equation identifying π or even narrowing it down. If π is the only free variable, any equation built from field operations reduces to finding roots of a polynomial, and since π is transcendental that polynomial can only be zero (if you're allowed to use not-equals instead of equals, you can of course specify that π isn't in various sets of algebraic numbers). With other free variables, because the field's algebraically closed, you can fix π to whatever transcendental you like and still solve for the remaining variables. So it's something like the rationals plus a continuum's worth of arbitrary field extensions? Not terribly surprising that all instances of this are isomorphic as fields, but it's starting to feel about as useful as claiming the real numbers are "up to set isomorphism, the unique set whose cardinality matches the power set of the natural numbers": of course it's got automorphisms, you didn't finish defining it.</p>
]]></description><pubDate>Tue, 10 Feb 2026 22:04:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46967604</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=46967604</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46967604</guid></item><item><title><![CDATA[New comment by mlochbaum in "Mathematicians disagree on the essential structure of the complex numbers (2024)"]]></title><description><![CDATA[
<p>I was interested in how it would make sense to define complex numbers without fixing the reals, but I'm not terribly convinced by the method here. It seemed kind of suspect that you'd reduce the complex numbers purely to their field properties of addition and multiplication when these aren't enough to get from the rationals to the reals (some limit-like construction is needed; the article uses Dedekind cuts later on). Anyway, the "algebraic conception" is defined as "up to isomorphism, the unique algebraically closed field of characteristic zero and size continuum", that is, you just declare it has the same size as the reals. And of course now you have no way to tell where π is, since it has no algebraic relation to the distinguished numbers 0 and 1. If I'm reading right, this can be done with any uncountable cardinality with uniqueness up to isomorphism. It's interesting that algebraic closure is enough to get you this far, but with the arbitrary choice of cardinality and all these "wild automorphisms", doesn't this construction just seem... defective?<p>It feels a bit like the article's trying to extend some legitimate debate about whether fixing i versus -i is natural to push this other definition as an equal contender, but there's hardly any support offered. I expect the last-place 28% poll showing, if it does reflect serious mathematicians at all, is those who treat the topological structure as a given or didn't think much about the implications of leaving it out.</p>
]]></description><pubDate>Tue, 10 Feb 2026 21:03:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46966885</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=46966885</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46966885</guid></item><item><title><![CDATA[New comment by mlochbaum in "Variation on Iota"]]></title><description><![CDATA[
<p>Ooh, I've run into this one before! I'm a big fan of interval index[0], which performs a binary search, so Josh's suggestion is the one I prefer as well (the implementation might sometimes optimize by transforming it into a table lookup like the other solutions). Searching for +`≠¨ in my BQN files turns up a few uses, including one used to group primitives into types in a utility involved in compiling BQN[1] and an annotated function used a few times in the markdown processor that builds BQN's website[2].<p>[0] <a href="https://aplwiki.com/wiki/Interval_Index" rel="nofollow">https://aplwiki.com/wiki/Interval_Index</a><p>[1] <a href="https://github.com/mlochbaum/BQN/blob/717555b0db/src/pr.bqn#L15" rel="nofollow">https://github.com/mlochbaum/BQN/blob/717555b0db/src/pr.bqn#...</a><p>[2] <a href="https://github.com/mlochbaum/BQN/blob/717555b0db/md.bqn#L45-L51" rel="nofollow">https://github.com/mlochbaum/BQN/blob/717555b0db/md.bqn#L45-...</a></p>
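<p>For anyone unfamiliar, interval index is just a binary search over sorted boundaries; a Python sketch (edge semantics vary by dialect, and this version counts boundaries less than or equal to the value):

```python
# Interval index: for each value, which interval of the sorted
# boundaries it lands in, found by binary search.
from bisect import bisect_right

def interval_index(boundaries, values):
    return [bisect_right(boundaries, v) for v in values]

# 0 = before the first boundary, len(boundaries) = past the last.
assert interval_index([10, 20, 30], [5, 10, 25, 40]) == [0, 1, 2, 3]
```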
]]></description><pubDate>Fri, 23 Jan 2026 13:53:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46732496</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=46732496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46732496</guid></item><item><title><![CDATA[New comment by mlochbaum in "An Implementation of J (1992)"]]></title><description><![CDATA[
<p>It was the subject of quite some debate, see "Panel: Is J a Dialect of APL?" at <a href="http://www.jsoftware.com/papers/Vector_8_2_BarmanCamacho.pdf" rel="nofollow">http://www.jsoftware.com/papers/Vector_8_2_BarmanCamacho.pdf</a> . Ken and Roger backed off this stance after witnessing the controversy.<p>"Ken Iverson - The dictionary of J contains an introductory comment that J is a dialect of APL, so in a sense the whole debate is Ken's fault! He is flattered to think that he has actually created a new language."</p>
]]></description><pubDate>Sun, 14 Dec 2025 03:36:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46260557</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=46260557</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46260557</guid></item><item><title><![CDATA[New comment by mlochbaum in "Learning to read Arthur Whitney's C to become smart (2024)"]]></title><description><![CDATA[
<p>I think the article expresses no position. Most source code for array languages is not, in fact, inspired by APL. I encourage you to check a few random entries at [0]; Kap and April are some particularly wordy implementations, and even A+ mostly consists of code by programmers other than Whitney, with a variety of styles.<p>I do agree that Whitney was inspired to some extent by APL conventions (not exclusively; he was quite a Lisp fan and that's the source of his indentation style when he writes multi-line functions, e.g. in [1]). The original comment was not just a summary of this claim but more like an elaboration, and began with the much stronger statement "The way to understand Arthur Whitney's C code is to first learn APL", which I moderately disagree with.<p>[0] <a href="https://aplwiki.com/wiki/List_of_open-source_array_languages" rel="nofollow">https://aplwiki.com/wiki/List_of_open-source_array_languages</a><p>[1] <a href="https://code.jsoftware.com/wiki/Essays/Incunabulum" rel="nofollow">https://code.jsoftware.com/wiki/Essays/Incunabulum</a></p>
]]></description><pubDate>Mon, 03 Nov 2025 20:08:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45803777</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=45803777</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45803777</guid></item><item><title><![CDATA[New comment by mlochbaum in "Learning to read Arthur Whitney's C to become smart (2024)"]]></title><description><![CDATA[
<p>Dunno why electroly is dragging me into this but I believe you've misread the article. When it says "His languages take significantly after APL" it means the languages themselves and not their implementations.</p>
]]></description><pubDate>Mon, 03 Nov 2025 19:02:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=45802910</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=45802910</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45802910</guid></item><item><title><![CDATA[New comment by mlochbaum in "Learning to read Arthur Whitney's C to become smart (2024)"]]></title><description><![CDATA[
<p>It looks like a weirdo C convention to APLers too though. <i>Whitney</i> writes K that way, but single-line functions in particular aren't used a lot in production APL, and weren't even possible before dfns were introduced (the classic "tradfn" always starts with a header line). All the stuff like macros with implicit variable names, type punning, and ternary operators just doesn't exist in APL. And what APL's actually about, arithmetic and other primitives that act on whole immutable arrays, is not part of the style at all!</p>
]]></description><pubDate>Mon, 03 Nov 2025 18:23:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=45802384</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=45802384</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45802384</guid></item><item><title><![CDATA[New comment by mlochbaum in "My Ideal Array Language"]]></title><description><![CDATA[
<p>Ordinarily I'd make fun of the Germans for giving such an ugly name to a nice concept, but I've always found "comfortable" to be rather unpleasant too (the root "comfort" is fine).</p>
]]></description><pubDate>Mon, 04 Aug 2025 16:59:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44788512</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=44788512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44788512</guid></item><item><title><![CDATA[New comment by mlochbaum in "My Ideal Array Language"]]></title><description><![CDATA[
<p>It's just. So gross. Say it. Sudden interruption of slime coming up your throat. Like walking out the door into a spiderweb. Alphabetically I was mistaken but in every way that matters I was right.</p>
]]></description><pubDate>Mon, 04 Aug 2025 16:43:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44788290</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=44788290</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44788290</guid></item><item><title><![CDATA[New comment by mlochbaum in "My Ideal Array Language"]]></title><description><![CDATA[
<p>Well, do you know how it works? Don't judge a book by its cover and all. Although none of these are entirely aiming for elegance. The first is code golf and the other two have some performance hacks that I doubt are even good any more, but replacing ∧≢⥊ with ∧⌜ in the last gets you something decent (personally I'm more in the "utilitarian code is never art" camp, but I'd have no reason to direct that at any specific language).<p>The double-struck characters have disappeared from the second and third lines, creating a fun puzzle. Original post <a href="https://www.ashermancinelli.com/csblog/2022-5-2-BQN-reflections.html" rel="nofollow">https://www.ashermancinelli.com/csblog/2022-5-2-BQN-reflecti...</a> has the answers.</p>
]]></description><pubDate>Mon, 04 Aug 2025 16:36:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44788184</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=44788184</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44788184</guid></item><item><title><![CDATA[New comment by mlochbaum in "Piano Keys"]]></title><description><![CDATA[
<p>The point that the article is addressing (but you have to ignore the image and study the equations to see this!) is that this sort of shifting can't equalize everything. In the span of 3 white keys C to E at the front, you have 2 black keys at the back, so if you take r to be the ratio of back-width to white key front-width then you have 3 = 5r. But in the 4 keys F to B, you've got 3 black keys so 4 = 7r. No single ratio works! So the article investigates various compromises. The B/12 solution is what seems to me the most straightforward, divide white keys in each of the sections C to E and F to B equally at the back, and don't expect anyone to notice the difference.</p>
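<p>Quick arithmetic check that the two sections really are incompatible:

```python
# Back-width ratio r forced by each section of the octave:
# C..E: 3 white fronts span 5 equal back widths, so 3 = 5r;
# F..B: 4 white fronts span 7 equal back widths, so 4 = 7r.
from fractions import Fraction

r_CE = Fraction(3, 5)  # = 0.6
r_FB = Fraction(4, 7)  # ~ 0.5714
assert r_CE != r_FB    # no single ratio satisfies both sections
```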
]]></description><pubDate>Sun, 20 Jul 2025 00:09:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=44620676</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=44620676</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44620676</guid></item><item><title><![CDATA[New comment by mlochbaum in "Blazing Matrix Products"]]></title><description><![CDATA[
<p>And the reason +˝ is fairly fast for long rows, despite that page claiming no optimizations, is that ˝ is defined to split its argument into cells, e.g. rows of a matrix, and apply + with those as arguments. So + is able to apply its ordinary vectorization, while it can't in some other situations where it's applied element-wise. This still doesn't make great use of cache and I do have some special code working for floats that does much better with a tiling pattern, but I wanted to improve +˝ for integers along with it and haven't finished those (widening on overflow is complicated).</p>
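<p>A NumPy analogy for the evaluation order (not CBQN's implementation): reducing over the leading axis does one whole-row vector addition per cell, where a scalar fold over individual elements would not vectorize.

```python
# Folding + over the cells (rows) of a matrix: each step is a
# full-row vector addition, so ordinary SIMD applies.
import numpy as np

m = np.arange(12).reshape(3, 4)
col_sums = np.add.reduce(m, axis=0)  # one vector add per row
assert col_sums.tolist() == [12, 15, 18, 21]
```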
]]></description><pubDate>Fri, 27 Jun 2025 14:24:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44397039</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=44397039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44397039</guid></item><item><title><![CDATA[New comment by mlochbaum in "Blazing Matrix Products"]]></title><description><![CDATA[
<p>The relevant operations for matrix multiply are leading-axis extension, shown near the end of [0], and Insert +˝ shown in [1]. Both for floats; the leading-axis operation is × but it's the same speed as + with floating-point SIMD. We don't handle these all that well, with needless copying in × and a lot of per-row overhead in +˝, but of course it's way better than scalar evaluation.<p>[0] <a href="https://mlochbaum.github.io/bencharray/pages/arith.html" rel="nofollow">https://mlochbaum.github.io/bencharray/pages/arith.html</a><p>[1] <a href="https://mlochbaum.github.io/bencharray/pages/fold.html" rel="nofollow">https://mlochbaum.github.io/bencharray/pages/fold.html</a></p>
]]></description><pubDate>Fri, 27 Jun 2025 14:10:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44396937</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=44396937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44396937</guid></item><item><title><![CDATA[New comment by mlochbaum in "Klong: A Simple Array Language"]]></title><description><![CDATA[
<p>To be clear, you are referring to the preface to "An Introduction to Array Programming in Klong", right? Having just checked it, I find this to be a very strange angle of attack, because that section is almost exclusively about why the syntax in particular is important. Obviously you disagree (I also think the syntax is overblown, and wish more writing focused on APL's semantic advantages over other array-oriented languages). I think this is a simple difference in taste and there's no need to reach so far for another explanation.</p>
]]></description><pubDate>Fri, 20 Jun 2025 22:38:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44332758</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=44332758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44332758</guid></item><item><title><![CDATA[New comment by mlochbaum in "Klong: A Simple Array Language"]]></title><description><![CDATA[
<p>Oddly enough, the biggest mistake in how I presented BQN early on was thinking only APL insiders would be interested, when in fact the APLers went back to APL, and people who hadn't tried other array languages or hadn't gotten far with them were the most successful with BQN. Plenty of people coming to BQN have worked with Numpy or whatever, but I don't think this has the same deterrent effect; they see BQN as different enough to be worth learning. Julia in particular is very different: I don't find that it culturally emphasizes array programming at all.</p>
]]></description><pubDate>Fri, 20 Jun 2025 22:07:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44332563</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=44332563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44332563</guid></item><item><title><![CDATA[New comment by mlochbaum in "Klong: A Simple Array Language"]]></title><description><![CDATA[
<p>Search for "teaching" at <a href="https://aplwiki.com/wiki/APL_conference" rel="nofollow">https://aplwiki.com/wiki/APL_conference</a>. I count at least five papers about teaching non-APL topics using APL. The language is not only possible to read, it's designed for it.</p>
]]></description><pubDate>Fri, 20 Jun 2025 20:46:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44331939</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=44331939</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44331939</guid></item><item><title><![CDATA[New comment by mlochbaum in "Klong: A Simple Array Language"]]></title><description><![CDATA[
<p>"already": APL dates back to about 1966, and even K from 1993 predates Numpy and Julia. But yes, we do not live in caves and are familiar with these languages. Klong has even been implemented in Numpy, see <a href="https://github.com/briangu/klongpy">https://github.com/briangu/klongpy</a>.</p>
]]></description><pubDate>Fri, 20 Jun 2025 18:44:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44330674</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=44330674</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44330674</guid></item><item><title><![CDATA[New comment by mlochbaum in "DumPy: NumPy except it's OK if you're dum"]]></title><description><![CDATA[
<p>Author of BQN here, I agree with how section "What about APL?" describes the APL family as not fundamentally better (although details like indexing are often less messy). I outlined a system with lexically-scoped named axes at <a href="https://gist.github.com/mlochbaum/401e379ff09d422e2761e16fedbcd506" rel="nofollow">https://gist.github.com/mlochbaum/401e379ff09d422e2761e16fed...</a> . The linear algebra example would end up something like this:<p><pre><code>    solve(X[i,_], Y[j,_], A[i,j,_,_]) = over i, j/+: Y * linalg_solve(A, X)</code></pre></p>
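<p>A loose NumPy reading of that expression, assuming the semantics from my gist (explicit loops over the named axes i and j, with j/+ as a sum over j; the shapes are made up for illustration):

```python
# solve(X[i,_], Y[j,_], A[i,j,_,_]) = over i, j/+: Y * linalg_solve(A, X)
# read as: for each i, sum over j of Y[j] (elementwise) times solve(A[i,j], X[i]).
import numpy as np

rng = np.random.default_rng(0)
ni, nj, n = 2, 3, 4
X = rng.standard_normal((ni, n))
Y = rng.standard_normal((nj, n))
A = rng.standard_normal((ni, nj, n, n))

out = np.array([sum(Y[j] * np.linalg.solve(A[i, j], X[i]) for j in range(nj))
                for i in range(ni)])
assert out.shape == (ni, n)
```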
]]></description><pubDate>Sat, 24 May 2025 17:40:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44082602</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=44082602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44082602</guid></item><item><title><![CDATA[New comment by mlochbaum in "Purely Functional Sliding Window Aggregation Algorithm"]]></title><description><![CDATA[
<p>The queue method is popular, but there's a much faster (branch-free) and in my opinion simpler way, known as the van Herk/Gil-Werman algorithm in image processing. It splits the input into windows and pairs up a backward scan on one window with a forward scan on the next. This works for any associative function. When I learned about it, I was very surprised that it's not taught more often (the name's not doing it any favors)! I wrote a tutorial page on it for my SIMD-oriented language, mostly about vectorizing it, which I didn't quite finish writing up, but with what I think is a reasonable presentation in the first part: <a href="https://github.com/mlochbaum/Singeli/blob/master/doc/minfilter.md">https://github.com/mlochbaum/Singeli/blob/master/doc/minfilt...</a><p>I also found an interesting streaming version here recently: <a href="https://signalsmith-audio.co.uk/writing/2022/constant-time-peak-hold/" rel="nofollow">https://signalsmith-audio.co.uk/writing/2022/constant-time-p...</a><p>EDIT: On closer inspection, this method is equivalent to the one I described, and not the one I'm used to seeing with queues (that starts my tutorial). The stack-reversing step is what forms a backwards scan. The combination of turning it sequential by taking in one element at a time but then expressing this in functional programming makes for a complicated presentation, I think.</p>
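<p>A plain-Python sketch of the idea (sequential, no SIMD; min as the associative function): backward scans within each window-sized block, forward scans that restart at block boundaries, and each output combines one element of each.

```python
# van Herk/Gil-Werman sliding-window minimum: ~3 comparisons per
# element regardless of window size k, with no data-dependent branches.
def sliding_min(xs, k):
    n = len(xs)
    fwd = xs[:]                      # forward min-scans, restarting each block
    for i in range(1, n):
        if i % k:
            fwd[i] = min(fwd[i], fwd[i - 1])
    bwd = xs[:]                      # backward min-scans within each block
    for i in range(n - 2, -1, -1):
        if (i + 1) % k:
            bwd[i] = min(bwd[i], bwd[i + 1])
    # window [i, i+k-1] = suffix of one block + prefix of the next
    return [min(bwd[i], fwd[i + k - 1]) for i in range(n - k + 1)]

assert sliding_min([4, 2, 5, 1, 3, 6, 0], 3) == [2, 1, 1, 1, 0]
```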
]]></description><pubDate>Mon, 24 Feb 2025 16:18:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=43161286</link><dc:creator>mlochbaum</dc:creator><comments>https://news.ycombinator.com/item?id=43161286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43161286</guid></item></channel></rss>