<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: PeCaN</title><link>https://news.ycombinator.com/user?id=PeCaN</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 00:45:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=PeCaN" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by PeCaN in "A Ray of Hope: Array Programming for the 21st Century [video]"]]></title><description><![CDATA[
<p>To be honest I don't really see people who don't want to learn APL putting in the effort to completely upend how they think about programming and algorithms in order to use other array languages, regardless of syntax. (After all, that's by far the hardest part of learning APL; the symbols are easy enough, and easy to look up anyway.)<p>map is general in kind of the wrong way. You could, after all, add a #map method to Object for scalars, make a Matrix class that also implements it, and then just call map everywhere. But you still run into the problem, mentioned in the video, that it doesn't easily generalize to x + y where both x and y are arrays; you have to use zip or map2 or something (and then you still have to figure out how to do vector + matrix). Yes, you can kind of do explicit "array programming" in Ruby if for some reason you're really compelled to, but it will look awful. And that's just what array languages do for you implicitly. As a paradigm there's a bit more to it than "just call map everywhere"—there's still all the functions for expressing algorithms as computations on arrays.</p>
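<p>A quick Python sketch (standing in for the Ruby scenario above; all names are illustrative) of how "just call map" stops scaling once a second array shows up:

```python
xs = [1, 2, 3]
ys = [10, 20, 30]

# Unary map is easy:
doubled = [x * 2 for x in xs]

# But x + y over two arrays needs an explicit zip (a "map2"):
summed = [x + y for x, y in zip(xs, ys)]

# And vector + matrix needs yet another shape-specific incantation:
m = [[1, 2, 3], [4, 5, 6]]
vec_plus_matrix = [[e + y for e, y in zip(row, ys)] for row in m]
```

Each new combination of shapes needs its own plumbing; an array language handles all three lines with the same +.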
]]></description><pubDate>Fri, 20 Nov 2020 07:49:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=25158188</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=25158188</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25158188</guid></item><item><title><![CDATA[New comment by PeCaN in "A Ray of Hope: Array Programming for the 21st Century [video]"]]></title><description><![CDATA[
<p>If you watch the video, it looks like their proposed syntax is not APL-like but closer to mainstream languages.<p>I'm honestly not sure if this is a good thing or not. You said "easier" syntax than APL, but APL is honestly a very easy syntax for working with arrays. That's a significant part of the advantage of APL: it makes it very easy to come up with, talk about, and maintain array algorithms.<p>Matlab, Julia, and other languages aimed at scientific computing have some array-language-like traits but lack a lot of the functions that make APL more generally applicable. And .map is all wrong; it's extra noise and it doesn't generalize down to scalars or up to matrices—the defining feature of array languages is that operations are <i>implicitly</i> polymorphic over the rank of the input.</p>
]]></description><pubDate>Thu, 19 Nov 2020 12:26:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=25148799</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=25148799</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25148799</guid></item><item><title><![CDATA[New comment by PeCaN in "A Ray of Hope: Array Programming for the 21st Century [video]"]]></title><description><![CDATA[
<p>I've been working on something like this on and off for the past 4 years or so, although with something more like generators than streams.<p>I think it's a very, very promising idea (I admit to being heavily biased towards anything APL-influenced), although surprisingly difficult to get right. Gilad Bracha is obviously way smarter than me, so I'm definitely curious where he goes with this.<p>One additional idea that I keep trying to make work is integrating variations of (constraint) logic programming and treating the solutions to a predicate as a generator or stream that operations can be rank-polymorphically lifted over. As a simple example, a range function could be defined and used like (imaginary illustrative syntax)<p><pre><code>    range(n,m,x) :- n <= x, x <= m
    
    primesUpto(n) = range(2,n,r),               # create a generator containing all solutions wrt r
      mask(not(contains(outerProduct(*, r, r), r)), r)  # as in the video
    </code></pre>
I've never really gotten this to work nicely and it always feels like there's a sort of separation between the logic world and the array world. However this <i>feels</i> incredibly powerful, especially as part of some sort of database, so I keep returning to it even though I'm not really sure it goes anywhere super useful in the end.</p>
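<p>For what it's worth, the array half of the sketch above works fine on its own; here's a literal Python transcription of the primes example (a set standing in for the contains/mask step, and a list comprehension for the outer product):

```python
def primes_upto(n):
    r = list(range(2, n + 1))
    # outerProduct(*, r, r): every pairwise product of candidates
    products = {a * b for a in r for b in r}
    # mask(not(contains(products, r)), r): keep x not expressible as a product
    return [x for x in r if x not in products]
```

It's the logic-programming half — treating range's solution set itself as the generator — that I can never make compose cleanly.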
]]></description><pubDate>Thu, 19 Nov 2020 08:42:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=25147472</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=25147472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25147472</guid></item><item><title><![CDATA[New comment by PeCaN in "Servo’s new home"]]></title><description><![CDATA[
<p>it's not so much that gcc does anything specific as that LLVM is just really, really inefficient—they don't track compilation time at all so it's easy for releases to regress, half the stuff in there is academics implementing their PhD theses aiming for algorithmic accuracy with little regard for efficiency, and LLVM's design itself is somewhat inefficient (multiple IRs, lots and lots of pointers in the IR representation, etc)<p>that said this makes it an excellent testbed, but compilation will keep getting slower every release until they start caring about it</p>
]]></description><pubDate>Wed, 18 Nov 2020 07:42:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=25134127</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=25134127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25134127</guid></item><item><title><![CDATA[New comment by PeCaN in "Servo’s new home"]]></title><description><![CDATA[
<p>that's just because gcc has certain optimization passes that can't be disabled<p>(that said gcc -O0 is still absolutely nothing like what a human would write)</p>
]]></description><pubDate>Wed, 18 Nov 2020 07:27:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=25134066</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=25134066</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25134066</guid></item><item><title><![CDATA[New comment by PeCaN in "Apple Silicon M1 Emulating x86 Is Still Faster Than Every Other Mac"]]></title><description><![CDATA[
<p>- it takes some die space, sure, but no x86 processors are actually limited by instruction decoding (except, iirc, the first-generation Xeon Phi in some cases)<p>- huge pages don't exactly require weird OS hoops, although i agree the 4KB→2MB→1GB page sizes are inconvenient</p>
]]></description><pubDate>Mon, 16 Nov 2020 03:34:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=25107661</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=25107661</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25107661</guid></item><item><title><![CDATA[New comment by PeCaN in "XuanTie C906 based Allwinner RISC-V processor to power $12 Linux SBC's"]]></title><description><![CDATA[
<p>All the Intel CPUs with RDRAND, although I guess that's not exactly hidden anymore.</p>
]]></description><pubDate>Mon, 09 Nov 2020 15:11:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=25035547</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=25035547</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25035547</guid></item><item><title><![CDATA[New comment by PeCaN in "A First Look at the JIT"]]></title><description><![CDATA[
<p>I sort of wonder if this approach to JITing is worth it over just writing a faster interpreter. This is basically what V8's baseline JIT used to be, and they switched to an interpreter without that much of a performance hit (and there are still a lot of potential optimizations for their interpreter). LuaJIT 1's compiler was similar, although somewhat more elaborate, and yet it was still routinely beaten by LuaJIT 2's interpreter (to be fair, LuaJIT 2's interpreter is an insane feat of engineering).</p>
]]></description><pubDate>Thu, 05 Nov 2020 09:21:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=24996767</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24996767</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24996767</guid></item><item><title><![CDATA[New comment by PeCaN in "An ex-ARM engineer critiques RISC-V"]]></title><description><![CDATA[
<p>That's fair. It's definitely not a killer (or even, in my opinion, the worst thing about RISC-V), just another one of those random little annoyances; I'm not really sure why RISC-V doesn't include it.</p>
]]></description><pubDate>Sun, 01 Nov 2020 13:24:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=24959344</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24959344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24959344</guid></item><item><title><![CDATA[New comment by PeCaN in "An ex-ARM engineer critiques RISC-V"]]></title><description><![CDATA[
<p>It's not like someone is proposing some crazy new instruction to do vector math on binary coded decimals while also calculating CRC32 values as a byproduct. It's conditional move. Every ISA I can think of has that.</p>
]]></description><pubDate>Sun, 01 Nov 2020 13:03:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=24959191</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24959191</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24959191</guid></item><item><title><![CDATA[New comment by PeCaN in "An ex-ARM engineer critiques RISC-V"]]></title><description><![CDATA[
<p>>[She] complains Risc-V need 4 instructions to do what x86_64 and arm does in two, but... it says Risc-V.<p>So… what, it should take 5 instructions?<p>Executing more instructions for a (really) common operation doesn't mean an ISA is somehow better designed or "more RISC", it means it executes more instructions.<p>>And x86_64 CISC instructions devolve to a pile of microcode anyway.<p>Some people seem to have this impression that like every x86 instruction is implemented in microcode (very, very few of them are) and even charitably interpreting that as "decodes to multiple uops" (which is completely different) is still not right. The mov in the example is 1 uop.</p>
]]></description><pubDate>Sun, 01 Nov 2020 12:53:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=24959121</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24959121</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24959121</guid></item><item><title><![CDATA[New comment by PeCaN in "An ex-ARM engineer critiques RISC-V"]]></title><description><![CDATA[
<p>I'm not sure about this "RISC way" stuff. From a uarch standpoint the RISC vs CISC distinction is moot, and from an ISA standpoint the only real quantifiable difference seems to be being a load-store architecture.<p>ISAs without conditional moves tend to have predicated instructions, which are functionally the same thing. I'm not actually aware of any traditionally RISC architectures that have neither conditional moves nor predicated instructions. And while AArch64 (ARMv8) removed predication as a general feature, it gained a few "conditional data processing" instructions (e.g. CSEL is basically cmov), so clearly at least ARM thinks there's a benefit even with modern branch predictors.<p>Conditional instructions are really, really handy when you need them. They're an escape hatch for when you have an unbiased branch and need to turn control flow into data flow.</p>
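<p>To make "turn control flow into data flow" concrete, here's a toy Python sketch of a branchless select — the effect cmov/CSEL gives you in hardware, done here with the masking trick an ISA with neither would have to fall back on (cond is assumed to be 0 or 1):

```python
def select(cond, a, b):
    # Returns a if cond == 1 else b, with no branch anywhere:
    # -1 is all-ones in two's complement, so `mask` is either
    # all-ones (pick a) or all-zeros (pick b).
    mask = -cond
    return (a & mask) | (b & ~mask)
```

With an unbiased condition this costs a fixed handful of ALU ops instead of a ~50%-mispredicted branch.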
]]></description><pubDate>Sun, 01 Nov 2020 12:06:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=24958900</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24958900</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24958900</guid></item><item><title><![CDATA[New comment by PeCaN in "The Heart of RISC-V Development Is Unmatched"]]></title><description><![CDATA[
<p>ARM has it. DEC Alpha had it (before x86, even).<p>I get that there are a lot of narrow-use instructions but popcount is a pretty well-known and common operation.</p>
]]></description><pubDate>Fri, 30 Oct 2020 10:58:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=24941007</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24941007</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24941007</guid></item><item><title><![CDATA[New comment by PeCaN in "The Heart of RISC-V Development Is Unmatched"]]></title><description><![CDATA[
<p>popcount is extremely useful in a lot of algorithms, from RSA to chess engines to sparse arrays and tries<p>it's honestly pretty baffling that RISC-V doesn't have it (perils of design-by-academia)</p>
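<p>An example of the sparse-array/trie use mentioned above, sketched in Python: the classic HAMT trick where a node stores a bitmap plus a dense list of children, and popcount of the bits below a slot turns a sparse slot number into a dense index:

```python
def popcount(x):
    # stand-in for a hardware popcount instruction
    return bin(x).count("1")

def sparse_get(bitmap, items, slot):
    bit = 1 << slot
    if not bitmap & bit:
        return None  # slot is empty
    # popcount of the bits below `slot` = number of occupied slots before it
    return items[popcount(bitmap & (bit - 1))]
```

This is a hot inner loop in hash-array-mapped tries, so doing popcount in one instruction vs a software loop matters.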
]]></description><pubDate>Fri, 30 Oct 2020 09:36:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=24940572</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24940572</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24940572</guid></item><item><title><![CDATA[New comment by PeCaN in "Pyston v2: Faster Python"]]></title><description><![CDATA[
<p>Not that. <a href="https://en.wikipedia.org/wiki/Threaded_code" rel="nofollow">https://en.wikipedia.org/wiki/Threaded_code</a><p>It's an interpreter implementation technique.</p>
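<p>A toy Python sketch of the idea (roughly direct threading; names made up): the "program" is a list of handler references interleaved with operands, and each handler jumps straight to the next handler instead of returning to a central dispatch switch:

```python
def run_threaded():
    stack = []

    def op_push(program, pc):
        stack.append(program[pc + 1])
        return program[pc + 2](program, pc + 2)  # thread to next handler

    def op_add(program, pc):
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)
        return program[pc + 1](program, pc + 1)

    def op_halt(program, pc):
        return stack.pop()

    # compute 2 + 3: the code stream holds handler addresses,
    # not opcodes to be switched on
    program = [op_push, 2, op_push, 3, op_add, op_halt]
    return program[0](program, 0)
```

Real implementations do this with computed goto or tail calls so each handler's indirect jump gets its own branch-predictor entry, which is a big part of why it beats a single switch loop.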
]]></description><pubDate>Thu, 29 Oct 2020 05:40:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=24927613</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24927613</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24927613</guid></item><item><title><![CDATA[New comment by PeCaN in "Intel exits memory business, sale to Hynix for $9B"]]></title><description><![CDATA[
<p>Ironically enough I feel like IBM is actually an example of a giant company that, against all odds, is somehow still innovating, at least with its POWER CPUs. They're doing really interesting and open-ended things like CAPI and the upcoming Open Memory Interface.</p>
]]></description><pubDate>Wed, 21 Oct 2020 10:32:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=24846432</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24846432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24846432</guid></item><item><title><![CDATA[New comment by PeCaN in "Room-Temperature Superconductivity Achieved for the First Time"]]></title><description><![CDATA[
<p>how are europoors so consistently butthurt about fucking units of temperature</p>
]]></description><pubDate>Thu, 15 Oct 2020 23:43:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=24795714</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24795714</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24795714</guid></item><item><title><![CDATA[New comment by PeCaN in "What you could steal from the Kakoune code editor, and get away with"]]></title><description><![CDATA[
<p>I considered multiple selections a bit of a gimmick in sublime text and various emacs/vim extensions, but the way they work in kakoune feels completely different.<p>Think of vim (and kakoune) as basically a highly interactive language for editing text. At least at a conceptual level, something like diw is basically a function (d) applied to data (iw). Vim is completely scalar: the type of all the functions is more or less something like string → string. Kakoune is the APL of text editors. Multiple selections are an array of strings to operate on, and it automatically maps the function (editor commands) over every selection. I find this extremely attractive and powerful.<p>At least for me kakoune blows all other editors out of the water. It really is quite good.</p>
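<p>The analogy in miniature (Python; the command name is made up): if an editor command is a function str → str, vim applies it to one selection and kakoune maps it over all of them:

```python
def delete_first_word(s):
    # stand-in for a vim/kakoune command like `diw` at a selection's start
    return s.split(" ", 1)[1] if " " in s else ""

# vim: one selection, one application
one = delete_first_word("foo bar")

# kakoune: an array of selections, the command mapped over each
selections = ["foo bar", "spam eggs ham"]
edited = [delete_first_word(s) for s in selections]
```

Same "function", rank-polymorphic over how many selections you have — which is the APL-flavored point.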
]]></description><pubDate>Tue, 06 Oct 2020 09:38:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=24696096</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24696096</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24696096</guid></item><item><title><![CDATA[New comment by PeCaN in "Einstein's description of gravity just got much harder to beat"]]></title><description><![CDATA[
<p>Humans were actually incapable of abstract thought until hacker news invented it.</p>
]]></description><pubDate>Sat, 03 Oct 2020 09:40:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=24670835</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24670835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24670835</guid></item><item><title><![CDATA[New comment by PeCaN in "Nvidia Ampere GA102 GPU Architecture [pdf]"]]></title><description><![CDATA[
<p>Probably not, since it's optimized for scientific workloads (it was designed specifically for the K computer's replacement, so it doesn't have texture units, ROPs, etc.; you'd have to do too much in software to make it actually render things). However, I think the overall design is really good and has enormous potential, if not for graphics then at the very least for ML.<p>The vector architectures with extremely high memory bandwidth coming out of Japan recently (NEC SX-Aurora Tsubasa, Fujitsu A64FX) are pretty fascinating.</p>
]]></description><pubDate>Sat, 19 Sep 2020 20:50:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=24530236</link><dc:creator>PeCaN</dc:creator><comments>https://news.ycombinator.com/item?id=24530236</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24530236</guid></item></channel></rss>