<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dandotway</title><link>https://news.ycombinator.com/user?id=dandotway</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 13:24:30 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dandotway" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dandotway in "Emergence of Life in an Inflationary Universe (2020)"]]></title><description><![CDATA[
<p>> The most philosophically and mathematically consistent interpretation of Quantum Mechanics is that there are "many worlds".<p>Why is "many worlds" the most mathematically consistent? (I won't bother asking for the philosophical bit; I'm sure it's too long for an HN post.)</p>
]]></description><pubDate>Mon, 24 Jan 2022 05:50:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=30054125</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=30054125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30054125</guid></item><item><title><![CDATA[New comment by dandotway in "Emergence of Life in an Inflationary Universe (2020)"]]></title><description><![CDATA[
<p>Atheists: "We have faith natural selection at a cosmic level makes life way more likely."<p>Christians: "We have faith supernatural selection at a cosmic level makes afterlife way more likely."</p>
]]></description><pubDate>Mon, 24 Jan 2022 05:43:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=30054095</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=30054095</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30054095</guid></item><item><title><![CDATA[New comment by dandotway in "Faster CPython (2021) [pdf]"]]></title><description><![CDATA[
<p>Question for JIT experts: JS and Python are extremely hard to optimize because they both allow redefining anything at any time, yet V8 crushes Python by an order of magnitude in many benchmarks[1]:<p><pre><code>         All times in seconds (lower is better)
  
  benchmark          Node.js     Python 3    Py takes x times longer
  ==================================================================
  regex-redux          5.06         1.34     ** Py3 is faster (PCRE C)
  pidigits             1.14         1.16     0.02
  reverse-complement   2.59         6.62     2.56
  k-nucleotide        15.84        46.31     2.92
  binary-trees         7.13        44.70     6.27
  fasta                1.91        36.90     19.3
  fannkuch-redux      11.31       341.45     30.19 (wut?)
  mandelbrot           4.04       177.35     43.9  (srsly?)
  n-body               8.42       541.34     64.3  (no numpy fortran cheat?)
  spectral-norm        1.67       112.97     67.65 (Python for Science[TM])

</code></pre>
(If Python is allowed to call fast C code (PCRE) for regex-redux, I don't see why Python shouldn't be allowed to call fast Fortran BLAS/etc for n-body, but rules are rules, I guess. V8 doesn't cheat at spectral-norm, it's 100% legit JS.)<p>Both ecosystems have billions invested by corporations worth trillions; bottomless money exists to make Python faster. So why isn't Python faster?<p>V8's tactics include dynamically watching which loops/calls get run more than (say) 10,000 times and then speculatively generating[2] native machine instructions on the assumption the types don't change ("yep, foo(a,b) is only called with a and b both float64, generate greased x86-64 fastpath"), but gracefully falling back if the types later do change ("our greased x86-64 'foo(float64,float64)' routine will be passed a string! Fall back to slowpath! Fall back!"). Why doesn't Python do this? Is it because Google recruited the only genius unobtainium experts who could write such a thing? Google is a massive Python user, too.<p>[1] <a href="https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/python.html" rel="nofollow">https://benchmarksgame-team.pages.debian.net/benchmarksgame/...</a><p>[2] <a href="https://ponyfoo.com/articles/an-introduction-to-speculative-optimization-in-v8" rel="nofollow">https://ponyfoo.com/articles/an-introduction-to-speculative-...</a><p>EDIT: HN commenter Jasper_ perhaps has the answer in another post[3]: "The current CPython maintainers believe that a JIT would be far too much complexity to maintain, and would drive away new contributors. I don't agree with their analysis; I would argue the bigger thing driving new contributors away is them actively doing so. People show up all the time with basic patches for things other language runtimes have been doing for 20+ years and get roasted alive for daring to suggest something as arcane as method dispatch caches. 
The horror!"<p>[3] <a href="https://news.ycombinator.com/item?id=30047289#30050248" rel="nofollow">https://news.ycombinator.com/item?id=30047289#30050248</a></p>
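A toy sketch (in Python, since that's the language in question) of the V8 tactic described above: count calls per argument-type signature, and once a call is hot with stable types, dispatch to a type-guarded fast path, falling back to the fully dynamic path when the guard fails. The decorator, threshold, and fastpath table are all invented for illustration; this is not how CPython or V8 is actually structured.

```python
# Toy model of speculative specialization. Real JITs emit machine code;
# here the "fast path" is just a pre-picked specialized function.
HOT_THRESHOLD = 3  # real engines use thousands of calls, not 3

def speculate(fastpaths):
    """fastpaths: dict mapping a tuple of argument types to a fast impl."""
    def wrap(generic):
        counts = {}
        def dispatch(*args):
            sig = tuple(type(a) for a in args)
            counts[sig] = counts.get(sig, 0) + 1
            fast = fastpaths.get(sig)
            if fast is not None and counts[sig] > HOT_THRESHOLD:
                return fast(*args)   # type guard passed: greased fast path
            return generic(*args)    # guard failed or not hot yet: slow path
        return dispatch
    return wrap

def add_floats(a, b):                # stand-in for generated native code
    return a + b

@speculate({(float, float): add_floats})
def foo(a, b):
    return a + b                     # generic path handles any '+' types

for _ in range(10):
    foo(1.0, 2.0)                    # goes hot, starts taking the fast path
print(foo("fall", "back"))          # string args: graceful fallback
```

The hard part a real JIT adds on top of this sketch is invalidation: if someone redefines `+` or monkey-patches a class, every specialized path built on the old assumption must be thrown away.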
]]></description><pubDate>Mon, 24 Jan 2022 05:05:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=30053916</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=30053916</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30053916</guid></item><item><title><![CDATA[New comment by dandotway in "Keeping POWER relevant in the open source world"]]></title><description><![CDATA[
<p>Big Endian POWER isn't bug-for-bug compatible with buggy Javascript usage of typed arrays that assumes little endianness, and thus browsers/nodejs/deno on POWER will be exposed to bugs that don't affect little endian x86-64/ARM.<p>After so many years of endianness bugs in C/C++ code, it's perplexing that the web standards committee voted to put typed arrays in Javascript in such a way that exposes platform byte order to Javascript programmers who can't generally be expected to have low-level C/C++/ASM experience with memory layout issues:<p><pre><code>  function endianness () {
    let u32arr = new Uint32Array([0x11223344]);
    let u8arr = new Uint8Array(u32arr.buffer);
    if (u8arr[0] === 0x44)
        return 'Little Endian';
    else if (u8arr[0] === 0x11)
        return 'Big Endian';
    else
        return 'WTF (What a Terrible Failure)';
  }
</code></pre>
EDIT: my old Power Mac was big endian, but I just read that POWER has an endianness toggle. So in little endian mode it ought to run endian-buggy JS with bug-for-bug compatibility.</p>
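For comparison, the same probe and the explicit-byte-order fix in Python's struct module (JS eventually got the equivalent escape hatch in DataView, whose get/set methods take an explicit littleEndian flag). This snippet is a sketch for illustration, not part of the comment above:

```python
import struct
import sys

# Same probe as the Uint32Array/Uint8Array trick above: write 0x11223344
# in the platform's native byte order and look at the first byte.
native = struct.pack('=I', 0x11223344)   # '=' means native byte order
print('Little Endian' if native[0] == 0x44 else 'Big Endian')
print(sys.byteorder)                     # the stdlib will also just tell you

# The fix: spell the byte order out, so results match on every CPU,
# which is what DataView's explicit littleEndian argument does in JS.
assert struct.pack('<I', 0x11223344) == b'\x44\x33\x22\x11'  # little-endian
assert struct.pack('>I', 0x11223344) == b'\x11\x22\x33\x44'  # big-endian
```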
]]></description><pubDate>Sat, 22 Jan 2022 23:58:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=30042153</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=30042153</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30042153</guid></item><item><title><![CDATA[New comment by dandotway in "SICP: JavaScript Edition available for pre-order"]]></title><description><![CDATA[
<p>> [15] Richard Waters (1979) developed a program that automatically analyzes traditional Fortran programs<p>Anyone have a PDF link to a paper about this Fortran analyzer?</p>
]]></description><pubDate>Fri, 21 Jan 2022 00:34:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=30017578</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=30017578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30017578</guid></item><item><title><![CDATA[New comment by dandotway in "The horizon problem for faster than light travel"]]></title><description><![CDATA[
<p>While faster-than-light (FTL) travel is impossible through spacetime as PhD physicists know it (i.e. as "Relativity-confirming experiments reveal it"), there are forms of apparent FTL that have mainstream PhD acceptance, because they don't allow information to travel from point A to point B faster than light and thus do not violate causality.<p>The expansion of the observable universe itself being apparently FTL is perhaps the most interesting. If we point the Hubble telescope (and soon JWST) towards opposite ends of the observable universe, we observe extreme-redshift galaxies receding away from each other in opposite directions at more than twice lightspeed, if we measure by "distance in light years" from earth. But no photons or information from a galaxy at one edge of our observable universe can be sent to a galaxy at the opposite edge: they are receding away from each other too rapidly. These two galaxies exist Beyond the Cosmological Event Horizon relative to each other, are forever unknowable to each other, and thus causality is not violated.<p>An advanced alien race in galaxy A might be able to beam a message to earth before galaxy A vanishes Beyond the Cosmological Event Horizon within the next 100 million years earth-time, and likewise galaxy B might beam a message to earth in that window, but earth can't relay the message from A to B or B to A, because they have already forever receded from each other and will soon both likewise forever recede from us.</p>
]]></description><pubDate>Fri, 21 Jan 2022 00:18:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=30017423</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=30017423</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30017423</guid></item><item><title><![CDATA[New comment by dandotway in "Patents are out of control, and they’re hurting innovation (2017)"]]></title><description><![CDATA[
<p>Individuals and small business owners don't have time to learn patent law, nor do they have the money for dedicated legal departments like rich corporations. If I were sued I would not listen to some anonymous stranger on HN. I would get a lawyer, and I would be charged $300-$800/hr by said lawyer. In the patent ecosystem the patent-holding apex whales and lawyer-sharks hunt smaller creatures and devour them whole.</p>
]]></description><pubDate>Thu, 20 Jan 2022 20:06:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=30013997</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=30013997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30013997</guid></item><item><title><![CDATA[New comment by dandotway in "Patents are out of control, and they’re hurting innovation (2017)"]]></title><description><![CDATA[
<p><p><pre><code>  - never talk to cops

  - never read a patent

  - never read proprietary source code
</code></pre>
I need a nice printable version of this to post on my wall.</p>
]]></description><pubDate>Thu, 20 Jan 2022 18:17:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=30012521</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=30012521</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30012521</guid></item><item><title><![CDATA[New comment by dandotway in "Patents are out of control, and they’re hurting innovation (2017)"]]></title><description><![CDATA[
<p>Clearly you've never been sued by a patent troll.<p>The patent system only benefits (1) rich corporations that can afford the millions of dollars in lawyer fees to litigate patent claims, and (2) the lawyers who receive said fees.</p>
]]></description><pubDate>Thu, 20 Jan 2022 18:14:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=30012478</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=30012478</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30012478</guid></item><item><title><![CDATA[New comment by dandotway in "Why static languages suffer from complexity"]]></title><description><![CDATA[
<p>+NaN  For comments that make me laugh out loud for duration T>2.0 seconds, I wish HN provided a way to transmute/sacrifice one's past karma points into additional +1 mod points.</p>
]]></description><pubDate>Wed, 19 Jan 2022 20:11:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=29999314</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=29999314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29999314</guid></item><item><title><![CDATA[New comment by dandotway in "Why static languages suffer from complexity"]]></title><description><![CDATA[
<p>So whenever I have to study someone else's 'dynamic' python I encounter this sort of thing:<p><pre><code>  def foo(bar, baz):
      bar(baz)
      ...
</code></pre>
What the heck are 'bar' and 'baz'? I can deduce no more than that 'bar' can be called with a single argument 'baz'. I can't use my editor/IDE to "go to definition" on bar/baz to figure out what is going on, because everything is dynamically determined at runtime, and even<p><pre><code>  grep -ri '\(foo\|bar\|baz\)' --include \*.py
</code></pre>
won't tell me much about foo/bar/baz; it will only set a hound dog on a long and winding scent trail.</p>
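For contrast, here is the same shape of code with type hints added (the function and variable names are invented for this sketch): now "go to definition" works on the annotation, and a type checker like mypy can reject a bad call site before runtime.

```python
from typing import Callable

collected: list[str] = []

def collect(s: str) -> None:
    collected.append(s)

# The annotated foo: 'bar' must be a callable taking one str, and 'baz'
# must be a str, so the editor/IDE has something concrete to navigate to.
def foo(bar: Callable[[str], None], baz: str) -> None:
    bar(baz)

foo(collect, "hello")   # OK
# foo(42, "hello")      # mypy: Argument 1 has incompatible type "int"
print(collected)
```

(Annotations are optional and unenforced at runtime, so this restores navigability without giving up the dynamism; `list[str]` as a builtin annotation needs Python 3.9+.)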
]]></description><pubDate>Wed, 19 Jan 2022 18:20:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=29997726</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=29997726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29997726</guid></item><item><title><![CDATA[New comment by dandotway in "How do you handle a plutonium-powered pacemaker?"]]></title><description><![CDATA[
<p><a href="https://archive.ph/uEt1V" rel="nofollow">https://archive.ph/uEt1V</a></p>
]]></description><pubDate>Mon, 17 Jan 2022 18:40:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=29970079</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=29970079</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29970079</guid></item><item><title><![CDATA[How do you handle a plutonium-powered pacemaker?]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.wsj.com/articles/how-do-you-handle-a-plutonium-powered-pacemaker-11642437060">https://www.wsj.com/articles/how-do-you-handle-a-plutonium-powered-pacemaker-11642437060</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=29970075">https://news.ycombinator.com/item?id=29970075</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 17 Jan 2022 18:40:02 +0000</pubDate><link>https://www.wsj.com/articles/how-do-you-handle-a-plutonium-powered-pacemaker-11642437060</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=29970075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29970075</guid></item><item><title><![CDATA[New comment by dandotway in "A simple defer feature for C"]]></title><description><![CDATA[
<p>> I like the idea of a small, performant language<p>So the earliest C compilers were under 5000 lines of C+asm:<p><pre><code>  https://github.com/mortdeus/legacy-cc
</code></pre>
If you want a minimal "standards committee approved" C89 compiler, then Fraser and Hanson's lcc and Fabrice Bellard's tcc both come out to over 30,000 lines. To understand C89 fully you have to read, at a minimum, a ~220 page (14,248 line) copy of the (draft) ANSI standard:<p><pre><code>  http://port70.net/~nsz/c/c89/c89-draft.txt
</code></pre>
I don't know how small the smallest C23 compiler would be, with all the new features added since C89, but it's at the point where a single human can no longer realistically implement a C compiler. It's becoming a language only rich corporations have the wealth and power to implement and steer.</p>
]]></description><pubDate>Mon, 17 Jan 2022 04:05:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=29963203</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=29963203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29963203</guid></item><item><title><![CDATA[New comment by dandotway in "A simple defer feature for C"]]></title><description><![CDATA[
<p>"Another programming language" cannot even meaningfully exist if all programming languages are forced to have the same feature set. Should Python get C-like low-level pointer manipulation so that Python users don't need to "pull in another programming language" of C to do pointer manipulation?<p>C doesn't need "defer" because C programmers have managed since the 1970s to implement operating systems, compilers, interpreters, editors, etc., just fine without it. Those who want a bigger C can use C++, this pond is big enough for two fish.</p>
]]></description><pubDate>Sun, 16 Jan 2022 23:20:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=29961460</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=29961460</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29961460</guid></item><item><title><![CDATA[New comment by dandotway in "A simple defer feature for C"]]></title><description><![CDATA[
<p>Features that seem like a good idea at the time often don't stand the test of time 20-30 years later. In the mid-90s Object-Oriented Programming was super-hyped, so a bunch of other languages bolted on OO, such as Fortran and Ada. But now we have Go/Rust/Zig rejecting brittle OO taxonomies, because you always end up having a DuckBilledPlatypus that "is a" Mammal and "is a" EggLayer.<p>A great strength of C is that if you want more features you just go to a subset of C++; there is no need to add them to C. C++ is the big, ambitious, kitchen-sink language. When C++ exists we don't need to bloat C.<p>Fortran was originally carefully designed so that people who aren't compiler experts can generate very fast (and easily parallelized) code working with arrays in the intuitive and obvious way. But later Fortran added OO and pointers, making it much harder to auto-parallelize and avoid aliasing slowdowns. Now that GPUs are rising, it turns out that the original Fortran model of everything-is-array-or-scalar works really well for automatically offloading to the GPU. GPUs don't like method-lookup tables, nor do they like lambdas, which are equivalent to stateful Objects with a single Apply method.<p>Scientists are moving to CUDA now, which on the GPU side deletes all these features that Fortran was bloated with. Nvidia now offers proprietary CUDA Fortran, which is much more in the spirit of original Fortran, deleting OO and pointers for code that runs on the GPU. If the ISO standards committee hadn't ruined ISO Fortran for scientific computing by bloating it with trendy features, we could all be running ISO Fortran automatically on CPUs and GPUs with identical code (or just a few pragmas) and not be locked into proprietary Nvidia CUDA.<p>But GPUs are now mainly used for crypto greed instead of science, for finding cancer cures or making more aerodynamic aircraft, so maybe it all doesn't matter anyway.</p>
]]></description><pubDate>Sun, 16 Jan 2022 19:13:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=29958901</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=29958901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29958901</guid></item><item><title><![CDATA[New comment by dandotway in "Nothing like this will be built again (2002)"]]></title><description><![CDATA[
<p>> The two reactors at Torness have a combined electricity output of 1200 MW<p>If they bumped 1200 MW to 1210 MW they'd have the 1.21 GW they need to operate a Flux Capacitor.<p><a href="https://www.youtube.com/watch?v=f-77xulkB_U" rel="nofollow">https://www.youtube.com/watch?v=f-77xulkB_U</a></p>
]]></description><pubDate>Tue, 11 Jan 2022 17:24:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=29894488</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=29894488</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29894488</guid></item><item><title><![CDATA[New comment by dandotway in "How to make $13M on the App Store"]]></title><description><![CDATA[
<p>As long as Apple gets its 30% cut, why is Apple incentivized to hire a bunch more staff with healthcare, vacation, and pension benefits who are paid to crack down on this? "Dear AAPL Shareholders, rejoice! We reduced our 30% cut of $10B per quarter from garbage in the App Store by spending over $1B per quarter to hire over 10,000 staff to eliminate these ill-gotten gains."</p>
]]></description><pubDate>Mon, 10 Jan 2022 21:14:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=29882622</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=29882622</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29882622</guid></item><item><title><![CDATA[New comment by dandotway in "Practical Common Lisp (2005)"]]></title><description><![CDATA[
<p>If you have a second, I'm curious to learn more about your head-scratching experiences with the JVM. I want to make a program that I can trust will still run exactly the same (bug-for-bug compatible) many years in the future, without maintenance.<p>One approach is to make it completely bug-free using a formal verifier against a strict formalization of C, but this takes extraordinary effort, and there is no guarantee that bugs in the stack of garbage my app sits atop and the libraries I call (SDL2?) won't cause unwanted user-observable behavior. Truly bug-free is actually stricter than what I really need; I just need exact bug-for-bug compatibility so that my bugs are always deterministic.<p>It seems with the JVM, at least, the bug-for-bug determinism is really good (except where it obviously isn't and can't be: thread scheduling, network communications, ...). For client GC there is a low-latency guarantee and people seem happy. Have you found the Java GC is not all it's reputed to be?<p>There are so many huge companies with billions invested in Java and its bug-for-bug compatibility that I think it could easily still be around in 100 years, along with COBOL, and it is a safe investment for individuals who value longevity over whatever is trendiest and shiniest.</p>
]]></description><pubDate>Sat, 08 Jan 2022 21:32:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=29856522</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=29856522</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29856522</guid></item><item><title><![CDATA[New comment by dandotway in "Practical Common Lisp (2005)"]]></title><description><![CDATA[
<p>Having learned a number of Lisp systems in the past, I wouldn't necessarily recommend ANSI Common Lisp as a first choice for a Lisp unless your needs are very particular, because it is an enormous design-by-committee language having a draft standard of about 1360 pages:<p><pre><code>  https://lisp.com.br/archive/ansi_cl_standard_draft_nice.pdf
</code></pre>
This means that in addition to the time spent doing the programming that solves your technical problem, you also have to devote considerable time to language lawyering, investigating whether the interpretation of a Lisp expression your Common Lisp produces is or is not standard-conforming, using only the frequently ambiguous English of that enormous standard as your guide.<p>The Java JVM was carefully designed to give identical results on all hosts unless you go out of your way to get nondeterminism or platform-specific behavior (make the value of a number N depend on thread scheduling, use JNI that assumes a platform byte order, etc.). C and C++ allow undefined behavior if you shift a 64-bit int by 64 or more bits, but Java on the JVM masks the lower 6 bits of a 64-bit shift count ('& 0x3f') so that you get the same result on all CPUs rather than a CPU-dependent result:<p><pre><code>  https://docs.oracle.com/javase/specs/jls/se8/html/jls-15.html#jls-15.19
</code></pre>
The nice thing about this is that the JVM makes a "try it and see" approach more viable: if your program has a bug, at least it has the <i>same</i> bug everywhere. You won't suddenly get a crash 10 years in the future when your customers upgrade to a new CPU, because your Java bytecodes on their new CPU will faithfully maintain bug-for-bug compatibility.<p>Clojure is a Lisp that runs on the JVM. I haven't personally used Clojure, but having done quite a bit of Common Lisp, Emacs Lisp, Scheme, etc., I can say it looks to be very well designed and very well loved by its users, and using it could spare you from having to language-lawyer the ANSI Common Lisp standard, as it seems to be more "try it and see" friendly.<p>I've been learning a lot about formal verification for C programs (how to truly make code that is bug-free), and to do this you need to first make certain decisions about how you are going to formalize the language standard. E.g. will you assume that CHAR_BIT==8, or will you allow CHAR_BIT>=8 because the official ANSI/ISO C standard allows this, even though all modern computers have CHAR_BIT==8? Then for any program you input to your verifier you must judge whether all its behavior is well-defined or whether there is undefined behavior (arithmetic overflow, etc.).<p>There are quite good formal verification tools for Java as well, like JBMC and Krakatoa, and smaller Lisp languages are traditionally among the least tedious to formally verify (very simple semantics, unlike Common Lisp), but the time investment to learn these tools is enormous.</p>
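The JLS 15.19 masking rule is easy to model in Python, whose integers are arbitrary precision and so never hit the C/C++ undefined case natively. The helper below is a sketch written for illustration, not code from the spec:

```python
# Model Java's 64-bit '<<': the shift count is masked to its low 6 bits
# (JLS 15.19) and the result wraps to a signed 64-bit value, so every
# CPU computes the identical answer.
MASK64 = (1 << 64) - 1

def java_shl64(x, n):
    r = (x << (n & 0x3F)) & MASK64
    return r - (1 << 64) if r >= (1 << 63) else r   # two's-complement view

assert java_shl64(1, 64) == 1            # 64 & 0x3f == 0: no shift at all
assert java_shl64(1, 65) == 2            # 65 & 0x3f == 1
assert java_shl64(1, 63) == -(1 << 63)   # Long.MIN_VALUE, same everywhere
```

In C or C++, `1ULL << 64` is undefined and really does differ across CPUs (x86 masks the count in hardware, others may not); the JVM pays a tiny cost to pin one answer down.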
]]></description><pubDate>Sat, 08 Jan 2022 19:12:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=29855044</link><dc:creator>dandotway</dc:creator><comments>https://news.ycombinator.com/item?id=29855044</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29855044</guid></item></channel></rss>