<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tripletao</title><link>https://news.ycombinator.com/user?id=tripletao</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 21:05:24 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tripletao" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tripletao in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>> The core of this is the Arrhenius rate, (A × T^n × exp(-E_a/(R×T))), which involves an exponentiation, a division, a multiplication, and an exponential. On a GPU, that's multiple SFU calls chained with ALU ops. In an EML tree, the whole expression compiles to a single tree that flows through the pipeline in one pass.<p>I think you're missing the reason why the GPU kicks you out of the fast path when you need that special function. The special function evaluation is fundamentally more expensive in energy, whether that cost is paid in area or time. Evaluation of the special functions with throughput similar to the arithmetic throughput would require much more area for the special functions, which for most computation isn't a good tradeoff. That's why the GPU's designers chose to make your exp2 slow.<p>Replacing everything with dozens of cascaded special functions makes everything uniform, but it's uniformly much worse. I feel like you're assuming that by parallelizing your "EML tree" in dedicated hardware that problem goes away; but area isn't free in either dollars or power, so it doesn't.</p>
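<p>A toy sketch of that asymmetry (the polynomial degree and evaluation scheme here are illustrative, not any real GPU's SFU algorithm): even a modest-precision exp2 costs dozens of multiply-adds, where a plain multiplication costs one.

```python
LN2 = 0.6931471805599453

def exp2_poly(f, terms=15):
    # Taylor series of exp(f*ln2) on [0, 1), Horner form: roughly
    # 2*terms floating-point ops for one special-function value,
    # versus exactly one op for a plain multiply.
    x = f * LN2
    acc = 1.0
    for k in range(terms, 0, -1):
        acc = acc * (x / k) + 1.0
    return acc

for f in (0.0, 0.25, 0.5, 0.99):
    assert abs(exp2_poly(f) - 2.0**f) < 1e-12
```

Real hardware uses range reduction plus tables or minimax polynomials rather than a raw Taylor series, but the op-count gap is the same in kind.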
]]></description><pubDate>Mon, 13 Apr 2026 19:12:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47756597</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47756597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47756597</guid></item><item><title><![CDATA[New comment by tripletao in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>So to clarify, you think that replacing every multiplication with 24 transcendental function evaluations (12 eml(x, y), each of which evaluates exp(x) and ln(y) plus the subtraction; see the paper's Fig 2) is somehow a win?<p>The fact that addition, subtraction, and multiplication run quickly on typical processors isn't arbitrary--those operations map well onto hardware, for roughly the same reasons that elementary school students can easily hand-calculate them. General transcendental functions are fundamentally more expensive in time, die area, and/or power, for the same reasons that elementary school students can't easily hand-calculate them. A primitive where all arithmetic (including addition, subtraction, or negation) involves multiple transcendental function evaluations is not computationally faster, lower-power, lower-area, or better in any other practical way.<p>The comments here are filled with people who seem to be unaware of this, and it's pretty weird. Do CS programs not teach computer arithmetic anymore?</p>
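<p>To illustrate the principle (this is the classical log-domain trick, not the paper's actual eml-based construction, which needs even more evaluations): routing a single multiplication through transcendentals already triples the function-evaluation count.

```python
import math

CALLS = {"exp": 0, "ln": 0}

def exp_(x):
    CALLS["exp"] += 1
    return math.exp(x)

def ln_(x):
    CALLS["ln"] += 1
    return math.log(x)

# Multiplication of positive reals routed through transcendentals:
# x*y = exp(ln x + ln y).
def mul_via_logs(x, y):
    return exp_(ln_(x) + ln_(y))

assert abs(mul_via_logs(3.0, 7.0) - 21.0) < 1e-9
assert CALLS == {"exp": 1, "ln": 2}   # three transcendental calls for one multiply
```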
]]></description><pubDate>Mon, 13 Apr 2026 16:55:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47754862</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47754862</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47754862</guid></item><item><title><![CDATA[New comment by tripletao in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>It would almost always be much, much worse. Practical numerical libraries (whether implemented in hardware or software) contain lots of redundancy, because their goal is to give you an optimized primitive as close as possible to the operation you actually want. For example, the library provides an optimized tan(x) to save you from calling sin(x)/cos(x), because one nasty function evaluation (as a power series, lookup table, CORDIC, etc.) is faster than two nasty function evaluations and a divide.<p>Of course the redundant primitives aren't free, since they add code size or die area. In choosing how many primitives to provide, the designer of a numerical library aims to make a reasonable tradeoff between that size cost and the speed benefit.<p>This paper takes that tradeoff to the least redundant extreme because that's an interesting theoretical question, at the cost of transforming commonly-used operations with simple hardware implementations (e.g. addition, multiplication) into computational nightmares. I don't think anyone has found a practical application for their result yet, but that's not the point of the work.</p>
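<p>A quick check of the tan example, with Python's math library standing in for the numerical library:

```python
import math

# The dedicated tan(x) and the composed sin(x)/cos(x) agree, but the
# former is one nasty function evaluation instead of two plus a divide
# -- exactly the redundancy-for-speed tradeoff described above.
for x in (0.1, 0.7, 1.3):
    assert math.isclose(math.tan(x), math.sin(x) / math.cos(x), rel_tol=1e-12)
```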
]]></description><pubDate>Mon, 13 Apr 2026 07:07:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47748683</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47748683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47748683</guid></item><item><title><![CDATA[New comment by tripletao in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>I'm not sure what you mean by this? It's true that any Boolean operation can be expressed in terms of two-input NAND gates, but that's almost never how real IC designers work. A typical standard cell library has lots of primitives, including all common gates and up to entire flip-flops and RAMs, each individually optimized at a transistor level. Realization with NAND2 and nothing else would be possible, but much less efficient.<p>Efficient numerical libraries likewise contain lots of redundancy. For example, sqrt(x) is mathematically equivalent to pow(x, 0.5), but sqrt(x) is still typically provided separately and faster. Anyone who thinks that eml() function is supposed to lead directly to more efficient computation has missed the point of this (interesting) work.</p>
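<p>A sketch of the NAND2-only point (helper names are mine):

```python
# Everything below is built from two-input NAND alone -- possible, but
# a real standard-cell library provides dedicated, transistor-optimized
# cells for each of these instead.
def nand(a, b): return 1 - (a & b)

def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor_(a, b): return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b)  == (a | b)
        assert xor_(a, b) == (a ^ b)
```

Note the XOR already costs four NAND gates where an optimized XOR cell is a single cell; that gap is the hardware analogue of sqrt(x) vs. pow(x, 0.5).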
]]></description><pubDate>Mon, 13 Apr 2026 06:09:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47748213</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47748213</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47748213</guid></item><item><title><![CDATA[New comment by tripletao in "A forecast of the fair market value of SpaceX's businesses"]]></title><description><![CDATA[
<p>If you already own highly appreciated QQQ in a taxable account then your options are limited, since moving to a different ETF would realize the capital gain. It may be preferable to hold even if you think you're losing money buying SpaceX at an inflated price, if selling would lose even more in taxes.<p>If you own an ETF that buys SpaceX but without overweighting vs. float, then you're not contributing to the inflated price in that sense. You're still buying at the inflated price though, so the NASDAQ rule change still affects you indirectly.<p>I guess the point of the "wealth tax" comment is that any higher taxation of the wealthiest individuals would reduce their power to shape the rules to their favor, and a wealth tax is potentially harder to avoid than income taxes. I think most prior attempts just made them emigrate, though.</p>
]]></description><pubDate>Thu, 02 Apr 2026 20:36:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47619838</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47619838</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47619838</guid></item><item><title><![CDATA[New comment by tripletao in "What Every Experimenter Must Know About Randomization"]]></title><description><![CDATA[
<p>Nobody cares even then, because any bias due to theoretical deviation from k-equidistribution is negligible compared to the desired random variance, even if we average trials until the Sun burns out. By analogy, if we're generating an integer between 1 and 3 with an 8-bit PRNG without rejection, then we should worry about bias because 2^8 isn't a multiple of 3; but if we're using a 256-bit PRNG then we should not, even though 2^256 also isn't a multiple.<p>If you think there's any practical difference between a stream of true randomness and a modern CSPRNG seeded once with 256 bits of true randomness, then you should be able to provide a numerical simulation that detects it. If you (and, again, the world's leading cryptographers) are unable to adversarially create such a situation, then why are you worried that it will happen by accident?<p>SHA-1 is practically broken, in the sense that a practically relevant chosen-prefix attack can be performed for <$100k. This has no analogy with anything we're discussing here, so I'm not sure why you mentioned it.<p>You wrote:<p>> There are concepts like "k-dimensional equidistribution" etc. etc... where in some ways the requirements of a PRNG are far, far, higher than a cryptographically sound PRNG<p>I believe this claim is unequivocally false. A non-CS PRNG may be better because it's faster or otherwise easier to implement, but it's not better because it's less predictable. You've provided no reference for this claim except that PCG comparison table that I believe you've misunderstood per mananaysiempre's comments. It would be nice if you could either post something to support your claim or correct it.</p>
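<p>The 8-bit vs. 256-bit comparison can be made exact (a sketch of the no-rejection modulo reduction described above):

```python
from fractions import Fraction

def max_bias(state_bits, n):
    # Generate 1..n from a k-bit PRNG by reduction mod n, no rejection:
    # values below (2^k mod n) occur ceil(2^k/n)/2^k of the time, the
    # rest floor(2^k/n)/2^k. Return the worst-case deviation from 1/n.
    total = 2**state_bits
    hi = Fraction((total + n - 1) // n, total)   # probability of the likelier values
    return hi - Fraction(1, n)

assert max_bias(8, 3) == Fraction(86, 256) - Fraction(1, 3)   # ~0.26%: worth rejecting
assert max_bias(256, 3) < Fraction(1, 2**254)                 # utterly negligible
```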
]]></description><pubDate>Fri, 20 Feb 2026 04:38:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47083832</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47083832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47083832</guid></item><item><title><![CDATA[New comment by tripletao in "What Every Experimenter Must Know About Randomization"]]></title><description><![CDATA[
<p>If you can find a case where this matters, then you've found a practical way to distinguish a CSPRNG seeded with true randomness from a stream of all true randomness. The cryptographers would consider that a weakness in the CSPRNG algorithm, which for the usual choices would be headline news. I don't think it's possible to prove that no such structure exists, but the world's top (unclassified) cryptographers have tried and failed to find it.</p>
]]></description><pubDate>Thu, 19 Feb 2026 19:11:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47077784</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47077784</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47077784</guid></item><item><title><![CDATA[New comment by tripletao in "What Every Experimenter Must Know About Randomization"]]></title><description><![CDATA[
<p>Ah, then I see where you got 4 assignments and 2x probability. I think that is the problem the author was worried about, and that it would be a real concern with those numbers; but the much smaller number of possibilities in your example creates incorrect intuition for the 2^256-possibility case.</p>

]]></description><pubDate>Thu, 19 Feb 2026 18:10:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47076941</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47076941</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47076941</guid></item><item><title><![CDATA[New comment by tripletao in "What Every Experimenter Must Know About Randomization"]]></title><description><![CDATA[
<p>This is correct, but for the author's example of randomizing turkeys I wouldn't bother. A modern CSPRNG is fast enough that it's usually easier just to generate lots of excess randomness (so that the remainder is nonzero but tiny compared to the quotient and thus negligible) than to reject for exactly zero remainder.<p>For example, the turkeys could be randomized by generating 256 bits of randomness per turkey, then sorting by that and taking the first half of the list. By a counting argument this must be biased (since the number of assignments isn't usually a power of two), but the bias is negligible.<p>The rejection methods may be faster, and thus beneficial in something like a Monte Carlo simulation that executes many times. Rejection methods are also often the simplest way to get distributions other than uniform. The additional complexity doesn't seem worthwhile to me otherwise, though: more effort and risk of a coding mistake for no meaningful gain.</p>
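<p>The turkey scheme above, sketched in Python (the secrets module stands in for whatever CSPRNG is at hand):

```python
import secrets

# Give each turkey 256 random bits, sort on them, and treat the first
# half of the order as the treatment group. Collisions in 256-bit
# labels are so unlikely that ties can be ignored.
def assign(turkeys):
    ordered = sorted(turkeys, key=lambda t: secrets.token_bytes(32))
    half = len(turkeys) // 2
    return ordered[:half], ordered[half:]

treatment, control = assign(list(range(20)))
assert len(treatment) == 10 and len(control) == 10
assert sorted(treatment + control) == list(range(20))
```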
]]></description><pubDate>Thu, 19 Feb 2026 17:47:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47076662</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47076662</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47076662</guid></item><item><title><![CDATA[New comment by tripletao in "What Every Experimenter Must Know About Randomization"]]></title><description><![CDATA[
<p>> If, say, you do the assignment from a 256 bit random number such that 4 of the possible assignments are twice as likely as the others under your randomization procedure<p>Your numbers don't make sense. Your number of assignments is way fewer than 2^256, so the problem the author is (mistakenly) concerned about doesn't arise--no sane method would result in any measurable deviation from equiprobable, certainly not "twice as likely".<p>With a larger number of turkeys and thus assignments, the author is correct that some assignments must be impossible by a counting argument. They are incorrect that it matters--as long as the process of winnowing our set to 2^256 candidates isn't measurably biased (i.e., correlated with turkey weight ex television effects), it changes nothing. There is no difference between discarding a possible assignment because the CSPRNG algorithm choice excludes it (as we do for all but 2^256) and discarding it because the seed excludes it (as we do for all but one), as long as both processes are unbiased.</p>
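<p>To put numbers on it (the turkey count is hypothetical, since none is given above):

```python
import math

# With, say, 20 turkeys split 10/10, the number of possible
# assignments is microscopic next to the 2^256 seed space, so no
# assignment needs to be excluded at all -- each maps to a vast
# number of seeds.
assignments = math.comb(20, 10)
assert assignments == 184756
assert assignments < 2**256
```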
]]></description><pubDate>Thu, 19 Feb 2026 17:36:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47076520</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47076520</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47076520</guid></item><item><title><![CDATA[New comment by tripletao in "What Every Experimenter Must Know About Randomization"]]></title><description><![CDATA[
<p>Everyone agrees that most of the possible shuffles become impossible when a CSPRNG with 256 bits of state is used. The question is just whether that matters practically. The original author seems to imply it does, but I believe they're mistaken.<p>Perhaps it would help to think of the randomization in two stages. In the first, we select 2^256 members from the set of all possible permutations. (This happens when we select our CSPRNG algorithm.) In the second, we select a single member from the new set of 2^256. (This happens when we select our seed and run the CSPRNG.) I believe that measurable structure in either selection would imply a practical attack on the cryptographic algorithm used in the CSPRNG, which isn't known to exist for any common such algorithm.</p>
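<p>For scale on that first stage (a full shuffle of 58 arbitrary items, nothing specific to this experiment):

```python
import math

# The counting argument bites quickly for shuffles: already at 58
# items there are more permutations than 256-bit seeds, so most
# shuffles become unreachable. The question is only whether that is
# detectable.
assert math.factorial(57) < 2**256 < math.factorial(58)
```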
]]></description><pubDate>Thu, 19 Feb 2026 08:19:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47071259</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47071259</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47071259</guid></item><item><title><![CDATA[New comment by tripletao in "What Every Experimenter Must Know About Randomization"]]></title><description><![CDATA[
<p>As you note, a 256-bit CSPRNG is trivially not equidistributed for a tuple of k n-bit integers when k*n > 256. For a block cipher I think it trivially is equidistributed in some cases, like AES-CTR when k*n is an integer submultiple of 256 (since the counter enumerates all the states and AES is a bijection). Maybe more cases could be proven if someone cared, but I don't think anyone does.<p>Computational feasibility is what matters. That's roughly what I meant by "measurable", though it's better to say it explicitly as you did. I'm also unaware of any computationally feasible way to distinguish a CSPRNG seeded once with true randomness from a stream of all true randomness, and I think that if one existed then the PRNG would no longer be considered CS.</p>
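<p>The bijection argument scaled down to 8 bits (an illustration of the counting, not a claim about any real cipher):

```python
import random
from collections import Counter

# Toy analogue of AES-CTR: run a counter through a fixed bijection on
# 8-bit states. Every output appears exactly once, so every 4-bit
# half-block value is exactly equidistributed.
perm = list(range(256))
random.Random(0).shuffle(perm)   # an arbitrary fixed bijection

outputs = [perm[counter] for counter in range(256)]
assert sorted(outputs) == list(range(256))        # bijection: each state once

nibble_counts = Counter(v >> 4 for v in outputs)  # 4 is a submultiple of 8
assert all(c == 16 for c in nibble_counts.values())
```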
]]></description><pubDate>Thu, 19 Feb 2026 08:00:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47071134</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47071134</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47071134</guid></item><item><title><![CDATA[New comment by tripletao in "What Every Experimenter Must Know About Randomization"]]></title><description><![CDATA[
<p>Are they claiming that ChaCha20 deviates measurably from equally distributed in k dimensions in tests, or just that it hasn't been proven to be equally distributed? I can't find any reference for the former, and I'd find that surprising. The latter is not surprising or meaningful, since the same structure that makes cryptanalysis difficult also makes that hard to prove or disprove.<p>For emphasis, an empirically measurable deviation from k-equidistribution would be a cryptographic weakness (since it means that knowing some members of the k-tuple helps you guess the others). So that would be a strong claim requiring specific support.</p>
]]></description><pubDate>Thu, 19 Feb 2026 05:21:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47070219</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=47070219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47070219</guid></item><item><title><![CDATA[New comment by tripletao in "A flawed paper in management science has been cited more than 6k times"]]></title><description><![CDATA[
<p>It's good that Ioannidis improved the analysis in response to criticism, but that doesn't mean the criticism was invalid; if anything, that's typically evidence of the opposite. As I read Gelman's complaint of wasted time and demand for an apology, it seems entirely focused on the incorrect analysis. He writes:<p>> The point is, if you’re gonna go to all this trouble collecting your data, be a bit more careful in the analysis!<p>I read that as a complaint about the analysis, not a claim that the study shouldn't have been conducted (and analyzed correctly).<p>Gelman's blog has exposed bad statistical research from many authors, including the management scientists under discussion here. I don't see any evidence that they applied a harsher standard to Ioannidis.</p>
]]></description><pubDate>Tue, 27 Jan 2026 04:35:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46775590</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=46775590</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46775590</guid></item><item><title><![CDATA[New comment by tripletao in "A flawed paper in management science has been cited more than 6k times"]]></title><description><![CDATA[
<p>I mean that in the context of "Most Published Research Findings Are False", he criticized work (unrelated to COVID, since that didn't exist yet) that used incorrect statistical methods even if its final conclusions happened to be correct. He was right to do so, just as Gelman was right to criticize his serosurvey--it's nice when you get the right answer by luck, but that doesn't help you or anyone else get the right answer next time.<p>It's also hard to determine whether that serosurvey (or any other study) got the right answer. The IFR is typically observed to decrease over the course of a pandemic. For example, the IFR for COVID is much lower now than in 2020 even among unvaccinated patients, since they almost certainly acquired natural immunity in prior infections. So high-quality later surveys showing lower IFR don't say much about the IFR back in 2020.</p>
]]></description><pubDate>Mon, 26 Jan 2026 00:34:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46760289</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=46760289</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46760289</guid></item><item><title><![CDATA[New comment by tripletao in "A flawed paper in management science has been cited more than 6k times"]]></title><description><![CDATA[
<p>Ioannidis corrected for false positives with a point estimate rather than the confidence interval. That's better than not correcting, but not defensible when that's the biggest source of statistical uncertainty in the whole calculation. Obviously true zero can be excluded by other information (people had already tested positive by PCR), but if we want p < 5% in any meaningful sense then his serosurvey provided no new information. I think it was still an interesting and publishable result, but the correct interpretation is something like Figure 1 from Gelman's<p><a href="https://sites.stat.columbia.edu/gelman/research/unpublished/specificity.pdf" rel="nofollow">https://sites.stat.columbia.edu/gelman/research/unpublished/...</a><p>I don't think Gelman walked anything back in his P.S. paragraphs. The only part I see that could be mistaken for that is his statement that "'not statistically significant' is not the same thing as 'no effect'", but that's trivially obvious to anyone with training in statistics. I read that as a clarification for people without that background.<p>We'd already discussed PCR specificity ad nauseam, at<p><a href="https://news.ycombinator.com/item?id=36714034">https://news.ycombinator.com/item?id=36714034</a><p>These test accuracies mattered a lot while trying to forecast the pandemic, but in retrospect one can simply look at the excess mortality, no tests required. So it's odd to still be arguing about that after all the overrun hospitals, morgues, etc.</p>
]]></description><pubDate>Sun, 25 Jan 2026 18:25:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46756642</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=46756642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46756642</guid></item><item><title><![CDATA[New comment by tripletao in "A flawed paper in management science has been cited more than 6k times"]]></title><description><![CDATA[
<p>He published a serosurvey that claimed to have found a signal in a positivity rate that was within the 95% CI of the false-positive rate of the test (and thus indistinguishable from zero to within the usual p < 5%). He wasn't necessarily wrong in all his conclusions, but neither were the other researchers that he rightly criticized for their own statistical gymnastics earlier.<p><a href="https://statmodeling.stat.columbia.edu/2020/04/19/fatal-flaws-in-stanford-study-of-coronavirus-prevalence/" rel="nofollow">https://statmodeling.stat.columbia.edu/2020/04/19/fatal-flaw...</a><p>That said, I'd put both his serosurvey and the conduct he criticized in "Most Published Research Findings Are False" in a different category from the management science paper discussed here. Those seem mostly explainable by good-faith wishful thinking and motivated reasoning to me, while that paper seems hard to explain except as a knowing fraud.</p>
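<p>The statistical point, with hypothetical numbers chosen only to illustrate the structure (not the study's actual data):

```python
# Suppose 50 positives out of 3330 tests (a ~1.5% raw positivity
# rate), with a test whose specificity is only pinned down to
# [98.4%, 100%] by its own validation sample.
tests, positives = 3330, 50
specificity_ci_low = 0.984
worst_fp_rate = 1 - specificity_ci_low

# At the low end of the specificity interval, false positives alone
# account for every observed positive, so a true prevalence of zero
# cannot be excluded at the usual p < 5% from this data alone.
expected_false_positives = tests * worst_fp_rate
assert expected_false_positives >= positives
```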
]]></description><pubDate>Sun, 25 Jan 2026 16:52:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46755692</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=46755692</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46755692</guid></item><item><title><![CDATA[New comment by tripletao in "Chase to become new issuer of Apple Card"]]></title><description><![CDATA[
<p>Bank of America Travel Rewards is effectively 2.625% on everything if you put >$100k in a Merrill account and redeem the rewards against "travel" (including restaurants in any location) expenses charged to the card. There's no foreign transaction fee.<p>The biggest downside is all the dark patterns at Merrill trying to sell you advisory services. That seems to be only upon account opening, though.</p>
]]></description><pubDate>Thu, 08 Jan 2026 07:26:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46538267</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=46538267</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46538267</guid></item><item><title><![CDATA[New comment by tripletao in "Chase to become new issuer of Apple Card"]]></title><description><![CDATA[
<p>Their problem is with false positives they find, not true positives you find. My application for a credit card was somehow flagged as fraudulent. Chase repeatedly asked for additional forms of ID, then told me the scans I sent were illegible. (The scans were fine; I think they just needed an excuse.) I went to a branch with the physical documents, and they said they couldn't look at them. The branch put me in an office and called the same telephone support, with the same result. I eventually gave up.<p>I guess I'm lucky they rejected me before any money changed hands. I've heard horror stories from people with significant assets at their bank, locked out until an actual lawsuit (the letter from a lawyer didn't work) finally got their attention. I think it's like Google support, usually fine but catastrophic when it's not.</p>
]]></description><pubDate>Thu, 08 Jan 2026 01:25:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46535862</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=46535862</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46535862</guid></item><item><title><![CDATA[New comment by tripletao in "Creating a bespoke data diode for air‑gapped networks"]]></title><description><![CDATA[
<p>I think those optoisolators are indeed sold mostly for switching power supplies. That's probably why someone cared enough about aging to write an app note, since the ambient temperature is high there and the exact CTR matters more when it's in that analog feedback loop. I've also seen them for digital inputs in industrial control systems, where speeds are slow and the wires might be coming from far away on a noisy ground.<p>That said, I believe optical isolation is typical for these "data diode" applications, even between two computers in the same rack. I don't think it provides any security benefit, but it's cheap and customers expect it; so there's no commercial incentive to do anything else.</p>
]]></description><pubDate>Wed, 07 Jan 2026 06:30:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46523256</link><dc:creator>tripletao</dc:creator><comments>https://news.ycombinator.com/item?id=46523256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46523256</guid></item></channel></rss>