<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: SideQuark</title><link>https://news.ycombinator.com/user?id=SideQuark</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 10:29:11 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=SideQuark" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by SideQuark in "It's OK to compare floating-points for equality"]]></title><description><![CDATA[
<p>Completely worked out at least 20 years ago: <a href="https://www.lomont.org/papers/2005/CompareFloat.pdf" rel="nofollow">https://www.lomont.org/papers/2005/CompareFloat.pdf</a></p>
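For reference, the core trick in that paper — reinterpreting the float's bits as an integer so that tolerance is measured in ulps — looks roughly like this in Python (my adaptation; the paper works in C):

```python
import struct

def ulp_distance(a: float, b: float) -> int:
    """Distance in units-in-the-last-place between two finite doubles.

    Reinterpret each double's bits as a signed 64-bit integer; the
    lexicographic ordering of IEEE 754 bit patterns then makes adjacent
    floats differ by exactly 1.
    """
    def to_ordered_int(x: float) -> int:
        i = struct.unpack("<q", struct.pack("<d", x))[0]
        # Remap negative floats so integer order matches float order.
        return i if i >= 0 else -(i & 0x7FFFFFFFFFFFFFFF)
    return abs(to_ordered_int(a) - to_ordered_int(b))

def almost_equal(a: float, b: float, max_ulps: int = 4) -> bool:
    return ulp_distance(a, b) <= max_ulps

print(almost_equal(0.1 + 0.2, 0.3))   # True: only 1 ulp apart
print(almost_equal(1.0, 1.0000001))   # False: hundreds of millions of ulps apart
```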
]]></description><pubDate>Sat, 18 Apr 2026 11:09:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47814931</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47814931</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47814931</guid></item><item><title><![CDATA[New comment by SideQuark in "The GNU libc atanh is correctly rounded"]]></title><description><![CDATA[
<p>> The extra half ulp error makes no difference to the accuracy of calculations<p>It absolutely does matter. The first, and most important, reason is that one needs to know the guarantees of every operation in order to design numerical algorithms that themselves meet some guarantee. Without knowing what the components provide, it's impossible to build algorithms on top of them with any guarantee. And this is needed in a massive number of applications: CAD, simulation, medical and financial systems, control systems, aerospace, and on and on.<p>And once one has a guarantee, tightening the lower components lets the higher components do less work. A function like this is a very low-level component, so putting the guarantees there saves work across tons of downstream code.<p>All of this is precisely what drove IEEE 754 to become a thing, and to become the standard in modern hardware.<p>> the problem is that languages traditionally rely on an OS provided libm leading to cross architecture differences<p>No, they don't, not for things like sqrt, atanh, and their relatives. They've relied on compiler-provided libraries for, well, as long as there have been languages. And the higher-level libraries, like BLAS, are built with specific compilers that provide guarantees via, again, the libraries those compilers ship. I've not seen OS-level calls that document the accuracy of their floating-point operations, but many language standards do, including C/C++, which underlies a lot of this code.</p>
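To make "half an ulp" concrete (a standard-library Python illustration, mine): a correctly rounded operation returns the representable double nearest the exact result, so its error is at most half the ulp at the result.

```python
import math

x = 1.0
# Spacing between consecutive doubles at 1.0: one ulp = 2**-52.
print(math.ulp(x))            # 2.220446049250313e-16
# A correctly rounded operation's worst-case error is half that spacing.
print(math.ulp(x) / 2)        # 1.1102230246251565e-16

# IEEE 754 requires sqrt to be correctly rounded, which is what lets
# tight error bounds be stated for every input; a loose sanity check:
y = math.sqrt(2.0)
print(abs(y * y - 2.0) <= 2 * math.ulp(y) * y)  # True
```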
]]></description><pubDate>Sat, 18 Apr 2026 11:01:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47814904</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47814904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47814904</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>All of these are standard fare in abstract algebra classes, and I didn’t care to write it all out. Once you have the “inverse” operations, subtraction and reciprocal, the entire structure follows for a large set of objects, whether Q or R or C or finite fields or division rings, and a host of other structures. So I only wrote - and 1/x.<p>Then subtraction is (x#y)#0 = x-y, and reciprocal is x#0 = 1/x. Addition follows from x+y = x-((x-x)-y); this used the additive identity 0.<p>Multiplication follows from<p>x^2 = x - 1/(1/x + 1/(1-x)), so we can square things. Then -2xy = (x-y)^2 - x^2 - y^2 is constructible. Then we can divide by -2 via x/-2 = 1/((0-1/x)-1/x), and there’s multiplication. In terms of #, this expression only needed the constant 1, the multiplicative identity.<p>Now multiplication and reciprocal give x * 1/y = x/y: division.<p>Any nontrivial ring has additive and multiplicative identities with 1 != 0, and those are the only constants needed above. If you assume this is Q or R or C, it may be possible to derive one from the other, not sure. But if you’re in these fields, you know 0 and 1 exist.<p>Then any element of Q is a finite sequence of ops. R can be constructed in whatever way you want: Dedekind cuts, Cauchy sequences, any of the usual constructions. Or assume R exists, and compute in it via the f(x,y).<p>This also works over finite fields (EML does not), division rings, even infinite fields of positive characteristic, function fields (think ratios of polynomials), basically any algebraic object with the 4 ops.</p>
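For the reals, these identities are easy to sanity-check numerically. A throwaway Python sketch (the function names are mine, and I adopt the convention 1/0 = 0 so degenerate intermediates like x#x stay defined; generic inputs only):

```python
def op(x, y):
    """The single binary operation x # y = 1 / (x - y).
    Convention: 1/0 = 0, so degenerate intermediates don't blow up."""
    d = x - y
    return 1.0 / d if d != 0.0 else 0.0

def recip(x):          # x # 0 = 1/x
    return op(x, 0.0)

def sub(x, y):         # (x # y) # 0 = x - y
    return op(op(x, y), 0.0)

def add(x, y):         # x + y = x - ((x - x) - y)
    return sub(x, sub(sub(x, x), y))

def square(x):         # x^2 = x - 1/(1/x + 1/(1-x))
    return sub(x, recip(add(recip(x), recip(sub(1.0, x)))))

def div_neg2(x):       # x / -2 = 1/((0 - 1/x) - 1/x)
    return recip(sub(sub(0.0, recip(x)), recip(x)))

def mul(x, y):         # -2xy = (x-y)^2 - x^2 - y^2, then divide by -2
    return div_neg2(sub(sub(square(sub(x, y)), square(x)), square(y)))

# Only op() touches the hardware ops; everything else is compositions of #.
print(sub(7.0, 3.0), add(2.0, 5.0), square(5.0), mul(3.0, 4.0))
```

The results match 4, 7, 25, and 12 up to ordinary floating-point noise.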
]]></description><pubDate>Thu, 16 Apr 2026 02:13:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47787856</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47787856</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47787856</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>It generates the same class of functions. Read the comments and links in this thread.</p>
]]></description><pubDate>Thu, 16 Apr 2026 01:40:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47787661</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47787661</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47787661</guid></item><item><title><![CDATA[New comment by SideQuark in "Not all elementary functions can be expressed with exp-minus-log"]]></title><description><![CDATA[
<p>> If you take a real analysis class, the elementary functions will be defined exactly as the author of the EML paper does.<p>I just looked through many of the best-known real analysis texts, and not a single one defines them this way. The list included the texts by Royden, Terence Tao, Rudin, Spivak, Bartle & Sherbert, Pugh, and a few others.<p>Can you cite a single textbook that has this definition you claim appears in every real analysis course? All the evidence I find points the opposite way.</p>
]]></description><pubDate>Wed, 15 Apr 2026 11:35:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47777691</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47777691</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47777691</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>Yep, I’ve written numerical methods papers, and am very well aware of the field.<p>A limit process is a definition. Try computing with it: you’ll end up with an infinite sequence, or an approximation.<p>An iterative process is an infinite series. They’re equivalent.<p>Newton’s method is the same: completely equivalent to an infinite series as you increase precision.<p>And both require infinitely precise constants. So you’re still not doing anything the 1/(x-y) operation cannot do, and to evaluate those series you’ll compute with exactly the sort of ops that are easy to do by hand or machine via the 1/(x-y) op.</p>
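As a concrete instance of that equivalence (my illustration): each Newton step for sqrt is a rational function of the previous iterate, so unrolling n steps gives a pure arithmetic expression, and the exact value appears only in the limit, i.e. as an infinite process.

```python
def newton_sqrt(a: float, iterations: int = 6) -> float:
    """Newton's method for x^2 = a: x_{n+1} = (x_n + a/x_n) / 2.
    Each step uses only +, /, so n steps is a finite tower of
    elementary ops; exactness appears only in the limit."""
    x = a  # any positive starting guess works
    for _ in range(iterations):
        x = (x + a / x) / 2.0
    return x

print(newton_sqrt(2.0))  # converges to 1.41421356...
```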
]]></description><pubDate>Tue, 14 Apr 2026 22:55:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47772500</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47772500</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47772500</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>His paper misses infinitely many such functions.</p>
]]></description><pubDate>Tue, 14 Apr 2026 22:34:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47772336</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47772336</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47772336</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>The world has had many types of logic before and after Boolean logic was created, many used in computing. Boolean logic isn’t a constraint; it’s used where it’s useful, and others are used where they’re useful.</p>
]]></description><pubDate>Tue, 14 Apr 2026 22:33:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47772333</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47772333</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47772333</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>No, it approximates exp, and poorly, over a vanishingly small interval. Resistors and capacitors are nowhere near ideal components, which is why their spec sheets show how quickly they diverge.<p>If we’re allowed sloppy approximations over a tiny range of exp, then I too can do it with a few series terms.</p>
]]></description><pubDate>Tue, 14 Apr 2026 22:30:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47772302</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47772302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47772302</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>It’s only finite because the infinite series has been folded into a named operation.<p>And the basic monomial basis is not a single binary operation capable of reproducing the set of basic arithmetic ops. If you want trivial and basic, pick the Peano postulates. But that’s not what this thread was about.</p>
]]></description><pubDate>Tue, 14 Apr 2026 07:35:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762480</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47762480</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762480</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>Any transcendental function can be produced by arithmetic, since arithmetic (together with limits) is complete for R.<p>Go ahead and show how to compute exp or ln without an infinite series and without circular reasoning. You can’t, since they’re transcendental.<p>There are infinitely many ways to make such binary operators. Picking extremely compute-costly ones really doesn’t make a good basis for computation.</p>
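E.g., the standard route to exp is its Taylor series, truncated to the working precision; a minimal sketch (mine, not from the thread):

```python
def exp_series(x: float, terms: int = 20) -> float:
    """exp(x) via its Taylor series: the sum of x^n / n!.
    Any finite prefix is a polynomial, and exp, being transcendental,
    equals no polynomial -- so a finite cutoff is always an approximation."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term          # add x^n / n!
        term *= x / (n + 1)    # next term: x^(n+1) / (n+1)!
    return total

print(exp_series(1.0))  # approaches e = 2.71828...
```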
]]></description><pubDate>Tue, 14 Apr 2026 07:33:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762452</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47762452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762452</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>Show me a way to physically compute exp or ln in fewer gates than an add. More gates means higher dollar cost, more energy per computation, and, for these functions, higher latency.<p>You don’t get to invent free ops, claim they have no cost, and hand-wave reality away.<p>There are infinitely many ways to do what the paper did. There’s no gain beyond being pretty: it loses on every practical front to simply using current ops and architectures.</p>
]]></description><pubDate>Tue, 14 Apr 2026 07:27:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762404</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47762404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762404</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>Ooh, that 2nd link has a nice construction by Terry Tao giving a clear way to show infinitely many such functions exist for pretty much any set of operations.</p>
]]></description><pubDate>Mon, 13 Apr 2026 22:13:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758597</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47758597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758597</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>Do you claim everything you don’t understand is LLM output? This is what I mean about this comment, and many of your others, being of extremely poor quality, to the point of deliberate ignorance.<p>The paper above was published in 2012 [1], so that would be quite a feat for an LLM. This takes about zero effort to check.<p>Put some thought or effort into your claims; they’ll look less silly.<p>[1] <a href="https://orcid.org/0000-0002-0438-633X" rel="nofollow">https://orcid.org/0000-0002-0438-633X</a></p>
]]></description><pubDate>Mon, 13 Apr 2026 22:09:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758544</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47758544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758544</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>Yes, it can, by using the same infinite series that exp and ln themselves need in order to be computed. This one just costs less in money, hardware, and energy, and is faster for basically every basic op.</p>
]]></description><pubDate>Mon, 13 Apr 2026 21:58:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758393</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47758393</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758393</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>The exp and ln are infinite series. Exp is roughly the infinite series for cos AND the infinite series for sin. Hiding that every op is an infinite series behind a name doesn’t make things free. It just makes even trivial ops like 1+2 vastly more work.</p>
]]></description><pubDate>Mon, 13 Apr 2026 21:56:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758374</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47758374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758374</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>And those come from the infinite series needed to compute exp and ln. They’re just as much work either way. The exp and ln way are vastly costlier for every op, including simply adding 1 and 2.</p>
]]></description><pubDate>Mon, 13 Apr 2026 21:55:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758358</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47758358</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758358</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>Computing exp or ln is an infinite series, and vastly more compute. Hiding series behind a name doesn’t make them free to compute.</p>
]]></description><pubDate>Mon, 13 Apr 2026 21:53:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758339</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47758339</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758339</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>It's math. You can check it yourself instead of writing this (and many other) thoughtless posts.</p>
]]></description><pubDate>Mon, 13 Apr 2026 19:25:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47756743</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47756743</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47756743</guid></item><item><title><![CDATA[New comment by SideQuark in "All elementary functions from a single binary operator"]]></title><description><![CDATA[
<p>This isn't unique, or even the least-compute way to do it. For example, let f(x,y) = 1/(x-y). This too is universal. I think there's a theorem stating that for any finite set of binary operators there is a single one that replaces it.<p>Write x#y for 1/(x-y).<p>x#0 = 1/(x-0) = 1/x, so you get reciprocals.
Then (x#y)#0 = 1/((1/(x-y)) - 0) = x-y, so subtraction.<p>It's a common exercise to show that in (insert various algebraic structures here) reciprocal and subtraction give all 4 elementary ops.<p>I haven't checked this carefully, but this note seems to give a short proof (modulo knowing some other items...) <a href="https://dmg.tuwien.ac.at/goldstern/www/papers/notes/singlebinary.pdf" rel="nofollow">https://dmg.tuwien.ac.at/goldstern/www/papers/notes/singlebi...</a></p>
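The two identities are quick to sanity-check; a tiny Python snippet (names mine):

```python
def op(x, y):
    """x # y = 1 / (x - y)."""
    return 1.0 / (x - y)

# x # 0 = 1/x: reciprocal.
print(op(4.0, 0.0))            # 0.25

# (x # y) # 0 = x - y: subtraction.
print(op(op(7.0, 3.0), 0.0))   # 4.0
```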
]]></description><pubDate>Mon, 13 Apr 2026 18:56:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47756403</link><dc:creator>SideQuark</dc:creator><comments>https://news.ycombinator.com/item?id=47756403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47756403</guid></item></channel></rss>