<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: fdej</title><link>https://news.ycombinator.com/user?id=fdej</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 07:17:50 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=fdej" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by fdej in "AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms"]]></title><description><![CDATA[
<p>If you don't want to allow division by 2, then there is Winograd's algorithm from 1967, which works over any commutative ring and uses 48 multiplications for 4 x 4 matrices.</p>
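<p>For reference, here is a minimal Python sketch of Winograd's 1967 inner-product scheme for the 4 x 4 case (my own transcription, not code from the paper): it needs commutativity but no division by 2, and counts exactly 48 scalar multiplications.</p>

```python
def winograd_4x4(A, B):
    """Multiply 4x4 matrices using 48 scalar multiplications (Winograd, 1967)."""
    mults = 0
    # Row precomputation: xi[i] = A[i][0]*A[i][1] + A[i][2]*A[i][3]  (2 mults per row)
    xi = []
    for i in range(4):
        xi.append(A[i][0] * A[i][1] + A[i][2] * A[i][3])
        mults += 2
    # Column precomputation: eta[j] = B[0][j]*B[1][j] + B[2][j]*B[3][j]  (2 mults per column)
    eta = []
    for j in range(4):
        eta.append(B[0][j] * B[1][j] + B[2][j] * B[3][j])
        mults += 2
    # Each of the 16 entries costs 2 more multiplications: 8 + 8 + 32 = 48 total.
    # Expanding the products and cancelling xi[i], eta[j] leaves sum_k A[i][k]*B[k][j];
    # the cancellation uses commutativity, which is why a commutative ring is required.
    C = [[0] * 4 for _ in range(4)]
    for i in range(4):
        for j in range(4):
            C[i][j] = ((A[i][0] + B[1][j]) * (A[i][1] + B[0][j])
                       + (A[i][2] + B[3][j]) * (A[i][3] + B[2][j])
                       - xi[i] - eta[j])
            mults += 2
    return C, mults
```

Note that the multiplication count beats Strassen-recursively (49) at the price of many more additions.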
]]></description><pubDate>Wed, 14 May 2025 22:36:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43989899</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=43989899</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43989899</guid></item><item><title><![CDATA[New comment by fdej in "AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms"]]></title><description><![CDATA[
<p>> From the paper, "Notably, for multiplying two 4 × 4 matrices, applying the algorithm of Strassen recursively results in an algorithm with 49 multiplications, which works over any field...AlphaEvolve is the first method to find an algorithm to multiply two 4 × 4 complex-valued matrices using 48 multiplications."<p>...but Waksman's algorithm from 1970 [1] multiplies two 4 x 4 complex-valued matrices using only 46 multiplications (indeed, it works in any ring admitting division by 2).<p>Sloppy by DeepMind and by Nature to publish such a claim - did they not ask someone knowledgeable about matrix multiplication to review the work?<p>[1] <a href="https://doi.org/10.1109/T-C.1970.222926" rel="nofollow">https://doi.org/10.1109/T-C.1970.222926</a></p>
]]></description><pubDate>Wed, 14 May 2025 21:03:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43989197</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=43989197</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43989197</guid></item><item><title><![CDATA[New comment by fdej in "Powers of 2 with all even digits"]]></title><description><![CDATA[
<p>Just check for the existence of at least one odd digit mod 10^B for some well-chosen B.<p>Here is a C program that does the verification up to 2^(10^10) in 30 seconds: <a href="https://gist.github.com/fredrik-johansson/8924e10e5d74e391094893932914c852" rel="nofollow">https://gist.github.com/fredrik-johansson/8924e10e5d74e39109...</a><p>Edit: made it multithreaded; it goes up to 2^(10^12) in nine minutes on 8 cores.</p>
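<p>The idea can be sketched in a few lines of Python (a slow illustration of the method, not the linked C program). The mod-10^B check only looks at the low B digits, which is a necessary condition, so it can produce false positives but never false negatives; any candidate it reports is then confirmed against the full value of 2^n.</p>

```python
def even_digit_powers(limit, B=25):
    """Return all n <= limit such that 2^n has only even decimal digits."""
    mod = 10 ** B
    results = []
    r = 1
    for n in range(1, limit + 1):
        r = (r * 2) % mod
        # Low B digits all even? (Leading zeros of the residue are even anyway.)
        if all(int(d) % 2 == 0 for d in str(r)):
            # Rare: confirm the candidate with the full power.
            if all(int(d) % 2 == 0 for d in str(2 ** n)):
                results.append(n)
    return results

print(even_digit_powers(5000))  # [1, 2, 3, 6, 11], i.e. 2, 4, 8, 64, 2048
```

The C version works the same way but does the modular doubling on a fixed-size array of words.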
]]></description><pubDate>Thu, 20 Mar 2025 21:31:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=43429212</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=43429212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43429212</guid></item><item><title><![CDATA[New comment by fdej in "Author-paid publication fees corrupt science and should be abandoned"]]></title><description><![CDATA[
<p>There's no a priori reason why the expected success rate of research projects should be 50% and not, say, 1% or 99%.</p>
]]></description><pubDate>Sat, 24 Aug 2024 07:54:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=41336433</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=41336433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41336433</guid></item><item><title><![CDATA[New comment by fdej in "What is gained and lost with 63-bit integers? (2014)"]]></title><description><![CDATA[
<p>That is the way the FLINT fmpz type does it. The benefit is that you can check for pointers once and then use the operands like ordinary ints without the +1/-1 adjustments. For example, to multiply two integer matrices, you can do O(n^2) pointer checks and then do O(n^3) arithmetic operations without overhead. But if the pointer check is done on each operation, then the representation doesn't make much difference: whether you use the LSB or MSB as a tag, what kills performance is checking and branching on a tag bit in the first place. What you'd want is a sufficiently smart compiler (TM) that can eliminate as many such checks as possible.</p>
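<p>To make the hoisting idea concrete, here is a toy Python model (my own illustration, not FLINT code) of an LSB-tagged word: tags are inspected once per matrix up front, after which the inner loops run on untagged values with no per-operation branch.</p>

```python
TAG = 1  # LSB set => "pointer" (here: an index into a heap list); LSB clear => inline value

def small(v):        return v << 1                       # encode an inline small integer
def big(heap, v):    heap.append(v); return ((len(heap) - 1) << 1) | TAG
def is_small(x):     return (x & TAG) == 0
def decode(x, heap): return x >> 1 if is_small(x) else heap[x >> 1]

def matmul(A, B, heap):
    n = len(A)
    # O(n^2) tag checks, done once up front...
    if all(is_small(x) for row in A for x in row) and \
       all(is_small(x) for row in B for x in row):
        a = [[x >> 1 for x in row] for row in A]   # strip tags
        b = [[x >> 1 for x in row] for row in B]
        # ...then O(n^3) arithmetic with no tag branch in the hot loop
        return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    # Fallback: decode (and branch) on every single operation
    return [[sum(decode(A[i][k], heap) * decode(B[k][j], heap) for k in range(n))
             for j in range(n)] for i in range(n)]
```

The real fmpz does this in C with machine words and GMP limb arrays on the heap, but the control-flow shape is the same.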
]]></description><pubDate>Sat, 12 Aug 2023 09:20:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=37098437</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=37098437</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37098437</guid></item><item><title><![CDATA[New comment by fdej in "Large Multiplication"]]></title><description><![CDATA[
<p>This is indeed the issue. Using provable bounds loses too many bits for complex FFTs to make sense for long multiplies.</p>
]]></description><pubDate>Fri, 14 Jul 2023 09:47:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=36721700</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=36721700</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36721700</guid></item><item><title><![CDATA[New comment by fdej in "Beyond automatic differentiation"]]></title><description><![CDATA[
<p>This looks quite similar to the "Lagrange models" defined by Joris van der Hoeven in <a href="https://hal.science/hal-01188378/document" rel="nofollow">https://hal.science/hal-01188378/document</a>, a version of Taylor models where the constant error term is replaced by an interval multiple of x^n.<p>Representing functions locally is a very interesting problem: there are lots of possible approaches using intervals, Taylor series, Chebyshev series, etc. Big open design space if you ask me.</p>
]]></description><pubDate>Fri, 14 Apr 2023 17:30:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=35572199</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=35572199</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35572199</guid></item><item><title><![CDATA[New comment by fdej in "Now I’m calculating with constructive reals"]]></title><description><![CDATA[
<p>There is an algorithm by Richardson to prove equality of real and complex numbers composed from exponentials and logarithms. (It doesn't have a complete proof of correctness, but it never returns a wrong result: it will loop forever iff it encounters a counterexample to Schanuel's conjecture.)<p>It can also be extended to higher transcendental functions, but it gets harder to guarantee termination.<p>I have a library for exact reals/complexes with an incomplete implementation of Richardson's algorithm: <a href="https://fredrikj.net/calcium/" rel="nofollow">https://fredrikj.net/calcium/</a></p>
]]></description><pubDate>Wed, 27 Apr 2022 08:45:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=31177725</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=31177725</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31177725</guid></item><item><title><![CDATA[New comment by fdej in "Now I’m calculating with constructive reals"]]></title><description><![CDATA[
<p>No, this is wrong. Richardson's theorem is about functions, not constants. Equality of constants constructed from exponentials and logarithms is decidable (assuming Schanuel's conjecture) by another theorem (and algorithm!) of Richardson.</p>
]]></description><pubDate>Wed, 27 Apr 2022 08:34:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=31177669</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=31177669</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31177669</guid></item><item><title><![CDATA[New comment by fdej in "Things I would like to see in a computer algebra system"]]></title><description><![CDATA[
<p>Author here. I'm only talking about using formalizable and mathematically consistent type definitions in a CAS, not requiring formal proofs in the implementations of types.<p>For example, if you want to prove a+b+c+d=d+b+c+a, you will define a, b, c, d as elements of a type R implementing the "additive abelian group" interface (or an extension of this interface, like "ring"); R = integers, for example. The CAS can then use builtin techniques for abelian additive groups to prove the property efficiently. It will not have to prove those properties from first principles.<p>In the distant future there could perhaps be some kind of convergence between CASes and proof assistants to completely rule out implementation errors, but this would require breakthroughs in proof automation for the reason you mentioned.</p>
]]></description><pubDate>Thu, 21 Apr 2022 07:57:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=31106989</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=31106989</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31106989</guid></item><item><title><![CDATA[New comment by fdej in "A surprisingly hard CS problem: sums of square roots (2018)"]]></title><description><![CDATA[
<p>Correcting myself, the bound is worse than exponential (so read "at least exponential in N"), but the point I wanted to make is that it is explicit.<p>Again, this follows from the general theory of algebraic numbers: the degree and height of a sum, product or root of algebraic numbers can be bounded explicitly (resultants + Mignotte bound for factors), and finally root separation bounds can be applied to the resulting polynomial.</p>
]]></description><pubDate>Tue, 25 Jan 2022 11:26:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=30070633</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=30070633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30070633</guid></item><item><title><![CDATA[New comment by fdej in "A surprisingly hard CS problem: sums of square roots (2018)"]]></title><description><![CDATA[
<p>Yes, the worst-case complexity is exponential in N, but the wording in the article could lead you to believe that no explicit exponential bound is known, which is false.</p>
]]></description><pubDate>Mon, 24 Jan 2022 17:22:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=30060751</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=30060751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30060751</guid></item><item><title><![CDATA[New comment by fdej in "A surprisingly hard CS problem: sums of square roots (2018)"]]></title><description><![CDATA[
<p>> But how long will we need to look through these sequences of digits before we find the disagreeing digit? It feels intuitively like we should be able to establish some kind of bound on this. Like, maybe we should be able to say “if you add two lists of n numbers, each of which has d digits, then they can’t disagree for more than k * n * d digits” for some k. But no-one’s been able to prove anything like this.<p>You can write down a completely explicit bound here using the Mahler-Mignotte root separation bound. More generally, for any algebraic expression involving algebraic numbers, you can bound a priori the number of digits you need to check to determine its sign.<p>When you involve transcendental numbers, things do get much harder though.</p>
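<p>For the record, the bound I have in mind is Mahler's 1964 root separation bound (stated here from memory; check the original for the exact constant): for a squarefree integer polynomial $p$ of degree $d$,</p>

```latex
\operatorname{sep}(p) \;=\; \min_{\substack{\alpha \neq \beta \\ p(\alpha)=p(\beta)=0}} |\alpha - \beta|
\;>\; \sqrt{3}\, d^{-(d+2)/2}\, \lVert p \rVert_2^{-(d-1)}
```

<p>Since the degree and height of the minimal polynomial of a sum of square roots can themselves be bounded explicitly, this turns into an a priori bound on the number of digits needed to decide the sign.</p>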
]]></description><pubDate>Mon, 24 Jan 2022 14:32:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=30058017</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=30058017</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30058017</guid></item><item><title><![CDATA[New comment by fdej in "Matrix Multiplication Inches Closer To Mythic Goal"]]></title><description><![CDATA[
<p>Yes, indeed, and Strassen is a bad default algorithm because of this. There are specialized situations where the numerical stability isn't an issue though.</p>
]]></description><pubDate>Sat, 18 Dec 2021 12:52:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=29604124</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=29604124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29604124</guid></item><item><title><![CDATA[New comment by fdej in "Matrix Multiplication Inches Closer To Mythic Goal"]]></title><description><![CDATA[
<p>What makes you say that? I've had good speedups with Strassen multiplication in variable precision (or with floating-point, in case you meant fixed-point arithmetic).</p>
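<p>For context, one level of Strassen on a 2 x 2 block split trades 8 multiplications for 7 plus extra additions; a scalar Python sketch with the standard textbook formulas (nothing specific to variable precision). The extra additions and subtractions are exactly what amplifies rounding error in the floating-point case.</p>

```python
def strassen_2x2(A, B):
    """One level of Strassen: 7 multiplications for a 2x2 (block) product."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))
```

Applied recursively to blocks, this is where the speedup for variable-precision entries comes from: one fewer expensive block multiply per level, at the cost of cheap block additions.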
]]></description><pubDate>Sat, 18 Dec 2021 08:05:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=29602715</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=29602715</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29602715</guid></item><item><title><![CDATA[New comment by fdej in "Launching Version 13.0 of Wolfram Language and Mathematica"]]></title><description><![CDATA[
<p>I'm the poster. I think you misunderstood the followup post -- the same library is definitely used in the Cloud (same as the standalone Mathematica).</p>
]]></description><pubDate>Thu, 16 Dec 2021 17:03:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=29580491</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=29580491</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29580491</guid></item><item><title><![CDATA[New comment by fdej in "Launching Version 13.0 of Wolfram Language and Mathematica"]]></title><description><![CDATA[
<p>Indeed. There's no way for users of Wolfram Cloud, for example, to see that information though.</p>
]]></description><pubDate>Thu, 16 Dec 2021 16:51:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=29580316</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=29580316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29580316</guid></item><item><title><![CDATA[New comment by fdej in "Faster CPython Ideas – Issue Tracker for Faster CPython Project"]]></title><description><![CDATA[
<p>My number one wish would be the ability to read definitions from .h files automatically. Other than that, I've had some memory-management issues with ctypes (objects being deallocated prematurely) that I never had with Cython, but that may just be my own fault. I agree about the lack of interop with Python ints.</p>
]]></description><pubDate>Fri, 14 May 2021 09:22:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=27152380</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=27152380</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27152380</guid></item><item><title><![CDATA[New comment by fdej in "Faster CPython Ideas – Issue Tracker for Faster CPython Project"]]></title><description><![CDATA[
<p>I'd like to see optimizations targeting ctypes, or a successor to ctypes. I wish I could write elegant, performant C wrappers in pure Python. Right now the best choices are Cython, which is a hassle (separate, slow compilation, various warts), and ctypes, which is slow (and has some design problems of its own). Julia's ccall alone is a major selling point over Python right now.</p>
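<p>As a baseline for comparison, calling a C function through ctypes today looks something like this (libm on a Unix-like system; every call goes through a generic, relatively slow dispatch layer):</p>

```python
import ctypes
import ctypes.util

# Load the C math library; on some platforms libm is folded into libc,
# in which case CDLL(None) exposes its symbols anyway.
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)

# Declare the C signature: double cbrt(double)
libm.cbrt.restype = ctypes.c_double
libm.cbrt.argtypes = [ctypes.c_double]

print(libm.cbrt(27.0))  # 3.0
```

The Julia equivalent, if I remember the syntax right, is a one-liner like <code>ccall(:cbrt, Cdouble, (Cdouble,), 27.0)</code>, and it compiles down to a direct C call rather than going through an interpreter-level foreign-function layer.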
]]></description><pubDate>Thu, 13 May 2021 12:27:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=27141317</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=27141317</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27141317</guid></item><item><title><![CDATA[New comment by fdej in "Using Computer Modern on the web (2013)"]]></title><description><![CDATA[
<p>This version of CM looks too thin. The version that comes with KaTeX (<a href="https://github.com/KaTeX/katex-fonts" rel="nofollow">https://github.com/KaTeX/katex-fonts</a>) looks great in the browser though. Would be nice if someone packaged that font in a more user-friendly way.</p>
]]></description><pubDate>Wed, 12 May 2021 13:30:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=27130091</link><dc:creator>fdej</dc:creator><comments>https://news.ycombinator.com/item?id=27130091</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27130091</guid></item></channel></rss>