<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: thomasahle</title><link>https://news.ycombinator.com/user?id=thomasahle</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 04 Apr 2026 01:11:06 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=thomasahle" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by thomasahle in "OpenAI closes funding round at an $852B valuation"]]></title><description><![CDATA[
<p>I don't know anyone using these models every day who thinks they are hitting a ceiling.<p>If anything, there's a plateau between each model release.</p>
]]></description><pubDate>Tue, 31 Mar 2026 21:14:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47593555</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=47593555</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47593555</guid></item><item><title><![CDATA[New comment by thomasahle in "OpenAI closes funding round at an $852B valuation"]]></title><description><![CDATA[
<p>It's hard to train models in the open. All the big players use lots of "dodgy" training data: books, video, code, destinations. If you did that in the open, the lawyers would shut you down.</p>
]]></description><pubDate>Tue, 31 Mar 2026 21:10:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47593510</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=47593510</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47593510</guid></item><item><title><![CDATA[New comment by thomasahle in "Even Faster Asin() Was Staring Right at Me"]]></title><description><![CDATA[
<p>Sorry, I said that wrong. Estrin's method doesn't reduce the number of multiplications.</p>
]]></description><pubDate>Mon, 16 Mar 2026 15:09:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47400041</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=47400041</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47400041</guid></item><item><title><![CDATA[New comment by thomasahle in "Even faster asin() was staring right at me"]]></title><description><![CDATA[
<p>Did you try polynomial preprocessing methods, like Knuth's and Estrin's?
<a href="https://en.wikipedia.org/wiki/Polynomial_evaluation#Evaluation_with_preprocessing" rel="nofollow">https://en.wikipedia.org/wiki/Polynomial_evaluation#Evaluati...</a>
They let you compute polynomials with half the multiplications of Horner's method; I used them in the past to speed up the exponential function in Boost.</p>
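For concreteness, here is a minimal Python sketch of Estrin's scheme for a degree-7 polynomial next to Horner's method. Note that Estrin's keeps Horner's multiply count and instead shortens the dependency chain (it is Knuth-style preconditioning that cuts multiplications); this is just an illustration of the evaluation pattern, not the Boost implementation.

```python
def horner(coeffs, x):
    # coeffs[i] is the coefficient of x**i; one multiply-add per coefficient,
    # but each step depends on the previous one (a serial chain)
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def estrin8(c, x):
    # Estrin's scheme for 8 coefficients: the four pairs are independent,
    # so they can be evaluated in parallel, then combined with x^2 and x^4
    x2 = x * x
    x4 = x2 * x2
    p01 = c[0] + c[1] * x
    p23 = c[2] + c[3] * x
    p45 = c[4] + c[5] * x
    p67 = c[6] + c[7] * x
    return (p01 + p23 * x2) + (p45 + p67 * x2) * x4
```

Both evaluate the same polynomial; the win on modern hardware comes from instruction-level parallelism in the independent pairs.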
]]></description><pubDate>Mon, 16 Mar 2026 13:49:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47399013</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=47399013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47399013</guid></item><item><title><![CDATA[New comment by thomasahle in "Qwen3.5: Towards Native Multimodal Agents"]]></title><description><![CDATA[
<p>We scaled on "virtually all RL tasks and environments we could conceive." Apparently, they didn't conceive of pelican SVG RL.<p>I've long thought multi-modal LLMs should be strong enough to do RL on TikZ and SVG generation. Maybe Google is doing it.</p>
]]></description><pubDate>Mon, 16 Feb 2026 23:09:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47041597</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=47041597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47041597</guid></item><item><title><![CDATA[New comment by thomasahle in "Magnus Carlsen Wins the Freestyle (Chess960) World Championship"]]></title><description><![CDATA[
<p>To encourage female participation and representation. Most people think it would be good for chess long-term to have a larger female player base.</p>
]]></description><pubDate>Mon, 16 Feb 2026 13:41:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47034810</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=47034810</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47034810</guid></item><item><title><![CDATA[New comment by thomasahle in "Magnus Carlsen Wins the Freestyle (Chess960) World Championship"]]></title><description><![CDATA[
<p>Most players actually peak in strength around age 35 [1].<p>But Carlsen has been number one for more time than any player before him, save Kasparov [2]:<p>- Kasparov: 255 months at number one<p>- Carlsen: 188<p>- Karpov: 102<p>- Fischer: 54<p>Bonus nuance: Carlsen has the longest unbroken run, 174 consecutive rating lists at number one.<p>[1]: <a href="https://en.chessbase.com/post/the-age-related-decline-in-chess#:~:text=Figure%201.%20Asymmetric%20cubic%20curves" rel="nofollow">https://en.chessbase.com/post/the-age-related-decline-in-che...</a><p>[2]: <a href="https://en.wikipedia.org/wiki/List_of_FIDE_chess_world_number_ones#:~:text=since%20July%202011.-,Time%20at%20FIDE%20number%20one%20and%20youngest%20age%20at%20FIDE%20number%20one,-Player" rel="nofollow">https://en.wikipedia.org/wiki/List_of_FIDE_chess_world_numbe...</a></p>
]]></description><pubDate>Mon, 16 Feb 2026 13:31:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47034732</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=47034732</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47034732</guid></item><item><title><![CDATA[New comment by thomasahle in "The Feynman Lectures on Physics (1961-1964)"]]></title><description><![CDATA[
<p>On the topic of lecture notes, I can really recommend Scott Aaronson's Quantum Information lecture notes: <a href="https://www.scottaaronson.com/qclec.pdf" rel="nofollow">https://www.scottaaronson.com/qclec.pdf</a></p>
]]></description><pubDate>Wed, 11 Feb 2026 09:10:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46972668</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46972668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46972668</guid></item><item><title><![CDATA[New comment by thomasahle in "Thoughts on Generating C"]]></title><description><![CDATA[
<p>Maybe they just checked with a compiler and got the same code?</p>
]]></description><pubDate>Mon, 09 Feb 2026 21:13:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46951433</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46951433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46951433</guid></item><item><title><![CDATA[New comment by thomasahle in "Experts Have World Models. LLMs Have Word Models"]]></title><description><![CDATA[
<p>> This matters because (1) the world cannot be modeled anywhere close to completely with language alone<p>LLMs being "Language Models" means they model language, it doesn't mean they "model the world with language".<p>On the contrary, modeling language <i>requires you to also model the world</i>, but that's in the hidden state, and <i>not using language</i>.</p>
]]></description><pubDate>Sun, 08 Feb 2026 21:26:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46938661</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46938661</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46938661</guid></item><item><title><![CDATA[New comment by thomasahle in "Attention at Constant Cost per Token via Symmetry-Aware Taylor Approximation"]]></title><description><![CDATA[
<p>There's a graveyard of hundreds of papers on "approximate near-linear-time attention."<p>They always hope the speed increase makes up for the lower quality, but it never does. The quadratic time seems inherent to the problem.<p>Indeed, there are lower bounds showing that sub-n^2 algorithms can't work: <a href="https://arxiv.org/pdf/2302.13214" rel="nofollow">https://arxiv.org/pdf/2302.13214</a></p>
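For anyone wondering where the n^2 comes from: exact attention forms an n-by-n score matrix over the sequence, so time and memory grow quadratically in sequence length. A minimal NumPy sketch of plain softmax attention (not any particular paper's approximation):

```python
import numpy as np

def attention(Q, K, V):
    # Q, K, V have shape (n, d); the scores matrix is (n, n),
    # which is the quadratic bottleneck the linear-attention papers attack
    scores = Q @ K.T / np.sqrt(Q.shape[-1])            # (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # (n, d)
```

Doubling n quadruples the size of `scores`, which is exactly the cost the lower bound says you cannot avoid without losing exactness.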
]]></description><pubDate>Wed, 04 Feb 2026 15:33:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46887069</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46887069</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46887069</guid></item><item><title><![CDATA[New comment by thomasahle in "Prism"]]></title><description><![CDATA[
<p>I tried Prism, but it's actually a lot more work than just using Claude Code. The latter lets you "vibe code" your paper with no manual interaction, while Prism requires you to review every change.<p>That said, I think Prism promotes a much more responsible approach to AI writing than copying from ChatGPT or the like.</p>
]]></description><pubDate>Wed, 28 Jan 2026 10:32:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46793564</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46793564</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46793564</guid></item><item><title><![CDATA[New comment by thomasahle in "GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers"]]></title><description><![CDATA[
<p>> And also plagiarism, when you claim authorship of it.<p>I don't actually mind putting Claude as a co-author on my github commits.<p>But for papers there are usually so many tools involved. It would be crowded to include each of Claude, Gemini, Codex, Mathematica, Grammarly, Translate etc. as co-authors, even though I used all of them for some parts.<p>Maybe just having a "tools used" section could work?</p>
]]></description><pubDate>Fri, 23 Jan 2026 06:56:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46729340</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46729340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46729340</guid></item><item><title><![CDATA[New comment by thomasahle in "Claude Chill: Fix Claude Code's flickering in terminal"]]></title><description><![CDATA[
<p>Textual is cool, but it's maintained by a single guy, and the roadmap hasn't been updated since 2023: <a href="https://textual.textualize.io/roadmap/" rel="nofollow">https://textual.textualize.io/roadmap/</a></p>
]]></description><pubDate>Thu, 22 Jan 2026 03:42:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46715024</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46715024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46715024</guid></item><item><title><![CDATA[New comment by thomasahle in "Claude Chill: Fix Claude Code's flickering in terminal"]]></title><description><![CDATA[
<p>I'm always surprised that Python doesn't have TUI libraries as good as JavaScript's or Rust's. With the amount of CLI tooling written in Python, you'd think it would have better libraries than any other language.</p>
]]></description><pubDate>Wed, 21 Jan 2026 02:00:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46700290</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46700290</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46700290</guid></item><item><title><![CDATA[New comment by thomasahle in "Are arrays functions?"]]></title><description><![CDATA[
<p>What about replacing<p>> Haskell provides indexable arrays, which may be thought of as functions whose domains are isomorphic to contiguous subsets of the integers.<p>with<p>> Haskell provides indexable arrays, which are functions on the domain [0, ..., k-1]?<p>Or is the domain actually anything "isomorphic to contiguous subsets of the integers"?</p>
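A rough Python analogue of why the docs hedge: Haskell's `listArray (l, u) xs` builds an array whose domain is `[l..u]` for arbitrary integer bounds, not just `[0..k-1]`. Sketched as a function over a contiguous integer range (`list_array` here is my made-up Python name, mirroring the real Haskell `Data.Array.listArray`):

```python
def list_array(bounds, xs):
    # a total function on the contiguous integer domain l..u,
    # analogous to Haskell's listArray (l, u) xs
    l, u = bounds
    assert len(xs) == u - l + 1, "bounds must match the number of elements"
    table = dict(zip(range(l, u + 1), xs))
    return lambda i: table[i]

a = list_array((5, 7), [10, 20, 30])
# a(5) == 10 and a(7) == 30; indices outside 5..7 are outside the domain
```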
]]></description><pubDate>Wed, 21 Jan 2026 01:52:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46700237</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46700237</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46700237</guid></item><item><title><![CDATA[New comment by thomasahle in "Nanolang: A tiny experimental language designed to be targeted by coding LLMs"]]></title><description><![CDATA[
<p>I'd rather see a programming language optimized for "few tokens". Something like toon, but for code.</p>
]]></description><pubDate>Tue, 20 Jan 2026 05:19:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46688188</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46688188</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46688188</guid></item><item><title><![CDATA[New comment by thomasahle in "“Erdos problem #728 was solved more or less autonomously by AI”"]]></title><description><![CDATA[
<p>That's amazing :D</p>
]]></description><pubDate>Sat, 10 Jan 2026 00:55:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46561515</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46561515</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46561515</guid></item><item><title><![CDATA[New comment by thomasahle in "“Erdos problem #728 was solved more or less autonomously by AI”"]]></title><description><![CDATA[
<p>Sure, but then 50% reliability just becomes a matter of whether you can make a strong enough verifier.</p>
]]></description><pubDate>Sat, 10 Jan 2026 00:55:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46561510</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46561510</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46561510</guid></item><item><title><![CDATA[New comment by thomasahle in "“Erdos problem #728 was solved more or less autonomously by AI”"]]></title><description><![CDATA[
<p>It took Andrew Wiles 7 years of intense work to prove Fermat's Last Theorem.<p>METR predicts that the length of tasks AI agents can complete doubles every 7 months.<p>Extrapolating, we should expect it to take until around 2033 before AI solves Clay Institute-level problems with 50% reliability.</p>
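The back-of-envelope behind that date, as a sketch. The ~15-hour current task horizon is my assumption for illustration, not METR's published figure; the 7-month doubling time is from the comment above.

```python
import math

current_horizon_h = 15            # assumed current agent task horizon, in hours (illustrative)
target_h = 7 * 365 * 24           # Wiles's ~7 calendar years, in hours
doubling_months = 7               # METR's reported doubling time

doublings = math.log2(target_h / current_horizon_h)
months = doublings * doubling_months
# roughly 84 months of doublings, i.e. about 7 years from now
```

With those numbers, ~12 doublings (a factor of ~4096) are needed, landing in the early 2030s; a different baseline horizon shifts the date by only a few years, since the growth is exponential.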
]]></description><pubDate>Sat, 10 Jan 2026 00:19:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46561244</link><dc:creator>thomasahle</dc:creator><comments>https://news.ycombinator.com/item?id=46561244</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46561244</guid></item></channel></rss>