<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: bmc7505</title><link>https://news.ycombinator.com/user?id=bmc7505</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 21:04:58 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=bmc7505" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by bmc7505 in "Expanding Swift's IDE Support"]]></title><description><![CDATA[
<p>I've recently been using this plugin [1], which is still under development but is an adequate stopgap until a better solution comes along.<p>[1]: <a href="https://plugins.jetbrains.com/plugin/22150-noctule-the-swift-ide" rel="nofollow">https://plugins.jetbrains.com/plugin/22150-noctule-the-swift...</a></p>
]]></description><pubDate>Wed, 08 Apr 2026 23:30:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47697517</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=47697517</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47697517</guid></item><item><title><![CDATA[Tetris Is Hard with Just One Piece Type]]></title><description><![CDATA[
<p>Article URL: <a href="https://arxiv.org/abs/2603.09958">https://arxiv.org/abs/2603.09958</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47346201">https://news.ycombinator.com/item?id=47346201</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 12 Mar 2026 03:45:54 +0000</pubDate><link>https://arxiv.org/abs/2603.09958</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=47346201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47346201</guid></item><item><title><![CDATA[New comment by bmc7505 in "A CPU that runs entirely on GPU"]]></title><description><![CDATA[
<p>As foretold six years ago. [1]<p>[1]: <a href="https://breandan.net/2020/06/30/graph-computation#roadmap" rel="nofollow">https://breandan.net/2020/06/30/graph-computation#roadmap</a></p>
]]></description><pubDate>Wed, 04 Mar 2026 06:29:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47243888</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=47243888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47243888</guid></item><item><title><![CDATA[Yet another catalogue of fast matrix multiplication algorithms]]></title><description><![CDATA[
<p>Article URL: <a href="https://fmm.univ-lille.fr/">https://fmm.univ-lille.fr/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47195211">https://news.ycombinator.com/item?id=47195211</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 28 Feb 2026 13:36:02 +0000</pubDate><link>https://fmm.univ-lille.fr/</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=47195211</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47195211</guid></item><item><title><![CDATA[New comment by bmc7505 in "Smallest transformer that can add two 10-digit numbers"]]></title><description><![CDATA[
<p>Fast matrix multiplication would be a more useful benchmark: <a href="https://fmm.univ-lille.fr/" rel="nofollow">https://fmm.univ-lille.fr/</a></p>
]]></description><pubDate>Sat, 28 Feb 2026 13:35:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47195207</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=47195207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47195207</guid></item><item><title><![CDATA[New comment by bmc7505 in "The path to ubiquitous AI (17k tokens/sec)"]]></title><description><![CDATA[
<p>17k TPS is slow compared to other probabilistic models. It was possible to hit ~10-20 million TPS decades ago with n-gram and PDFA models, without custom silicon. A more informative KPI would be Pass@k on a downstream reasoning task - for many such benchmarks, increasing token throughput by several orders of magnitude does not even move the needle on sample efficiency.</p>
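The Pass@k metric suggested above has a standard unbiased estimator (1 - C(n-c, k)/C(n, k), from the literature on code-generation benchmarks); a minimal sketch in Python, with an illustrative function name:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n generations, c of which
    are correct, passes."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples exist, so success is certain
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 2 generations, 1 correct, drawing 1 sample: Pass@1 = 0.5
print(pass_at_k(2, 1, 1))
```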
]]></description><pubDate>Fri, 20 Feb 2026 17:40:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47091143</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=47091143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47091143</guid></item><item><title><![CDATA[Show HN: TidyPython – Real-time syntax repair for Python]]></title><description><![CDATA[
<p>Article URL: <a href="https://tidyparse.github.io/python.html">https://tidyparse.github.io/python.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46712917">https://news.ycombinator.com/item?id=46712917</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 21 Jan 2026 23:03:13 +0000</pubDate><link>https://tidyparse.github.io/python.html</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=46712917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46712917</guid></item><item><title><![CDATA[New comment by bmc7505 in "“Stop Designing Languages. Write Libraries Instead” (2016)"]]></title><description><![CDATA[
<p>There are a few approaches if you want to write a new language. One, as the author argues, is to write a library in an existing language, which may require sacrificing ergonomics to fit inside the syntax of the host language, but is safe, modular, and reusable.<p>Many DSLs can be bolted onto an existing language with support for compiler extensions. This approach offers more flexibility, but often leads to fragmentation and poor interoperability in the language ecosystem.<p>There is a third approach, established by a group in Minnesota [1], which is to design languages and tools that are modular and extensible from the get-go, so that extensions are more interoperable. They do research on how to make this work using attribute grammars.<p>If the host language has a sufficiently expressive type system, you can often get away with writing a fluent API [2] or type-safe embedded DSL. But designing languages and type systems with good support for meta-programming is also an active area of research. [3, 4]<p>If none of these options work, the last resort is to start from a tabula rasa and write your own parser, compiler, and developer tools. This offers the most flexibility, but requires an enormous amount of engineering, and generally is not recommended in 2026.<p>[1]: <a href="https://melt.cs.umn.edu" rel="nofollow">https://melt.cs.umn.edu</a><p>[2]: <a href="https://arxiv.org/pdf/2211.01473" rel="nofollow">https://arxiv.org/pdf/2211.01473</a><p>[3]: <a href="https://www.cs.tsukuba.ac.jp/~kam/papers/aplas2016.pdf" rel="nofollow">https://www.cs.tsukuba.ac.jp/~kam/papers/aplas2016.pdf</a><p>[4]: <a href="https://arxiv.org/pdf/2404.17065" rel="nofollow">https://arxiv.org/pdf/2404.17065</a></p>
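To illustrate the fluent-API route, here is a minimal sketch in Python (the class and method names are hypothetical, not from any cited work): each method returns self, so chained calls read like a small query language embedded in the host language with no new syntax.

```python
class Query:
    """Toy fluent API: a query builder embedded in the host language."""

    def __init__(self):
        self.parts = []

    def select(self, *cols):
        self.parts.append("SELECT " + ", ".join(cols))
        return self  # returning self enables method chaining

    def frm(self, table):
        self.parts.append("FROM " + table)
        return self

    def where(self, cond):
        self.parts.append("WHERE " + cond)
        return self

    def build(self) -> str:
        return " ".join(self.parts)

# Chained calls form a sentence-like embedded DSL:
sql = Query().select("name", "age").frm("users").where("age > 21").build()
```

A statically typed host could go further, using the type system to reject ill-formed chains (e.g., where before frm) at compile time, which is the appeal of the type-safe embedded DSLs mentioned above.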
]]></description><pubDate>Wed, 07 Jan 2026 17:57:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46529854</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=46529854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46529854</guid></item><item><title><![CDATA[New comment by bmc7505 in "Fabrice Bellard Releases MicroQuickJS"]]></title><description><![CDATA[
<p>Interesting. I wonder if mqjs would make it feasible to massively parallelize JavaScript on the GPU. I’m looking for a way to run thousands of simultaneous JS interpreters, each with an isolated heap and some shared memory. There are some research projects [1, 2] in this direction, but they are fairly experimental.<p>[1]: <a href="https://github.com/SamGinzburg/VectorVisor" rel="nofollow">https://github.com/SamGinzburg/VectorVisor</a><p>[2]: <a href="https://github.com/beehive-lab/ProtonVM" rel="nofollow">https://github.com/beehive-lab/ProtonVM</a></p>
]]></description><pubDate>Wed, 24 Dec 2025 00:37:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46371187</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=46371187</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46371187</guid></item><item><title><![CDATA[Introduction to Linear Types]]></title><description><![CDATA[
<p>Article URL: <a href="https://austral-lang.org/linear-types">https://austral-lang.org/linear-types</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45151970">https://news.ycombinator.com/item?id=45151970</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 06 Sep 2025 19:01:56 +0000</pubDate><link>https://austral-lang.org/linear-types</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=45151970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45151970</guid></item><item><title><![CDATA[New comment by bmc7505 in "Does the Bitter Lesson Have Limits?"]]></title><description><![CDATA[
<p>You could argue that since automatic differentiation and symbolic differentiation are equivalent, [1] symbolic AI did succeed by becoming massively parallelizable; we just needed to scale up the data and hardware in kind.<p>[1]: <a href="https://arxiv.org/pdf/1904.02990" rel="nofollow">https://arxiv.org/pdf/1904.02990</a></p>
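One concrete way to see the equivalence claimed in [1]: forward-mode AD over dual numbers computes, value by value, the same derivative that symbolic differentiation derives as a formula. A small self-contained sketch (not from the cited paper):

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Dual number val + der*eps with eps^2 = 0; carries the derivative
    alongside the value (forward-mode automatic differentiation)."""
    val: float
    der: float

    def __add__(self, o):
        return Dual(self.val + o.val, self.der + o.der)

    def __mul__(self, o):  # product rule falls out of eps^2 = 0
        return Dual(self.val * o.val, self.val * o.der + self.der * o.val)

def f(x):
    return x * x * x + x  # f(x) = x^3 + x

x = Dual(2.0, 1.0)          # seed dx/dx = 1 at x = 2
auto = f(x).der             # AD: derivative computed numerically
symbolic = 3 * 2.0**2 + 1   # symbolic: d/dx (x^3 + x) = 3x^2 + 1, evaluated by hand

print(auto, symbolic)  # both 13.0
```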
]]></description><pubDate>Sat, 02 Aug 2025 11:48:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44766849</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=44766849</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44766849</guid></item><item><title><![CDATA[New comment by bmc7505 in "Does the Bitter Lesson Have Limits?"]]></title><description><![CDATA[
<p>> The solvers participating in this track will be executed with a wall-clock time limit of 1000 seconds. Each solver will be run on a single AWS machine of the type m6i.16xlarge, which has 64 virtual cores and 256GB of memory.<p>For comparison, an H100 has 14,592 CUDA cores, with GPU clusters measured in the exaflops. The scaling exponents are clearly favorable for LLM training and inference, but whether the same algorithms used for parallel SAT would benefit from compute scaling is unclear. I maintain that either (1) SAT researchers have not yet learned the bitter lesson, or (2) it is not applicable across all of AI as Sutton claims.</p>
]]></description><pubDate>Sat, 02 Aug 2025 11:31:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44766740</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=44766740</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44766740</guid></item><item><title><![CDATA[New comment by bmc7505 in "Does the Bitter Lesson Have Limits?"]]></title><description><![CDATA[
<p>The difference is that SAT/SMT solvers have primarily relied on single-threaded algorithmic improvements [1] and unlike neural networks, we have not [yet] discovered a uniformly effective strategy for leveraging additional computation to accelerate wall-clock runtime. [2]<p>[1]: <a href="https://arxiv.org/pdf/2008.02215" rel="nofollow">https://arxiv.org/pdf/2008.02215</a><p>[2]: <a href="https://news.ycombinator.com/item?id=36081350">https://news.ycombinator.com/item?id=36081350</a></p>
]]></description><pubDate>Sat, 02 Aug 2025 00:18:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=44763838</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=44763838</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44763838</guid></item><item><title><![CDATA[New comment by bmc7505 in "Type-constrained code generation with language models"]]></title><description><![CDATA[
<p>The correct way to do this is with finite model theory but we're not there yet.</p>
]]></description><pubDate>Wed, 14 May 2025 01:03:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43979622</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=43979622</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43979622</guid></item><item><title><![CDATA[New comment by bmc7505 in "Bored of It"]]></title><description><![CDATA[
<p>Ginsberg stole it from Yeats — “the best lack all conviction…” / “the best minds of my generation…” — many similar verses, e.g., “what rough beast…” / “what sphinx of cement…”<p><a href="https://www.poetryfoundation.org/poems/43290/the-second-coming" rel="nofollow">https://www.poetryfoundation.org/poems/43290/the-second-comi...</a></p>
]]></description><pubDate>Fri, 04 Apr 2025 13:23:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=43581982</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=43581982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43581982</guid></item><item><title><![CDATA[New comment by bmc7505 in "Five Kinds of Nondeterminism"]]></title><description><![CDATA[
<p><a href="https://cstheory.stackexchange.com/questions/632/what-is-the-difference-between-non-determinism-and-randomness" rel="nofollow">https://cstheory.stackexchange.com/questions/632/what-is-the...</a></p>
]]></description><pubDate>Thu, 20 Feb 2025 23:32:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43121939</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=43121939</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43121939</guid></item><item><title><![CDATA[New comment by bmc7505 in "US and UK refuse to sign AI safety declaration at summit"]]></title><description><![CDATA[
<p>Called it three years ago: <a href="https://news.ycombinator.com/item?id=30142353">https://news.ycombinator.com/item?id=30142353</a></p>
]]></description><pubDate>Thu, 13 Feb 2025 07:23:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=43033521</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=43033521</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43033521</guid></item><item><title><![CDATA[Show HN: Tidyparse – Real-time context-free syntax repair]]></title><description><![CDATA[
<p>Article URL: <a href="https://tidyparse.github.io/">https://tidyparse.github.io/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42871477">https://news.ycombinator.com/item?id=42871477</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 29 Jan 2025 21:34:51 +0000</pubDate><link>https://tidyparse.github.io/</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=42871477</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42871477</guid></item><item><title><![CDATA[New comment by bmc7505 in "RE#: High Performance Derivative-Based Regex Matching"]]></title><description><![CDATA[
<p><a href="https://dl.acm.org/doi/10.1145/3704837" rel="nofollow">https://dl.acm.org/doi/10.1145/3704837</a></p>
]]></description><pubDate>Mon, 13 Jan 2025 00:40:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=42678668</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=42678668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42678668</guid></item><item><title><![CDATA[RE#: High Performance Derivative-Based Regex Matching]]></title><description><![CDATA[
<p>Article URL: <a href="https://ieviev.github.io/resharp-webapp/">https://ieviev.github.io/resharp-webapp/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42678667">https://news.ycombinator.com/item?id=42678667</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 13 Jan 2025 00:40:08 +0000</pubDate><link>https://ieviev.github.io/resharp-webapp/</link><dc:creator>bmc7505</dc:creator><comments>https://news.ycombinator.com/item?id=42678667</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42678667</guid></item></channel></rss>