<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: md_rumpf</title><link>https://news.ycombinator.com/user?id=md_rumpf</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 22 Apr 2026 18:44:06 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=md_rumpf" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by md_rumpf in "In Defense of Text Labels"]]></title><description><![CDATA[
<p>My guess is that many of these studies don't replicate as well in Europe as in the US.<p>Road signs in the US are predominantly text: PED XING. ONE WAY. DO NOT PASS. The European equivalents are mostly pictograms. Europe has to do it this way: the countries are small enough that you'd routinely have German speakers driving in France and vice versa, so French-only text signs would be criminal. Similarly, a French speaker can navigate a German train station without reading any German.<p>Europeans are therefore much more attuned to deriving meaning from pictograms than their American counterparts.</p>
]]></description><pubDate>Sun, 23 Feb 2025 12:02:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=43148688</link><dc:creator>md_rumpf</dc:creator><comments>https://news.ycombinator.com/item?id=43148688</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43148688</guid></item><item><title><![CDATA[New comment by md_rumpf in "5G networks meet consumer needs as mobile data growth slows"]]></title><description><![CDATA[
<p>Are there really use cases for faster chips? I can already run every model I want on an H100 pod. No model exists that I couldn't run on a cluster of 64 H100s. NVIDIA should just stop.</p>
]]></description><pubDate>Wed, 12 Feb 2025 17:56:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43027824</link><dc:creator>md_rumpf</dc:creator><comments>https://news.ycombinator.com/item?id=43027824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43027824</guid></item><item><title><![CDATA[New comment by md_rumpf in "Robots might be 1000x harder than superintelligence"]]></title><description><![CDATA[
<p>I think it goes hand in hand.<p>Knowledge work is made up (literally): it consists entirely of tasks humans created for themselves. We invented the maths, the programming languages, the spreadsheet software, the government forms – everything.<p>The real world is very much not made up (especially if you're nonreligious). Most of its tasks are sampled from a distribution we did not invent, which means they might be much, much harder.<p>I think supersonic airliners might even be thinking too small. If we had scaled travel speed the way we scaled transistor counts, then, taking an Intel 4004 as walking pace, an M2 Max would be moving at roughly 0.28c (c = speed of light)!</p>
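The transistor-scaling analogy above can be sanity-checked with a back-of-the-envelope calculation. The transistor counts and walking speed below are my own approximate assumptions, not figures from the comment:

```python
# Rough sanity check of "if speed scaled like transistor counts".
# Assumed figures (approximate, not from the comment):
TRANSISTORS_4004 = 2_300              # Intel 4004 (1971)
TRANSISTORS_M2_MAX = 67_000_000_000   # Apple M2 Max (2023)
WALKING_SPEED_M_S = 1.4               # typical walking pace, in m/s
SPEED_OF_LIGHT_M_S = 299_792_458.0

# Scale walking speed by the same factor transistor counts grew.
scale = TRANSISTORS_M2_MAX / TRANSISTORS_4004
scaled_speed = WALKING_SPEED_M_S * scale
fraction_of_c = scaled_speed / SPEED_OF_LIGHT_M_S

print(f"growth factor: {scale:.2e}")
print(f"scaled walking speed: {fraction_of_c:.2f}c")
```

With these assumptions the result lands around 0.14c; a brisker pace of roughly 2.9 m/s would reproduce the comment's 0.28c figure. Either way, it is the same order of magnitude, which is all the analogy needs.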
]]></description><pubDate>Sat, 18 Jan 2025 01:47:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=42745048</link><dc:creator>md_rumpf</dc:creator><comments>https://news.ycombinator.com/item?id=42745048</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42745048</guid></item><item><title><![CDATA[Robots might be 1000x harder than superintelligence]]></title><description><![CDATA[
<p>Article URL: <a href="https://maxrumpf.com/writing/2024-10-20-robots-might-be-hard.html">https://maxrumpf.com/writing/2024-10-20-robots-might-be-hard.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42744881">https://news.ycombinator.com/item?id=42744881</a></p>
<p>Points: 6</p>
<p># Comments: 3</p>
]]></description><pubDate>Sat, 18 Jan 2025 01:12:51 +0000</pubDate><link>https://maxrumpf.com/writing/2024-10-20-robots-might-be-hard.html</link><dc:creator>md_rumpf</dc:creator><comments>https://news.ycombinator.com/item?id=42744881</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42744881</guid></item><item><title><![CDATA[New comment by md_rumpf in "Program Synthesis and Large Language Models"]]></title><description><![CDATA[
<p>Well... how do humans synthesize programs, then? We don't do an exponential search over symbols. We "somehow know" which branches get us closer to a working program and which don't. It's foolish to believe it is <i>impossible</i> to teach an AI to "somehow know" this, too.
Also: for some reason the bar is always higher for AI. Next to no human code is verifiably correct, and the code that has actually been verified is an even smaller subset.</p>
]]></description><pubDate>Mon, 16 Dec 2024 05:35:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=42428234</link><dc:creator>md_rumpf</dc:creator><comments>https://news.ycombinator.com/item?id=42428234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42428234</guid></item><item><title><![CDATA[New comment by md_rumpf in "DOJ will push Google to sell off Chrome"]]></title><description><![CDATA[
<p>Perplexity's valuation just doubled!</p>
]]></description><pubDate>Tue, 19 Nov 2024 07:48:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=42180915</link><dc:creator>md_rumpf</dc:creator><comments>https://news.ycombinator.com/item?id=42180915</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42180915</guid></item><item><title><![CDATA[Writing Code Just-in-Time]]></title><description><![CDATA[
<p>Article URL: <a href="https://maxrumpf.com/writing/2024-08-21-jit-coding.html">https://maxrumpf.com/writing/2024-08-21-jit-coding.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42012999">https://news.ycombinator.com/item?id=42012999</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 01 Nov 2024 00:44:17 +0000</pubDate><link>https://maxrumpf.com/writing/2024-08-21-jit-coding.html</link><dc:creator>md_rumpf</dc:creator><comments>https://news.ycombinator.com/item?id=42012999</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42012999</guid></item><item><title><![CDATA[New comment by md_rumpf in "Addition Is All You Need for Energy-Efficient Language Models"]]></title><description><![CDATA[
<p>The return of the CPU?!</p>
]]></description><pubDate>Wed, 09 Oct 2024 05:53:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=41784898</link><dc:creator>md_rumpf</dc:creator><comments>https://news.ycombinator.com/item?id=41784898</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41784898</guid></item><item><title><![CDATA[New comment by md_rumpf in "Chameleon: Meta’s New Multi-Modal LLM"]]></title><description><![CDATA[
<p>Every third sentence is "the model was not trained on data from Meta's products."</p>
]]></description><pubDate>Tue, 21 May 2024 05:13:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=40424401</link><dc:creator>md_rumpf</dc:creator><comments>https://news.ycombinator.com/item?id=40424401</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40424401</guid></item><item><title><![CDATA[New comment by md_rumpf in "Chameleon: Meta’s New Multi-Modal LLM"]]></title><description><![CDATA[
<p>the modality competition was one of my favorite insights, too!</p>
]]></description><pubDate>Tue, 21 May 2024 05:12:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=40424396</link><dc:creator>md_rumpf</dc:creator><comments>https://news.ycombinator.com/item?id=40424396</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40424396</guid></item></channel></rss>