<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: fooblaster</title><link>https://news.ycombinator.com/user?id=fooblaster</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 16 May 2026 09:53:26 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=fooblaster" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by fooblaster in "Grand Theft Oil Futures: Insider traders keep making a killing at our expense"]]></title><description><![CDATA[
<p>Airlines can't raise the prices of tickets sold months ago. There is still financial reason to hedge.</p>
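<p>A toy sketch of the point (all numbers invented): the fare is fixed at sale time, so a later fuel-price move comes straight out of margin unless a futures position opened at sale time offsets it.</p>

```python
# Toy numbers, purely illustrative: a ticket sold months ahead at a
# fixed fare, with the jet fuel bought at flight time. A long futures
# position opened when the ticket was sold offsets the price move.
barrels_per_seat = 2.0   # hypothetical fuel burn attributable to this seat
price_at_sale = 80.0     # $/bbl when the ticket was sold
price_at_flight = 100.0  # $/bbl when the flight departs

extra_fuel_cost = (price_at_flight - price_at_sale) * barrels_per_seat
futures_payoff = (price_at_flight - price_at_sale) * barrels_per_seat

print(extra_fuel_cost)                    # 40.0 eaten by margin if unhedged
print(extra_fuel_cost - futures_payoff)   # 0.0 with the hedge in place
```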
]]></description><pubDate>Thu, 07 May 2026 13:52:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=48049472</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=48049472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48049472</guid></item><item><title><![CDATA[New comment by fooblaster in "Amazon chips no longer just a side dish, they're a $20B biz"]]></title><description><![CDATA[
<p>I think the revenue claims around future Trainium spend are complete bullshit.</p>
]]></description><pubDate>Thu, 30 Apr 2026 01:35:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47957019</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=47957019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47957019</guid></item><item><title><![CDATA[New comment by fooblaster in "Qwen3.6-35B-A3B: Agentic coding power, now open to all"]]></title><description><![CDATA[
<p>Honestly, this is the AI software I actually look forward to seeing. No hype about it being too dangerous to release. No IPO-pumping hype. No subscription fees. I'm so pumped to try this!</p>
]]></description><pubDate>Thu, 16 Apr 2026 14:01:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47793114</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=47793114</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47793114</guid></item><item><title><![CDATA[New comment by fooblaster in "Tesla 'Full Self-Driving' crashed through railroad gate seconds before train"]]></title><description><![CDATA[
<p>Tesla has not pulled the driver. It's just not comparable.</p>
]]></description><pubDate>Wed, 15 Apr 2026 23:28:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47786716</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=47786716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47786716</guid></item><item><title><![CDATA[New comment by fooblaster in "Artemis II crew see first glimpse of far side of Moon [video]"]]></title><description><![CDATA[
<p>It's fine to not be interested, but this time one of the astronauts is Black.</p>
]]></description><pubDate>Sun, 05 Apr 2026 15:52:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47650675</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=47650675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47650675</guid></item><item><title><![CDATA[New comment by fooblaster in "Solar and batteries can power the world"]]></title><description><![CDATA[
<p>Where are you? That is a massive amount of solar anywhere at a reasonably low latitude. Is your house enormous, or are you heating it with resistive heating?</p>
]]></description><pubDate>Fri, 03 Apr 2026 15:21:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47627725</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=47627725</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47627725</guid></item><item><title><![CDATA[New comment by fooblaster in "First 6 days of Iran war cost $11.3B"]]></title><description><![CDATA[
<p>This thing is far from over. Iran will be able to block the strait indefinitely. The US will be stuck in this defensive position for months, until it pulls out and effectively loses the war.<p>It's clear we are going to lose, because we cannot topple the regime without putting troops on the ground, which we will never do. Setting that as a war aim doomed this whole effort from the start.</p>
]]></description><pubDate>Thu, 12 Mar 2026 16:40:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47353546</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=47353546</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47353546</guid></item><item><title><![CDATA[New comment by fooblaster in "MacBook Pro with M5 Pro and M5 Max"]]></title><description><![CDATA[
<p>What inference runtime are you using? You mentioned MLX, but I didn't think anyone was using that for local LLMs.</p>
]]></description><pubDate>Tue, 03 Mar 2026 15:07:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47233541</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=47233541</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47233541</guid></item><item><title><![CDATA[New comment by fooblaster in "15 years of FP64 segmentation, and why the Blackwell Ultra breaks the pattern"]]></title><description><![CDATA[
<p>Again, it wasn't exactly a huge sink of resources. There was no genius gamble from Jensen like you are suggesting. I suspect your view here is intrinsically tied to your need to feel that you, and others in your position, are responsible for your own success, when in fact it's mostly about luck.</p>
]]></description><pubDate>Sat, 28 Feb 2026 07:39:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47191777</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=47191777</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47191777</guid></item><item><title><![CDATA[New comment by fooblaster in "15 years of FP64 segmentation, and why the Blackwell Ultra breaks the pattern"]]></title><description><![CDATA[
<p>CUDA was profitable very early because of oil and gas code, like reverse time migration and the like. There was no act of incredible foresight from Jensen. In fact, I recall him threatening to kill the program if large projects like the Titan supercomputer at Oak Ridge had failed.</p>
]]></description><pubDate>Thu, 19 Feb 2026 20:38:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47078911</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=47078911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47078911</guid></item><item><title><![CDATA[New comment by fooblaster in "15 years of FP64 segmentation, and why the Blackwell Ultra breaks the pattern"]]></title><description><![CDATA[
<p>It was definitely luck, Greg. And Nvidia didn't invent deep learning; deep learning found Nvidia's investment in CUDA.</p>
]]></description><pubDate>Thu, 19 Feb 2026 06:30:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47070576</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=47070576</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47070576</guid></item><item><title><![CDATA[New comment by fooblaster in "Defining Safe Hardware Design [pdf]"]]></title><description><![CDATA[
<p>I was really happy to see that Bluespec was fully open-sourced in recent years. Does anyone have experience with a non-trivial project in it? Does it have any traction anymore in real silicon development?</p>
]]></description><pubDate>Tue, 03 Feb 2026 18:16:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46874813</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=46874813</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46874813</guid></item><item><title><![CDATA[New comment by fooblaster in "Apple picks Gemini to power Siri"]]></title><description><![CDATA[
<p>Calling the Neural Engine the best is pretty silly. The best, perhaps, of what is uniformly a failed class of IP blocks: mobile inference NPU hardware. Edge inference on Apple is dominated by CPUs and Metal, which don't use the NPU.</p>
]]></description><pubDate>Mon, 12 Jan 2026 16:27:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46590665</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=46590665</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46590665</guid></item><item><title><![CDATA[New comment by fooblaster in "Developing a BLAS Library for the AMD AI Engine [pdf]"]]></title><description><![CDATA[
<p>Looks like they have made some progress on a native model in recent months: <a href="https://github.com/amd/IRON/tree/devel" rel="nofollow">https://github.com/amd/IRON/tree/devel</a></p>
]]></description><pubDate>Wed, 07 Jan 2026 04:54:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46522722</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=46522722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46522722</guid></item><item><title><![CDATA[New comment by fooblaster in "CES 2026: Taking the Lids Off AMD's Venice and MI400 SoCs"]]></title><description><![CDATA[
<p>SMs are the Nvidia definition of a processor, and CUDA device properties returns that count, not anything else. If you want a marketing number, use CUDA cores; it doesn't consistently map to anything in the hardware design.</p>
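<p>To make the contrast concrete, here's a sketch of how the marketing figure is usually derived. The 128 FP32 lanes per SM is my assumption; it holds for recent architectures but has changed across generations, which is exactly why the derived number doesn't map to the hardware consistently.</p>

```python
# "CUDA cores" is just SM count times FP32 lanes per SM. The SM count
# is the hardware-backed number cudaGetDeviceProperties reports
# (multiProcessorCount); the lanes-per-SM multiplier is a
# per-architecture constant, assumed to be 128 here.
def cuda_cores(sm_count: int, fp32_lanes_per_sm: int = 128) -> int:
    return sm_count * fp32_lanes_per_sm

print(cuda_cores(148))  # a 148-SM part -> 18944 "CUDA cores"
```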
]]></description><pubDate>Wed, 07 Jan 2026 00:14:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46520823</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=46520823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46520823</guid></item><item><title><![CDATA[New comment by fooblaster in "CES 2026: Taking the Lids Off AMD's Venice and MI400 SoCs"]]></title><description><![CDATA[
<p>B200 is 148 SMs, so no.</p>
]]></description><pubDate>Tue, 06 Jan 2026 23:56:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46520666</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=46520666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46520666</guid></item><item><title><![CDATA[New comment by fooblaster in "TinyTinyTPU: 2×2 systolic-array TPU-style matrix-multiply unit deployed on FPGA"]]></title><description><![CDATA[
<p>The Versal stuff isn't really an FPGA anymore. Some of the chips have PL on them, but many don't. The consumer NPUs from AMD are the same Versal AIE cores with no PL. They just aren't configurable blocks in fabric anymore and don't have the same programming model. So I'm not contradicting myself here.<p>That being said, Versal AIE for ML has been a terrible failure. The reasons why are complicated. One is that the SRAM in the memory hierarchy is not a unified pool: it's partitioned into tiles and can't be accessed by all cores. Additionally, access to this SRAM is only via DMA engines, not directly from the cores. Thirdly, the datapaths feeding the VLIW cores are statically set, and changing them at runtime requires a software reconfiguration, which is slow. Programming this thing makes the Cell processor look like a cakewalk: you have to program the DMA engines, program hundreds of VLIW cores, and explicitly set up the on-chip network fabric. I could go on.<p>Anyway, my point is FPGAs aren't getting ML slices. Some FPGAs do have a completely separate thing that can do ML, but what is shipped is terrible. Hopefully that makes sense.</p>
]]></description><pubDate>Tue, 06 Jan 2026 19:46:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46517578</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=46517578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46517578</guid></item><item><title><![CDATA[New comment by fooblaster in "Why is the Gmail app 700 MB?"]]></title><description><![CDATA[
<p>10 MB mail app. 690 MB local llm to write snarky emails for you.</p>
]]></description><pubDate>Tue, 06 Jan 2026 17:04:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46514998</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=46514998</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46514998</guid></item><item><title><![CDATA[New comment by fooblaster in "Corundum – open-source FPGA-based NIC and platform for in-network compute"]]></title><description><![CDATA[
<p>Wow, say more...</p>
]]></description><pubDate>Sun, 04 Jan 2026 17:47:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46490296</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=46490296</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46490296</guid></item><item><title><![CDATA[New comment by fooblaster in "TinyTinyTPU: 2×2 systolic-array TPU-style matrix-multiply unit deployed on FPGA"]]></title><description><![CDATA[
<p>I'd like to know more. I expect these systems are 8xvh1782. Is that true? What's the theoretical math throughput? My expectation is that it isn't very high per chip. How is performance in the prefill stage, when inference is actually math-limited?</p>
]]></description><pubDate>Sun, 04 Jan 2026 15:51:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46489064</link><dc:creator>fooblaster</dc:creator><comments>https://news.ycombinator.com/item?id=46489064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46489064</guid></item></channel></rss>