<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: cpldcpu</title><link>https://news.ycombinator.com/user?id=cpldcpu</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 02:04:26 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=cpldcpu" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by cpldcpu in "Show HN: I built a tiny LLM to demystify how language models work"]]></title><description><![CDATA[
<p>Love it! Great idea for the dataset.</p>
]]></description><pubDate>Mon, 06 Apr 2026 08:04:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47658155</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=47658155</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47658155</guid></item><item><title><![CDATA[New comment by cpldcpu in "Building an FPGA 3dfx Voodoo with Modern RTL Tools"]]></title><description><![CDATA[
<p>+1</p>
]]></description><pubDate>Sun, 22 Mar 2026 15:24:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47478486</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=47478486</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47478486</guid></item><item><title><![CDATA[Towards Self-Replication: Claude Opus Designs Hardware to Run Itself]]></title><description><![CDATA[
<p>Article URL: <a href="https://cpldcpu.github.io/smollm.c/">https://cpldcpu.github.io/smollm.c/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47272174">https://news.ycombinator.com/item?id=47272174</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 06 Mar 2026 07:47:54 +0000</pubDate><link>https://cpldcpu.github.io/smollm.c/</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=47272174</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47272174</guid></item><item><title><![CDATA[New comment by cpldcpu in "How Taalas “prints” LLM onto a chip?"]]></title><description><![CDATA[
<p>They mentioned that they use strong quantization (IIRC 3-bit) and that the model was degraded by it. Also, they don't have to use transistors to store the bits.</p>
]]></description><pubDate>Sun, 22 Feb 2026 16:37:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47112417</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=47112417</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47112417</guid></item><item><title><![CDATA[New comment by cpldcpu in "How Taalas “prints” LLM onto a chip?"]]></title><description><![CDATA[
<p>I wonder how well this works with MoE architectures?<p>For dense LLMs, like Llama-3.1-8B, you profit a lot from having all the weights available close to the actual multiply-accumulate hardware.<p>With MoE, it is rather like a memory lookup. Instead of a 1:1 pairing of MACs to stored weights, you are suddenly forced to have a large memory block next to a small MAC block. And once this mismatch becomes large enough, there is a huge gain from using a highly optimized memory process for the memory instead of mask ROM.<p>At that point we are back to a chiplet approach...</p>
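To put rough numbers on the mismatch, here is a back-of-the-envelope sketch (the MoE figures are approximate public numbers for Mixtral-8x7B, used only as an illustration, not anything from the article):

```python
def weights_per_active_mac(total_params: float, active_params: float) -> float:
    """Ratio of weights that must sit in on-chip storage to weights that
    actually feed the multiply-accumulate units for any one token. For a
    dense model the ratio is 1; for a MoE it grows with the number of
    inactive experts -- the memory-vs-MAC mismatch described above."""
    return total_params / active_params

# Dense Llama-3.1-8B: every stored weight is used for every token.
dense_ratio = weights_per_active_mac(8e9, 8e9)    # 1.0

# Mixtral-8x7B: roughly 47B total parameters, ~13B active per token.
moe_ratio = weights_per_active_mac(47e9, 13e9)    # ~3.6
```

Once that ratio grows large, most of the die is idle storage, which is where a dedicated memory process starts to win over mask ROM.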
]]></description><pubDate>Sun, 22 Feb 2026 07:42:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47109118</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=47109118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47109118</guid></item><item><title><![CDATA[New comment by cpldcpu in "How Taalas “prints” LLM onto a chip?"]]></title><description><![CDATA[
<p>It could simply be bit-serial. With 4-bit weights you only need four serial addition steps, which is not an issue if the weights are stored nearby in a ROM.</p>
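A minimal sketch of what bit-serial evaluation means here (illustrative only; nothing is known about the actual datapath):

```python
def bitserial_mac(acts: list[int], weights: list[int], bits: int = 4) -> int:
    """Multiply-accumulate where weights are consumed one bit-plane at a
    time: for bit k, sum every activation whose weight has bit k set,
    then shift that partial sum left by k. With 4-bit weights this is
    four serial addition passes and no full multipliers."""
    acc = 0
    for k in range(bits):
        partial = sum(a for a, w in zip(acts, weights) if (w >> k) & 1)
        acc += partial << k
    return acc

# Matches the ordinary dot product: 3*1 + 5*7 + 2*4 = 46
print(bitserial_mac([3, 5, 2], [1, 7, 4]))  # -> 46
```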
]]></description><pubDate>Sun, 22 Feb 2026 07:37:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47109093</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=47109093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47109093</guid></item><item><title><![CDATA[Glowing Polyhedrons – LED filament 3D objects using graph theory]]></title><description><![CDATA[
<p>Article URL: <a href="https://cpldcpu.github.io/2026/01/24/glowing-polyhedrons/">https://cpldcpu.github.io/2026/01/24/glowing-polyhedrons/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46751967">https://news.ycombinator.com/item?id=46751967</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 25 Jan 2026 08:34:19 +0000</pubDate><link>https://cpldcpu.github.io/2026/01/24/glowing-polyhedrons/</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=46751967</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46751967</guid></item><item><title><![CDATA[New comment by cpldcpu in "Reproducing DeepSeek's MHC: When Residual Connections Explode"]]></title><description><![CDATA[
<p>Thanks! It would be quite interesting to see how this fares compared to mHC.<p>I noted that LAuReL is cited in the mHC paper, but they refer to it as "expanding the width of the residual stream", which is rather odd.</p>
]]></description><pubDate>Mon, 12 Jan 2026 17:46:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46591770</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=46591770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46591770</guid></item><item><title><![CDATA[New comment by cpldcpu in "Reproducing DeepSeek's MHC: When Residual Connections Explode"]]></title><description><![CDATA[
<p>It may be worth pointing out that this is not the first residual-connection innovation to reach production.<p>Gemma 3n also uses a low-rank projection of the residual stream called LAuReL. Google did not publicize this much; I noticed it when poking around in the model file.<p><a href="https://arxiv.org/pdf/2411.07501v3" rel="nofollow">https://arxiv.org/pdf/2411.07501v3</a><p><a href="https://old.reddit.com/r/LocalLLaMA/comments/1kuy45r/gemma_3n_architectural_innovations_speculation/" rel="nofollow">https://old.reddit.com/r/LocalLLaMA/comments/1kuy45r/gemma_3...</a><p>It seems to be what they call LAuReL-LR in the paper, with D=2048 and R=64.</p>
]]></description><pubDate>Mon, 12 Jan 2026 15:49:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46590030</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=46590030</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46590030</guid></item><item><title><![CDATA[New comment by cpldcpu in "China DRAM Maker CXMT Targets $4.2B IPO as It Takes on Samsung, SK Hynix, Micron"]]></title><description><![CDATA[
<p>All of them use ASML lithography, including CXMT.<p>They are, of course, a bit slower in EUV adoption. But it's already there:<p><a href="https://www.tomshardware.com/pc-components/dram/micron-samples-ground-breaking-euv-based-memory-new-dram-process-slashes-power-consumption-by-20-percent-and-boosts-performance-by-15-percent" rel="nofollow">https://www.tomshardware.com/pc-components/dram/micron-sampl...</a><p><a href="https://www.techinsights.com/blog/samsung-d1z-lpddr5-dram-euv-lithography-euvl-memory-techstream-blog" rel="nofollow">https://www.techinsights.com/blog/samsung-d1z-lpddr5-dram-eu...</a></p>
]]></description><pubDate>Sat, 03 Jan 2026 23:54:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46483143</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=46483143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46483143</guid></item><item><title><![CDATA[New comment by cpldcpu in "Who invented the transistor?"]]></title><description><![CDATA[
<p>This lists many transistor patents from oldest to newest:<p><a href="https://patents.google.com/?q=(H03F3%2f16)&sort=old" rel="nofollow">https://patents.google.com/?q=(H03F3%2f16)&sort=old</a><p>The Matare/Welker patent is missing, though:<p><a href="https://patents.google.com/patent/US2673948A/en.541" rel="nofollow">https://patents.google.com/patent/US2673948A/en.541</a><p>The entire debate is tiring. It would be better if these reviews put the actual device physics of the different concepts into context.<p>Is there any report of a reproduction of the device proposed by Lilienfeld in his patents? If he managed to make functional devices back then, it should be possible to do so today. (Note: Cu2S is not a very well controllable semiconductor...)<p>Edit:<p>Gemini Deep Research summary here; it's quite informative: <a href="https://docs.google.com/document/d/1jE0wQVeWP9Eiybh_C6zMKeZ5An9E5uLYQSdy2VnqLxE/" rel="nofollow">https://docs.google.com/document/d/1jE0wQVeWP9Eiybh_C6zMKeZ5...</a><p>Also, specifically on Cu-based TFTs:
<a href="https://docs.google.com/document/d/1_B2x2gBPKgGFVgJyQ0qzPdI4BX-4c3x6BFi_uRblG50/" rel="nofollow">https://docs.google.com/document/d/1_B2x2gBPKgGFVgJyQ0qzPdI4...</a><p>From the second document:
"The primary obstacle for $Cu_2S$ TFTs is degeneracy. Spontaneous copper vacancies form with negligible energy cost in the sulfur lattice. As a result, stoichiometric $Cu_2S$ is thermodynamically unstable in air, rapidly oxidizing or losing copper to form substoichiometric phases ($Cu_{2-x}S$) with hole concentrations exceeding $10^{20}-10^{21} \text{ cm}^{-3}$."<p>This explains why there are zero reproductions of Lilienfeld's devices. It should be noted that Lilienfeld is one of the inventors of the electrolytic capacitor and therefore knew very well how to create the extremely thin insulating layers needed for TFTs. It is not implausible that he could have used other semiconductors (e.g. CdS) with his concept. However, the patents seem to specifically mention Cu2S, which does not yield functional TFTs.</p>
]]></description><pubDate>Wed, 31 Dec 2025 22:18:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46448943</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=46448943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46448943</guid></item><item><title><![CDATA[New comment by cpldcpu in "Show HN: Zero-power photonic language model–code"]]></title><description><![CDATA[
<p>"Zero power" does not include the power needed to translate information between the electronic and optical domains, nor the light source itself.</p>
]]></description><pubDate>Sat, 29 Nov 2025 23:00:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46091620</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=46091620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46091620</guid></item><item><title><![CDATA[New comment by cpldcpu in "I know we're in an AI bubble because nobody wants me"]]></title><description><![CDATA[
<p>What also cannot be ignored is that transformer models are a great unifying force: it's basically one architecture that can be used for many purposes.<p>This eliminates the need for more specialized models and the associated engineering and optimization of their infrastructure.</p>
]]></description><pubDate>Sat, 29 Nov 2025 12:03:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46086910</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=46086910</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46086910</guid></item><item><title><![CDATA[New comment by cpldcpu in "Google CEO Pushes 'Vibe Coding' – But Real Developers Know It's Not Magic"]]></title><description><![CDATA[
<p>I am not a professional software developer but more of a multi-domain system architect, and I have to say it is absolutely magical!<p>The public discourse about LLM-assisted coding is often driven by front-end developers or non-professionals trying to build web apps, but the value it brings to prototyping system concepts across hardware/software domains can hardly be overstated.<p>Instead of trying to find suitable simulation environments and trying to couple them, I can simply whip up a GUI-based tool to play around with whatever signal chain/optimization problem/control I want to investigate. Usually I would have to find/hire people to do this, but using LLMs I can iterate on ideas at a crazy cadence.<p>Later, implementation does of course require proper engineering.<p>That said, it is often confusing how differently models are hyped. As mentioned, there is an overt focus on front-end design etc. For the work I am doing, I found Claude 4.5 (both models) to be absolutely unchallenged. Gemini 3 Pro is also getting there, but its long-term agentic capability still needs to catch up. GPT-5.1/Codex is excellent for brainstorming in the UX, but I found it too unresponsive and opaque as a code assistant. It does not even matter if it can solve bugs other LLMs cannot find, because you should not put yourself in a situation where you don't understand the system you are building.</p>
]]></description><pubDate>Sat, 29 Nov 2025 08:45:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46086049</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=46086049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46086049</guid></item><item><title><![CDATA[New comment by cpldcpu in "Grok 4.1"]]></title><description><![CDATA[
<p>Not a big fan of emojis becoming the norm in LLM output.<p>It seems Grok 4.1 uses more emojis than 4.<p>GPT-5.1 Thinking is now also using emojis, even in math reasoning. 5 didn't do that.</p>
]]></description><pubDate>Mon, 17 Nov 2025 22:29:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45959139</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=45959139</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45959139</guid></item><item><title><![CDATA[MODPlayRISCV – Playing tracker Music on ultra-low-end RISC-V MCUs]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/cpldcpu/ModPlayRISCV">https://github.com/cpldcpu/ModPlayRISCV</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45758896">https://news.ycombinator.com/item?id=45758896</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 30 Oct 2025 11:36:46 +0000</pubDate><link>https://github.com/cpldcpu/ModPlayRISCV</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=45758896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45758896</guid></item><item><title><![CDATA[New comment by cpldcpu in "Qualcomm to acquire Arduino"]]></title><description><![CDATA[
<p>At this point in time, the shield headers look more like a trademark than a useful connector.</p>
]]></description><pubDate>Tue, 07 Oct 2025 16:43:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45505453</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=45505453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45505453</guid></item><item><title><![CDATA[New comment by cpldcpu in "Language models pack billions of concepts into 12k dimensions"]]></title><description><![CDATA[
<p>The dimensions should actually be closer to 12000 * (no. of tokens * no. of layers / x)<p>(where x is a number dependent on architectural features like MLA, GQA...)<p>There is this thing called the KV cache, which holds an enormous latent state.</p>
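A rough sketch of that latent-state estimate, assuming a Llama-3.1-8B-style configuration (4096 hidden dimension, 32 layers, 32 query heads, 8 KV heads via GQA; the config values are illustrative assumptions):

```python
def kv_cache_elements(d_model: int, n_layers: int, n_tokens: int,
                      n_heads: int, n_kv_heads: int) -> int:
    """Element count of the KV cache: keys and values (factor 2) per
    layer and per token, with the width shrunk by the GQA ratio
    n_kv_heads / n_heads."""
    per_token_per_layer = 2 * d_model * n_kv_heads // n_heads
    return per_token_per_layer * n_layers * n_tokens

# ~64k cached values per token; at a 10k-token context the latent state
# is ~655M numbers, dwarfing a single 4096-dim residual vector.
print(kv_cache_elements(4096, 32, 10_000, 32, 8))
```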
]]></description><pubDate>Mon, 15 Sep 2025 13:11:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45249253</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=45249253</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45249253</guid></item><item><title><![CDATA[New comment by cpldcpu in "SpikingBrain 7B – More efficient than classic LLMs"]]></title><description><![CDATA[
<p>These interfaces use serialized binary encoding.<p>SNNs are more similar to pulse density modulation (PDM), if you are looking for an electronic equivalent.</p>
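For intuition, a toy first-order accumulator that produces PDM (my own sketch, not anything specific to SpikingBrain): the density of output pulses tracks the input amplitude, much as a spike rate tracks an activation.

```python
def pdm_encode(samples: list[float]) -> list[int]:
    """First-order sigma-delta style PDM for inputs in [0, 1]: an
    accumulator integrates the input and emits a 1-pulse each time it
    crosses 1.0, so the long-run pulse density equals the input level."""
    acc, out = 0.0, []
    for x in samples:
        acc += x
        if acc >= 1.0:
            out.append(1)
            acc -= 1.0
        else:
            out.append(0)
    return out

# A constant input of 0.25 yields pulses on 1 out of every 4 steps.
print(pdm_encode([0.25] * 8))
```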
]]></description><pubDate>Sun, 14 Sep 2025 12:55:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=45239458</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=45239458</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45239458</guid></item><item><title><![CDATA[New comment by cpldcpu in "SpikingBrain 7B – More efficient than classic LLMs"]]></title><description><![CDATA[
<p>I believe the argument is that you can also encode information in the time domain.<p>If we just look at spikes as a different numerical representation, then they are clearly inferior. For example, consider that encoding the number 7 will require seven consecutive pulses on a single spiking line. Encoding the number in binary will require one pulse on three parallel lines.<p>Binary encoding wins 7x in speed and 7/3=2.333x in power efficiency...<p>On the other hand, if we assume that we are able to encode information in the gaps between pulses, then things quickly change.</p>
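The comparison above as a toy cost model (pure spike-count coding versus parallel binary, deliberately ignoring any time-domain encoding):

```python
def rate_code_cost(n: int) -> dict:
    """Rate (spike-count) coding: the value n is n consecutive pulses on
    a single line, so latency and pulse energy both scale with n."""
    return {"lines": 1, "steps": n, "pulses": n}

def binary_code_cost(n: int) -> dict:
    """Binary coding: one time step on bit_length(n) parallel lines;
    only the set bits cost a pulse."""
    bits = max(1, n.bit_length())
    return {"lines": bits, "steps": 1, "pulses": bin(n).count("1")}

rate, binary = rate_code_cost(7), binary_code_cost(7)
print(rate["steps"] / binary["steps"])    # 7x speed advantage for binary
print(rate["pulses"] / binary["pulses"])  # 7/3 = 2.333x pulse-energy advantage
```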
]]></description><pubDate>Sun, 14 Sep 2025 11:10:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=45238965</link><dc:creator>cpldcpu</dc:creator><comments>https://news.ycombinator.com/item?id=45238965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45238965</guid></item></channel></rss>