<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: easygenes</title><link>https://news.ycombinator.com/user?id=easygenes</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 19:38:03 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=easygenes" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by easygenes in "April 2026 TLDR Setup for Ollama and Gemma 4 26B on a Mac mini"]]></title><description><![CDATA[
<p>LM Studio has been around longer; I’ve been using it for about three years, and I’d agree it is generally the better beginner choice, both then and now.<p>Unsloth Studio is more featureful (well-integrated tool calling, web search, and code execution are the headline features), and it comes from the people consistently making some of the best GGUF quants of all the popular models. It is also well documented, easy to set up, and has good fine-tuning support.</p>
]]></description><pubDate>Fri, 03 Apr 2026 11:14:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47625340</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47625340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47625340</guid></item><item><title><![CDATA[New comment by easygenes in "April 2026 TLDR Setup for Ollama and Gemma 4 26B on a Mac mini"]]></title><description><![CDATA[
<p>Why is ollama so many people’s go-to? Genuinely curious - I’ve tried it, but it feels overly stripped down / dumbed down compared with nearly everything else I’ve used.<p>Lately I’ve been playing with Unsloth Studio, and I think that’s probably a much better “give it to a beginner” default.</p>
]]></description><pubDate>Fri, 03 Apr 2026 10:40:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47625115</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47625115</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47625115</guid></item><item><title><![CDATA[New comment by easygenes in "ESP32-S31: Dual-Core RISC-V SoC with Wi-Fi 6, Bluetooth 5.4, and Advanced HMI"]]></title><description><![CDATA[
<p>A full-module add-on in this power class is about $7 at 1,000-unit scale [0]. With your own custom PCB design it would add around $3 to the BoM at scale. That’s power only; add another dollar or two for a 10/100 PHY.<p>The trick, as others have said, is how adding it to your design complicates compliance.<p>[0] <a href="https://www.digikey.com/en/products/detail/silvertel/AG9705-2BR/21187225" rel="nofollow">https://www.digikey.com/en/products/detail/silvertel/AG9705-...</a></p>
]]></description><pubDate>Fri, 03 Apr 2026 09:28:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47624706</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47624706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47624706</guid></item><item><title><![CDATA[Show HN: Sixteen year trends in AI doom on HN]]></title><description><![CDATA[
<p>As an April Fools' project, I've made this.<p>I (ironically, depending on your penchant) used an AI judge to score every top HN post and its top comments all the way back to 2010 (over 300,000 individual judgments!), and built a nice interface with the historic charts.<p>The scoring is based on how much disgruntled AI pessimism the post and its top comments show. You can use that score to either show only (in Doom Only mode) or hide (in calm mode) all the doom static.<p>This is part joke and partly to answer my own curiosity about how real my feeling is that comments and posts have grown increasingly, overbearingly negative toward AI and focused on the worst of what results from it over the past year.<p>The trend is stark. My intuition was reinforced by the analysis: disgruntled pessimism about AI is at an all-time high in the last few weeks; it has been doubling annually since ChatGPT's release and is on track to continue.<p>The historic charts are at the bottom of the page.</p>
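<p>For the curious, here is a minimal sketch of the scoring-and-filtering idea. The helper names, prompt, and judge model below are hypothetical placeholders, not the site's actual pipeline:</p>
<pre><code># Minimal sketch of the doom-scoring idea. Helper names, the prompt, and the
# judge model are hypothetical placeholders, not the site's actual pipeline.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = (
    "Rate how much disgruntled AI pessimism this Hacker News post and its "
    "top comments show, from 0 (calm) to 10 (pure doom). Reply with only a number."
)

def score_doom(title: str, comments: list[str]) -> float:
    """Ask an LLM judge for a 0-10 doom score for one post."""
    text = title + "\n\n" + "\n---\n".join(comments[:5])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return float(resp.choices[0].message.content.strip())

def filter_posts(posts: list[dict], mode: str = "calm", threshold: float = 5.0) -> list[dict]:
    """Doom Only mode keeps posts at or above the threshold; calm mode hides them."""
    keep_doom = (mode == "doom_only")
    return [p for p in posts if (p["doom_score"] >= threshold) == keep_doom]
</code></pre>
<p>Since the roughly 300,000 judgments were run up front, the interface only needs to threshold a precomputed score per post when switching between Doom Only and calm mode.</p>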
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47608596">https://news.ycombinator.com/item?id=47608596</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 02 Apr 2026 00:38:21 +0000</pubDate><link>https://hn.ai-doom.cc/</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47608596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47608596</guid></item><item><title><![CDATA[New comment by easygenes in "Show HN: HN Remixed to only show AI Doom (or not)"]]></title><description><![CDATA[
<p>The historic charts are at the bottom of the page, btw.</p>
]]></description><pubDate>Wed, 01 Apr 2026 14:32:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47601492</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47601492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47601492</guid></item><item><title><![CDATA[Show HN: HN Remixed to only show AI Doom (or not)]]></title><description><![CDATA[
<p>As an "April Fool's" project, I've made this.<p>I (ironically, depending your penchant) used an AI judge to score every top HN post and its top comments all the way back to 2010 (over 300,000 individual judgments!), and created a nice interface with the historic charts.<p>The scoring is off of how needlessly "AI Doom" or AI curmudgeony the post and top comments are. You can use that score to either only show (in Doom Only mode) or hide (in calm mode) all the doom static.<p>This is part joke and partly to answer my own curiosity about how real my feeling that the comments and posts have gotten acceleratingly more overbearingly negative towards and focused on the worst of what results from AI in the past year.<p>I won't spoil the conclusion, you can have a look and judge for yourself now.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47601150">https://news.ycombinator.com/item?id=47601150</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 01 Apr 2026 14:11:13 +0000</pubDate><link>https://hn.ai-doom.cc/</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47601150</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47601150</guid></item><item><title><![CDATA[New comment by easygenes in "What came after the 486?"]]></title><description><![CDATA[
<p>I never felt during this era that information about these chips was hard to come by, as the author claims. In retrospect, I appreciate that this is because I grew up next to a large, well-funded library in a tech-centric town, so they always had all the latest tech publications.</p>
]]></description><pubDate>Thu, 26 Mar 2026 15:43:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47531904</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47531904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47531904</guid></item><item><title><![CDATA[New comment by easygenes in "GPT from GPT: de novo microgpt"]]></title><description><![CDATA[
<p>I started this project after watching Andrej Karpathy's recent interview on No Priors, where he explained that he had to hand-write microgpt - a 200-line GPT implementation in Python that distills the essence of all the algorithms behind creating Transformers - because the LLMs he asked weren't able to do it.<p>I wanted to test whether this is still true: whether a "microgpt" in that spirit could be brought into existence with minimal manual intervention, just a clear expression of intent to an LLM. This is an experiment not just in producing a tiny GPT artifact, but in seeing how close you can get to the essence of microgpt through careful prompting alone, without writing a single line yourself.</p>
]]></description><pubDate>Tue, 24 Mar 2026 05:13:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47498834</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47498834</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47498834</guid></item><item><title><![CDATA[GPT from GPT: de novo microgpt]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/Entrpi/microgpt-denovo">https://github.com/Entrpi/microgpt-denovo</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47498833">https://news.ycombinator.com/item?id=47498833</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 24 Mar 2026 05:13:56 +0000</pubDate><link>https://github.com/Entrpi/microgpt-denovo</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47498833</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47498833</guid></item><item><title><![CDATA[New comment by easygenes in "Show HN: Oku – One tab to filter out noise from feeds and content sources"]]></title><description><![CDATA[
<p>The year is 2006, and Netvibes is hosting a huge party in San Francisco after raising a round in the Web 2.0 craze. They have yet to find out that they will become a footnote in history, to be rediscovered in 20 years’ time.</p>
]]></description><pubDate>Sun, 22 Mar 2026 12:14:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47476713</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47476713</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47476713</guid></item><item><title><![CDATA[New comment by easygenes in "Quillx is an open standard for disclosing AI involvement in software projects"]]></title><description><![CDATA[
<p>This is very similar to a project I created, <a href="https://github.com/Entrpi/autonomy-golf" rel="nofollow">https://github.com/Entrpi/autonomy-golf</a>, which I have been using as a gamified development process on active projects.<p>The key insight was not to just handwave or guess at how much is automated, but to make evaluation and review part of the continuous development loop. I first implemented it in <a href="https://github.com/Entrpi/autoresearch-everywhere" rel="nofollow">https://github.com/Entrpi/autoresearch-everywhere</a>, where I used it to deliberately automate more, in the spirit of Karpathy's upstream work - and to very good effect: I have some of the best autoresearch results anywhere, and the platform is far more robust than when it started.</p>
]]></description><pubDate>Mon, 16 Mar 2026 04:57:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47395351</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47395351</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47395351</guid></item><item><title><![CDATA[New comment by easygenes in "“This is not the computer for you”"]]></title><description><![CDATA[
<p>I liked this not because it's a good story. It is, but that's beside the point. I liked this because it's my story. Not literally so, but the shape of it is. He's struck a nerve at the heart of growing up eager and curious and seeing a computer as a pathway to your dreams.</p>
]]></description><pubDate>Fri, 13 Mar 2026 04:58:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47360840</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47360840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47360840</guid></item><item><title><![CDATA[New comment by easygenes in "AutoKernel: Autoresearch for GPU Kernels"]]></title><description><![CDATA[
<p>Cool! I’ve been working on adding the same thing for Apple Silicon within my general “make autoresearch a serious tool” project here: <a href="https://github.com/Entrpi/autoresearch-everywhere" rel="nofollow">https://github.com/Entrpi/autoresearch-everywhere</a></p>
]]></description><pubDate>Wed, 11 Mar 2026 12:52:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47334924</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47334924</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47334924</guid></item><item><title><![CDATA[New comment by easygenes in "NanoGPT Slowrun: Language Modeling with Limited Data, Infinite Compute"]]></title><description><![CDATA[
<p>This is very much in line with what I found fascinating about optimizing microgpt for speed (0). Or rather, what I was able to do with it after doing so. It's so small and so fast to train that you can really dig deep into the optimization landscape. I've spent all my free time this past week digging into it.<p>0: <a href="https://entrpi.github.io/eemicrogpt/" rel="nofollow">https://entrpi.github.io/eemicrogpt/</a>
(The writeup is from a few days ago, and I'm still running experiments before I do a big rewrite. Slowrun is good food for thought.)</p>
]]></description><pubDate>Thu, 05 Mar 2026 20:56:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47267158</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47267158</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47267158</guid></item><item><title><![CDATA[New comment by easygenes in "MacBook Pro with M5 Pro and M5 Max"]]></title><description><![CDATA[
<p>Topical. My hobby project this week (0) has been hyper-optimizing microgpt for M5's CPU cores (and comparing to MLX performance). Wonder if anything changes under the regime I've been chasing with these new chips.<p>0: <a href="https://entrpi.github.io/eemicrogpt/" rel="nofollow">https://entrpi.github.io/eemicrogpt/</a></p>
]]></description><pubDate>Tue, 03 Mar 2026 23:23:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47240563</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47240563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47240563</guid></item><item><title><![CDATA[New comment by easygenes in "EEmicroGPT: 19,000× faster microgpt training on a laptop CPU (loss vs. time)"]]></title><description><![CDATA[
<p>At scale, teams don’t win by owning more FLOPs; they win by shrinking the distance between hypothesis and measurement. I learned that the expensive way: running large training pipelines where iteration speed was the difference between <i>“we think this works”</i> and <i>“we know”</i> - building some of the most capable open-weights models available while leading the OpenOrca team in 2023. So I took Karpathy’s microgpt - a Transformer small enough to hold in your head - and made it fast enough that you can throw it around and learn its behavior by feel: change a learning rate, flip a batch size, tweak a layout, rerun, and immediately see what moved; full sweeps at interactive speed.<p>In this toy regime, performance is set by granularity. When the work is a pile of tiny matrix multiplies and elementwise kernels, overhead and launch/scheduling costs dominate, and peak throughput stops mattering. Laptop CPUs can be faster than Blackwell GPUs. That’s a regime inversion: the “faster” machine can lose because it spends too much time on ceremony per step, while a simpler execution path spends a higher fraction of wall time doing useful math. In that corner of the world, a laptop CPU can beat a datacenter GPU <i>for this workload</i> - not because it’s a better chip, but because it’s spending less time dispatching and more time learning. That inversion reshapes the early-time Pareto frontier (loss versus wall-clock), where you’re trading model capacity against steps-per-second under a fixed time budget.<p>Early-time is where most iteration happens. It’s where you decide whether an idea is promising, where you map stability boundaries, where you learn which knobs matter and which are placebo. If you can push the frontier down and left in the first few seconds, you don’t just finish runs faster - you change what you can notice. You turn “training” into feedback.<p>Inside, I take you on a tour of the AI engine room: how scalar autograd explodes into tens of thousands of tiny ops, how rewriting it as a handful of tight loops collapses overhead, how caches and SIMD lanes dictate what “fast” even means, why skipping useless work beats clever math, and how ISA-specific accelerators like Neon/SME2 shift the cost model again. The result is a ~19,000× speedup on a toy problem - not a parlor trick, but a microcosm of the same compounding process that drives real progress: better execution buys more experiments, more experiments buy better understanding, and better understanding buys better execution.</p>
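<p>To make the granularity point concrete, here is a toy sketch in Python (not code from the writeup, and not the eemicrogpt implementation) contrasting a micrograd-style scalar dot product, which allocates a graph node for every multiply and add, with the same arithmetic as one fused loop:</p>
<pre><code># Toy illustration of the granularity argument (not code from eemicrogpt):
# a scalar-autograd dot product materializes several graph nodes per element,
# while the fused version does the same math in one pass with no bookkeeping.
import random, time

class Value:
    """Minimal micrograd-style scalar node: every op allocates an object."""
    __slots__ = ("data", "grad", "_prev")
    def __init__(self, data, prev=()):
        self.data, self.grad, self._prev = data, 0.0, prev
    def __add__(self, other):
        return Value(self.data + other.data, (self, other))
    def __mul__(self, other):
        return Value(self.data * other.data, (self, other))

n = 200_000
xs = [random.random() for _ in range(n)]
ws = [random.random() for _ in range(n)]

# Scalar-autograd style: allocates new Value objects for every multiply and
# add, so runtime is dominated by object creation and graph bookkeeping.
t0 = time.perf_counter()
acc = Value(0.0)
for x, w in zip(xs, ws):
    acc = acc + Value(x) * Value(w)
t_graph = time.perf_counter() - t0

# Fused tight loop: identical arithmetic, no graph, no per-element allocation.
t0 = time.perf_counter()
total = 0.0
for x, w in zip(xs, ws):
    total += x * w
t_loop = time.perf_counter() - t0

print(f"graph: {t_graph:.3f}s  fused: {t_loop:.3f}s  "
      f"(results {acc.data:.6f} vs {total:.6f})")
</code></pre>
<p>On a laptop the fused loop is typically around an order of magnitude faster here, and that gap comes purely from dropping per-op allocation and dispatch; SIMD, cache blocking, and ISA-specific accelerators haven’t even entered the picture yet.</p>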
]]></description><pubDate>Tue, 03 Mar 2026 23:13:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47240462</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47240462</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47240462</guid></item><item><title><![CDATA[EEmicroGPT: 19,000× faster microgpt training on a laptop CPU (loss vs. time)]]></title><description><![CDATA[
<p>Article URL: <a href="https://entrpi.github.io/eemicrogpt/">https://entrpi.github.io/eemicrogpt/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47240461">https://news.ycombinator.com/item?id=47240461</a></p>
<p>Points: 11</p>
<p># Comments: 2</p>
]]></description><pubDate>Tue, 03 Mar 2026 23:13:00 +0000</pubDate><link>https://entrpi.github.io/eemicrogpt/</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47240461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47240461</guid></item><item><title><![CDATA[New comment by easygenes in "Microgpt"]]></title><description><![CDATA[
<p>Inspiring. Definitely got nerd-sniped by this. Now you can train it in under a second on one CPU core with no dependencies: <a href="https://github.com/Entrpi/eemicrogpt" rel="nofollow">https://github.com/Entrpi/eemicrogpt</a><p>There's a detailed optimization journey in the readme too.</p>
]]></description><pubDate>Mon, 02 Mar 2026 16:21:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47220052</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47220052</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47220052</guid></item><item><title><![CDATA[New comment by easygenes in "Training microgpt in milliseconds"]]></title><description><![CDATA[
<p>A heavily optimized single C file that can train the same model as Karpathy's microgpt to a lower loss in under a second on a single Mac core.</p>
]]></description><pubDate>Mon, 02 Mar 2026 15:58:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47219624</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47219624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47219624</guid></item><item><title><![CDATA[Training microgpt in milliseconds]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/Entrpi/eemicrogpt">https://github.com/Entrpi/eemicrogpt</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47219623">https://news.ycombinator.com/item?id=47219623</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 02 Mar 2026 15:58:19 +0000</pubDate><link>https://github.com/Entrpi/eemicrogpt</link><dc:creator>easygenes</dc:creator><comments>https://news.ycombinator.com/item?id=47219623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47219623</guid></item></channel></rss>