<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: crackalamoo</title><link>https://news.ycombinator.com/user?id=crackalamoo</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 19 Apr 2026 04:28:58 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=crackalamoo" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Show HN: Interactive first-principles climate physics simulation with explainer]]></title><description><![CDATA[
<p>A 3D visualizer of Earth's climate in the browser. Introduces physics step by step so you can watch each process unfold as a piece of the overall climate.<p>I built this over 6 months, almost entirely with AI, mostly Opus 4.6 in Claude Code. SF weather made no sense to me (Barely any seasons? September is the warmest month?) and I wanted to understand it better myself. This is a polished version of the app I'd want for myself, adding physics layer by layer to isolate the impact of each piece, and using an LLM to analyze and explain the data.<p>The models know more about math, physics, and software than I do, but especially on the physics side they have terrible intuition. Claude can "get the error relative to observations down to 4 °C" just fine, except it'll totally hack and overfit the physics along the way. Subagents tasked with subjectively verifying that "the physics is sound, no overfitting" didn't really work either, so I had to review the physics code manually.<p>The entire model is first principles; no machine learning and no observed data at all, except fundamental inputs like the sun's radiation and an elevation map. But after a while, it started to feel like "machine learning in slow motion": instead of an ML model training its parameters, Claude and I were choosing parameters by hand. Some amount of parameter tuning (within a physical range of uncertainty) to match observations is inevitable.<p>The in-app LLM layer has a tool to evaluate arbitrary math expressions over the simulated data using an AST, which was also pretty fun to build; a sketch of the idea is below.<p>Repo: <a href="https://github.com/crackalamoo/building-earth" rel="nofollow">https://github.com/crackalamoo/building-earth</a></p>
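<p>For the curious, here is a minimal sketch of that AST idea in Python (the repo is a browser app, so this is illustrative rather than its actual code, and the variable and function names are hypothetical): parse the expression, then walk the tree against an allowlist of operators and functions, so the LLM can compute over the data but can't execute arbitrary code.</p>
<pre><code>
# Minimal sketch (not the repo's actual implementation) of evaluating a
# math expression over simulated data by walking a parsed AST.
import ast, math, operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}
FUNCS = {"sqrt": math.sqrt, "mean": lambda xs: sum(xs) / len(xs)}

def evaluate(expr, data):
    """Evaluate expr against a dict of simulated variables."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        if isinstance(node, ast.Call):
            return FUNCS[node.func.id](*[walk(a) for a in node.args])
        if isinstance(node, ast.Name):
            return data[node.id]
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("disallowed expression: " + type(node).__name__)
    return walk(ast.parse(expr, mode="eval"))

# evaluate("mean(temp) - 273.15", {"temp": [285.0, 290.0, 288.0]})  # 14.516...
</code></pre>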
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47691681">https://news.ycombinator.com/item?id=47691681</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 08 Apr 2026 15:36:50 +0000</pubDate><link>https://earth.crackalamoo.com</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=47691681</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47691681</guid></item><item><title><![CDATA[New comment by crackalamoo in "Show HN: Can an AI model fit on a single pixel?"]]></title><description><![CDATA[
<p>This is super fun! Very elegant idea<p>Has anyone done this for larger neural nets? Is there a way to extract some kind of pattern or is the image just noise no matter how you construct it? I'd be curious to see something like that</p>
]]></description><pubDate>Wed, 08 Apr 2026 09:02:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47687365</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=47687365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47687365</guid></item><item><title><![CDATA[New comment by crackalamoo in "AI CEO – Replace your boss before they replace you"]]></title><description><![CDATA[
<p>Isn't this kind of the same as an AI copilot, just with higher autonomy?<p>I think the limiting factor is that the AI still isn't good enough to be fully autonomous, so it needs your input. That's why it's still in copilot form</p>
]]></description><pubDate>Thu, 27 Nov 2025 19:27:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46072451</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=46072451</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46072451</guid></item><item><title><![CDATA[New comment by crackalamoo in "The Bitter Lesson of LLM Extensions"]]></title><description><![CDATA[
<p>This seems like a solvable engineering problem. For example, you could have a lightweight subagent with its own context for reading the skills and determining which to use</p>
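<p>A hedged sketch of what I mean, in Python; "complete" stands in for any cheap LLM call, and everything here is hypothetical rather than an existing API:</p>
<pre><code>
# Hypothetical router subagent: picks a skill in its own fresh context,
# so the full skill descriptions never enter the main agent's window.
def pick_skill(task, skills, complete):
    """skills: {name: one-line description}; complete: any small LLM call."""
    menu = "\n".join(name + ": " + desc for name, desc in skills.items())
    prompt = ("Task: " + task + "\n\nAvailable skills:\n" + menu +
              "\n\nReply with the single best skill name, or NONE.")
    choice = complete(prompt).strip()
    return choice if choice in skills else None

# The main agent then loads only the chosen skill's full instructions.
</code></pre>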
]]></description><pubDate>Wed, 26 Nov 2025 15:19:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46058208</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=46058208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46058208</guid></item><item><title><![CDATA[New comment by crackalamoo in "Personal blogs are back, should niche blogs be next?"]]></title><description><![CDATA[
<p>I also use pure HTML and CSS (and a touch of hand-written JavaScript)</p>
]]></description><pubDate>Sat, 22 Nov 2025 07:53:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46012996</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=46012996</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46012996</guid></item><item><title><![CDATA[New comment by crackalamoo in "Personal blogs are back, should niche blogs be next?"]]></title><description><![CDATA[
<p>I'm a little skeptical of AEO. What's the point if AI users just ask the LLM to retrieve the information and never visit your blog? I almost never click the links ChatGPT gives me<p>Maybe it makes sense if you're selling a product or service, but I don't see the appeal of AEO as the new SEO. Maybe I'm missing something?</p>
]]></description><pubDate>Sat, 22 Nov 2025 07:52:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46012988</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=46012988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46012988</guid></item><item><title><![CDATA[New comment by crackalamoo in "Personal blogs are back, should niche blogs be next?"]]></title><description><![CDATA[
<p>My two cents: if you're not doing anything too political or controversial, it's fine or even beneficial to mix in the occasional personal essay with the professional.<p>After all, many of your readers are also human beings with lives, maybe even lives similar to yours based on your professional content. (The rest of your readers are LLMs.) Your readers might appreciate your perspectives on random life things or just getting to see what their favorite blogger is up to.</p>
]]></description><pubDate>Sat, 22 Nov 2025 07:48:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46012972</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=46012972</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46012972</guid></item><item><title><![CDATA[New comment by crackalamoo in "Claude Memory"]]></title><description><![CDATA[
<p>I make heavy use of the "temporary chat" feature on ChatGPT. It's great whenever I need a fresh context or need to iteratively refine a prompt, and I can use the regular chat when I want it to have memory.<p>Granted, this isn't the best UX because I can't create a fresh context chat without making it temporary. But I'd say it allows enough choice that overall having the memory feature is a big plus.</p>
]]></description><pubDate>Fri, 24 Oct 2025 06:14:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45691388</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=45691388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45691388</guid></item><item><title><![CDATA[New comment by crackalamoo in "Doing well in your courses: Andrej's advice for success (2013)"]]></title><description><![CDATA[
<p>Grade inflation is common at many schools. And many difficult technical classes grade on a curve, sometimes to the point where you can get an A with an 85%.<p>But yeah, I still don't see how an 85% average would be a 4.0.</p>
]]></description><pubDate>Mon, 20 Oct 2025 06:42:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=45640641</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=45640641</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45640641</guid></item><item><title><![CDATA[New comment by crackalamoo in "Show HN: Mcp-use – Connect any LLM to any MCP"]]></title><description><![CDATA[
<p>Why is dependence on LangChain an issue?<p>Not that I disagree necessarily, just wondering if there's a consensus that LangChain is too opinionated/bloated/whatever for real industry applications, or if there's some other reason.</p>
]]></description><pubDate>Thu, 31 Jul 2025 20:48:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44749947</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=44749947</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44749947</guid></item><item><title><![CDATA[New comment by crackalamoo in "GitHub CEO: manual coding remains key despite AI boom"]]></title><description><![CDATA[
<p>The recent Apple paper seemed pretty flawed. Anthropic/Open Philanthropy did a rebuttal paper, The Illusion of the Illusion of Thinking.<p>> Do they reason? No. (Before you complain, please first define reason).<p>Defining reasoning is the problem. No, they don't reason in the same way as humans. But they seem to be able to go through reasoning steps in some important way. It's like asking if submarines swim.</p>
]]></description><pubDate>Tue, 24 Jun 2025 04:34:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44362883</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=44362883</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44362883</guid></item><item><title><![CDATA[New comment by crackalamoo in "GitHub CEO: manual coding remains key despite AI boom"]]></title><description><![CDATA[
<p>Not sure how tools like Cursor work under the hood, but this seems like an easy model context engineering problem to fix.</p>
]]></description><pubDate>Tue, 24 Jun 2025 04:28:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44362858</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=44362858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44362858</guid></item><item><title><![CDATA[New comment by crackalamoo in "GitHub CEO: manual coding remains key despite AI boom"]]></title><description><![CDATA[
<p>I agree. And if human civilization survives, concerns about energy and resources will be only short term on the scale of civilization, especially as we make models more efficient.<p>The human brain runs on just 20 watts, so it seems to me that human-level intelligence is possible in principle: we can spend much more power to make up for the billions of years of evolutionary refinement the brain has had.</p>
]]></description><pubDate>Tue, 24 Jun 2025 04:26:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44362851</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=44362851</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44362851</guid></item><item><title><![CDATA[New comment by crackalamoo in "GitHub CEO: manual coding remains key despite AI boom"]]></title><description><![CDATA[
<p>I agree humans only rarely step outside the circle, but I do have this intuition that some people sometimes do, whereas LLMs never do. This distinction seems important over long time horizons when thinking about LLM vs human work.<p>But I can't quite articulate why I believe LLMs never step outside the circle, because they are seeded with some random noise via temperature. I could just be wrong.</p>
]]></description><pubDate>Tue, 24 Jun 2025 04:23:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44362835</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=44362835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44362835</guid></item><item><title><![CDATA[New comment by crackalamoo in "Can your terminal do emojis? How big?"]]></title><description><![CDATA[
<p>Yeah, unfortunately I feel like despite all the advances in Unicode tech, my modern terminal (macOS) still bugs out badly with emoji and certain special characters.<p>I'm not sure how/when codepoints matter for wcwidth: my terminal handles many characters that take more than one byte in UTF-8, like é, and even multi-codepoint Arabic text, just fine.</p>
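<p>A quick illustration of where codepoints do matter (using the third-party Python wcwidth package; widths come from its Unicode tables, not from UTF-8 byte counts):</p>
<pre><code>
# pip install wcwidth
from wcwidth import wcswidth

print(wcswidth("\u00e9"))      # 1: precomposed é is one narrow codepoint
print(wcswidth("e\u0301"))     # 1: the combining accent has width 0
print(wcswidth("\U0001F600"))  # 2: emoji are double-width
# Terminals tend to break when their width tables disagree with the font,
# or on multi-codepoint emoji like ZWJ sequences and skin-tone modifiers.
</code></pre>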
]]></description><pubDate>Tue, 24 Jun 2025 04:20:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44362822</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=44362822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44362822</guid></item><item><title><![CDATA[New comment by crackalamoo in "What happens when people don't understand how AI works"]]></title><description><![CDATA[
<p>Yes, 100% this. And even more so for reasoning models, which have a different kind of RL workflow based on reasoning tokens. I expect to see research labs come out with more ways to use RL with LLMs in the future, especially for coding.<p>I feel it is quite important to dispel this idea given how widespread it is, even though it does gesture at the truth of how LLMs work in a way that's convenient for laypeople.<p><a href="https://www.harysdalvi.com/blog/llms-dont-predict-next-word/" rel="nofollow">https://www.harysdalvi.com/blog/llms-dont-predict-next-word/</a></p>
]]></description><pubDate>Mon, 09 Jun 2025 18:14:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44227392</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=44227392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44227392</guid></item><item><title><![CDATA[New comment by crackalamoo in "Compiler Explorer and the promise of URLs that last forever"]]></title><description><![CDATA[
<p>I use /foo/bar/ with the trailing slash because it works better with relative URLs for resources like images. I could also use /foo/bar/index.html but I find the former to be cleaner</p>
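<p>For anyone curious, standard URL resolution (RFC 3986) drops the last path segment, which is exactly why the trailing slash matters; a quick check in Python:</p>
<pre><code>
from urllib.parse import urljoin

print(urljoin("https://example.com/foo/bar/", "img.png"))
# https://example.com/foo/bar/img.png
print(urljoin("https://example.com/foo/bar", "img.png"))
# https://example.com/foo/img.png ("bar" is treated as a file and replaced)
</code></pre>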
]]></description><pubDate>Wed, 28 May 2025 17:56:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=44118789</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=44118789</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44118789</guid></item><item><title><![CDATA[Someone using AI won't take your job. AI will]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.harysdalvi.com/blog/ai-will-take-your-job/">https://www.harysdalvi.com/blog/ai-will-take-your-job/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44117032">https://news.ycombinator.com/item?id=44117032</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 28 May 2025 15:25:17 +0000</pubDate><link>https://www.harysdalvi.com/blog/ai-will-take-your-job/</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=44117032</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44117032</guid></item><item><title><![CDATA[New comment by crackalamoo in "Better at everything: how AI could make human beings irrelevant"]]></title><description><![CDATA[
<p>I don't think this article is paywalled? You can just click "I'll do it later" on the banner</p>
]]></description><pubDate>Mon, 05 May 2025 16:47:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43897007</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=43897007</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43897007</guid></item><item><title><![CDATA[Better at everything: how AI could make human beings irrelevant]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.theguardian.com/books/2025/may/04/the-big-idea-can-we-stop-ai-making-humans-obsolete">https://www.theguardian.com/books/2025/may/04/the-big-idea-can-we-stop-ai-making-humans-obsolete</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43895095">https://news.ycombinator.com/item?id=43895095</a></p>
<p>Points: 2</p>
<p># Comments: 2</p>
]]></description><pubDate>Mon, 05 May 2025 13:48:13 +0000</pubDate><link>https://www.theguardian.com/books/2025/may/04/the-big-idea-can-we-stop-ai-making-humans-obsolete</link><dc:creator>crackalamoo</dc:creator><comments>https://news.ycombinator.com/item?id=43895095</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43895095</guid></item></channel></rss>