<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: heavymemory</title><link>https://news.ycombinator.com/user?id=heavymemory</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 22:14:19 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=heavymemory" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by heavymemory in "Is my blue your blue? (2024)"]]></title><description><![CDATA[
<p>always blue</p>
]]></description><pubDate>Tue, 28 Apr 2026 03:20:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47930131</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47930131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47930131</guid></item><item><title><![CDATA[New comment by heavymemory in "A Boy That Cried Mythos: Verification Is Collapsing Trust in Anthropic"]]></title><description><![CDATA[
<p>ai written</p>
]]></description><pubDate>Wed, 15 Apr 2026 23:02:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47786467</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47786467</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47786467</guid></item><item><title><![CDATA[Show HN: Evolved cells navigate a maze with no training or fitness function]]></title><description><![CDATA[
<p>Single-file C simulation. Cells on a grid eat soil, fight neighbours, and reproduce with mutations. No neural network. No backpropagation. No fitness function. No pathfinding. Evolution runs and behaviour emerges.<p>The left panel is the ecology where evolution happens. The right panel is a maze. I pick an evolved organism and drop one cell into the maze. Some genomes fail. Some explore the whole thing. Zero control after injection.<p>The cells don't have functions like 'move left' or 'eat food'. Each cell runs a small evolved gene network that reads local inputs and writes to registers. Physical consequences follow from the register values. The cell doesn't know it's navigating. Its internal chemistry just happens to produce movement.<p>~2000 lines of C. Single thread. Runs on a laptop.</p>
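<p>To make the "gene network writes registers, physics does the rest" idea concrete, here is a minimal Python sketch. This is not the project's code (the real thing is ~2000 lines of C); the genome shape, input set, and thresholds are all illustrative.</p>

```python
import random

# Hypothetical sketch of the "gene network -> registers -> consequences" loop.
# Inputs: soil here, soil ahead, neighbour present, own energy (all made up).
N_IN, N_REG = 4, 3

def random_genome(rng):
    # An evolved genome is just a weight matrix mapping inputs to registers;
    # mutation would perturb these weights between generations.
    return [[rng.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_REG)]

def step(genome, inputs):
    # Registers are weighted sums of local inputs; the cell has no
    # explicit "move" or "eat" functions.
    regs = [sum(w * x for w, x in zip(row, inputs)) for row in genome]
    # Physical consequences follow from register values:
    # reg0 -> move forward, reg1 -> turn, reg2 -> eat.
    return {
        "move": regs[0] > 0.5,
        "turn": regs[1] > 0.5,
        "eat":  regs[2] > 0.5,
    }

rng = random.Random(42)
genome = random_genome(rng)
actions = step(genome, [1.0, 0.2, 0.0, 0.7])
```

<p>Selection then acts only on survival and reproduction; navigation is never rewarded directly, which is why some genomes fail in the maze and others happen to explore it.</p>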
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47457782">https://news.ycombinator.com/item?id=47457782</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 20 Mar 2026 17:27:41 +0000</pubDate><link>https://streamin.me/v/6d53f7f2</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47457782</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47457782</guid></item><item><title><![CDATA[New comment by heavymemory in "Show HN: AI agent that runs real browser workflows"]]></title><description><![CDATA[
<p>Prompt injection is the same problem all agents face: ChatGPT Atlas, Claude cowork, openclaw, all of them. It's a known unsolved problem across the industry.<p>I mitigate it by giving the agent a fixed action set (no scripts, no direct API calls) and breaking tasks into focused subtasks so no single agent has broad scope. The LLM prioritises its own instructions over page content, but if someone managed to hijack it, the agent could still act inside authenticated sessions. Everything's visible in real time though, and all actions are logged, so you can see exactly what it's doing and kill it.<p>Practically speaking, I use it similar to how people use Zapier or n8n: you set up specific workflows and make sure you're only pointing it at sites you trust. If you're sending it to random unknown websites then yeah, there's more risk.<p>But even then, an attacker would need to know what apps you're authenticated with and what data the agent has access to. The chances of something actually happening are pretty low, but the risk is there. No one's fully solved this yet.</p>
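<p>For anyone curious what "fixed action set" means in practice, here is a toy Python sketch of the pattern (illustrative names, not the product's code): the agent can only request allowlisted actions, anything else is refused, and everything lands in an audit log.</p>

```python
# Toy sketch of a fixed action set: injected page content can at worst
# request an action that already exists; it cannot add new capabilities.
ALLOWED_ACTIONS = {"click", "type", "navigate", "read"}

def dispatch(action, args, log):
    if action not in ALLOWED_ACTIONS:
        # Unknown or injected actions (e.g. "run_script") are refused and logged.
        log.append(("refused", action))
        return False
    log.append((action, args))  # every action is visible in the audit log
    return True

log = []
dispatch("click", {"selector": "#submit"}, log)
dispatch("run_script", {"code": "fetch(...)"}, log)  # injection attempt
```

<p>The residual risk described above is exactly what this doesn't cover: a hijacked agent can still misuse the allowed actions inside an authenticated session, which is why the visible log and kill switch matter.</p>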
]]></description><pubDate>Tue, 10 Mar 2026 16:49:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47325757</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47325757</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47325757</guid></item><item><title><![CDATA[New comment by heavymemory in "Show HN: AI agent that runs real browser workflows"]]></title><description><![CDATA[
<p>Interesting. Part of why I built this was to avoid screen capture as the control layer. Once you’re taking screenshots, guessing what to click, moving the mouse, and repeating, it gets slow and brittle fast. Here the workflow is just described in text, executed in the browser, and saved for reuse.</p>
]]></description><pubDate>Tue, 10 Mar 2026 14:59:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47324169</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47324169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47324169</guid></item><item><title><![CDATA[New comment by heavymemory in "Show HN: AI agent that runs real browser workflows"]]></title><description><![CDATA[
<p>Yeah, instruction drift is a real problem in long agent chains. In this case the workflow gets decomposed into steps up front and each step is executed by a separate sub-agent.<p>So the model isn’t carrying the whole instruction chain across multiple steps, it’s just solving the current task. Similar pattern to what tools like Codex CLI or Claude Code do.</p>
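<p>A minimal Python sketch of that decomposition pattern (the planner and step-runner names are made up, not the product's code): the workflow is split up front, and each sub-agent call sees only its own instruction.</p>

```python
# Sketch of up-front decomposition: no single call carries the whole
# instruction chain, so drift cannot accumulate across steps.

def plan(workflow):
    # Hypothetical planner: split one workflow into focused steps.
    return [s.strip() for s in workflow.split(";") if s.strip()]

def run_step(instruction):
    # Stand-in for a sub-agent call: it sees one instruction, nothing else.
    return f"done: {instruction}"

results = [run_step(step) for step in plan("open inbox; extract listings; build sheet")]
```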
]]></description><pubDate>Tue, 10 Mar 2026 13:25:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47322951</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47322951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47322951</guid></item><item><title><![CDATA[New comment by heavymemory in "Show HN: AI agent that runs real browser workflows"]]></title><description><![CDATA[
<p>Linux and Windows support is on the way. I've designed it in a decoupled way, so it should be straightforward.<p>Just need to see if people find this version useful.</p>
]]></description><pubDate>Tue, 10 Mar 2026 12:18:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47322208</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47322208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47322208</guid></item><item><title><![CDATA[Show HN: AI agent that runs real browser workflows]]></title><description><![CDATA[
<p>I’ve been experimenting with letting an AI agent execute full workflows in a browser.<p>In this demo I gave it my CV and asked it to find matching jobs. It scans my inbox, opens the listings, extracts the details and builds a Google Sheet automatically.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47322046">https://news.ycombinator.com/item?id=47322046</a></p>
<p>Points: 4</p>
<p># Comments: 9</p>
]]></description><pubDate>Tue, 10 Mar 2026 11:59:43 +0000</pubDate><link>https://ghostd.io</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47322046</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47322046</guid></item><item><title><![CDATA[New comment by heavymemory in "I Audited Three Vibe Coded Products in a Single Day"]]></title><description><![CDATA[
<p>I audited three vibe-coded products that were posted on Reddit in a single afternoon. All three had critical security vulnerabilities. One was a live marketplace with real Stripe payments where any logged-in user could grant themselves admin and hijack payment routing with a single request. Another had development endpoints still in production that let anyone mark themselves as a paid user and give themselves unlimited credits. The third had its entire database of 681,000 salary records downloadable by anyone, with no authentication at all.<p>I wasn't looking for these. They appeared in my feed. I signed up as a normal user and opened dev tools.</p>
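<p>To make the admin-grant bug class concrete, here is a minimal Python sketch (illustrative, not code from any of the audited products): the fix is a server-side role check on the acting user, never trusting role fields sent by the client.</p>

```python
# The vulnerable pattern: an endpoint writes whatever role the request
# body asks for. The fix is one server-side check on the *actor*.
users = {"alice": {"role": "user"}, "root": {"role": "admin"}}

def set_role(actor, target, new_role):
    # Vulnerable versions skipped this check and trusted the request body.
    if users[actor]["role"] != "admin":
        raise PermissionError("only admins may change roles")
    users[target]["role"] = new_role

blocked = False
try:
    set_role("alice", "alice", "admin")  # a normal user promoting themselves
except PermissionError:
    blocked = True
```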
]]></description><pubDate>Fri, 20 Feb 2026 04:06:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47083598</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47083598</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47083598</guid></item><item><title><![CDATA[I Audited Three Vibe Coded Products in a Single Day]]></title><description><![CDATA[
<p>Article URL: <a href="https://fromtheprism.com/vibe-coding-audit">https://fromtheprism.com/vibe-coding-audit</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47083597">https://news.ycombinator.com/item?id=47083597</a></p>
<p>Points: 3</p>
<p># Comments: 3</p>
]]></description><pubDate>Fri, 20 Feb 2026 04:06:48 +0000</pubDate><link>https://fromtheprism.com/vibe-coding-audit</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47083597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47083597</guid></item><item><title><![CDATA[New comment by heavymemory in "Anthropic Raised $30B. Where Does It Go?"]]></title><description><![CDATA[
<p>This isn't another 'AI bubble bad' post. The article traces a specific financial contagion pathway that hasn't been covered elsewhere in a single piece. Tech companies are moving hundreds of billions in AI debt off their balance sheets into special purpose vehicles. That debt gets rated investment grade, securitised, and sold to pension funds and insurance companies.<p>The Bank of England's December 2025 Financial Stability Report explicitly flags this as a financial stability risk, comparing AI valuations to the dot-com bubble. Mercer, the UK's largest pension advisor, is warning defined benefit schemes about concentration risk and comparing the situation to the early 2000s telecom bust.<p>The collapse-relevant point: nobody can actually quantify how much pension money is exposed, because the entire structure is designed to be opaque. When AI revenue projections fail to materialise, the debt doesn't disappear. It sits in the retirement savings of ordinary workers who have no idea they're exposed.<p>The article traces the full chain from SPV creation to bond index to auto-enrolled workplace pension. This is a documented mechanism by which a tech correction could directly degrade the material conditions of millions of people.</p>
]]></description><pubDate>Tue, 17 Feb 2026 04:55:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47043846</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47043846</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47043846</guid></item><item><title><![CDATA[New comment by heavymemory in "Anthropic Raised $30B. Where Does It Go?"]]></title><description><![CDATA[
<p>DHH argued Facebook couldn't monetise. I'm not arguing Anthropic can't monetise. I'm arguing the debt structure financing AI infrastructure creates systemic risk regardless of whether individual companies succeed. Cisco survived the dot-com bust. The bondholders who financed the fibre didn't.</p>
]]></description><pubDate>Tue, 17 Feb 2026 04:54:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47043839</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47043839</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47043839</guid></item><item><title><![CDATA[Anthropic Raised $30B. Where Does It Go?]]></title><description><![CDATA[
<p>Article URL: <a href="https://fromtheprism.com/anthropic-30-billion">https://fromtheprism.com/anthropic-30-billion</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47040033">https://news.ycombinator.com/item?id=47040033</a></p>
<p>Points: 1</p>
<p># Comments: 3</p>
]]></description><pubDate>Mon, 16 Feb 2026 20:39:09 +0000</pubDate><link>https://fromtheprism.com/anthropic-30-billion</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=47040033</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47040033</guid></item><item><title><![CDATA[Show HN: A deterministic code-rewrite engine that learns from one example]]></title><description><![CDATA[
<p>I built a tool that learns structural code transformations from a single before/after example.<p>It isn't a transformer or an LLM, and it doesn't generate code.
It extracts the structural pattern between two snippets and compiles a deterministic rewrite rule. Same input → same output, every time.<p>Examples:
 • console.log(x) → logger.info(x) generalises to console.log(anything)
 • require("x") → import x from "x"
 • ReactDOM.render → createRoot
 • custom project conventions<p>The rules apply across an entire codebase or through an MCP plugin inside Claude Code, Cursor, or plain CLI.<p>It runs entirely on CPU and learns rules in real time.<p>Tool: <a href="https://hyperrecode.com" rel="nofollow">https://hyperrecode.com</a>
I’d really appreciate feedback on the approach, design, or failure cases.</p>
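<p>For the flavour of a deterministic AST rewrite, here is an analogous rule sketched with Python's ast module (the tool itself targets JavaScript; this just shows the shape: match a call pattern, rewrite it, same input always gives same output).</p>

```python
import ast

# Python analog of a rule like console.log(x) -> logger.info(x):
# here, print(x) -> logger.info(x), applied structurally, not by regex.
class PrintToLogger(ast.NodeTransformer):
    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == "print":
            node.func = ast.Attribute(
                value=ast.Name(id="logger", ctx=ast.Load()),
                attr="info", ctx=ast.Load())
        return node

def rewrite(src):
    # Parse -> transform -> unparse: deterministic end to end.
    tree = PrintToLogger().visit(ast.parse(src))
    return ast.unparse(ast.fix_missing_locations(tree))
```

<p>Because the rule is a pure function of the AST, running it twice on the same source yields byte-identical output, which is the property the tool is built around.</p>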
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46202949">https://news.ycombinator.com/item?id=46202949</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 09 Dec 2025 09:18:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46202949</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=46202949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46202949</guid></item><item><title><![CDATA[I built an AI that learns code transformations from examples (not generative)]]></title><description><![CDATA[
<p>I built a tool that learns structural code transformations from before/after examples.<p>Show it console.log(x) -> logger.info(x) and it learns the pattern, then applies it across your entire codebase. Deterministic: same input, same output, every time.<p>Not a transformer, not generative, not probabilistic. It parses code into an AST, extracts the 
structural pattern, and executes exact rewrites.<p>Works as an MCP plugin for Claude Code, Cursor, and Claude Desktop.<p>https://hyperrecode.com<p>Would love feedback.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46199665">https://news.ycombinator.com/item?id=46199665</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 09 Dec 2025 00:20:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46199665</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=46199665</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46199665</guid></item><item><title><![CDATA[New comment by heavymemory in "Nested Learning: A new ML paradigm for continual learning"]]></title><description><![CDATA[
<p>This still seems like gradient descent wrapped in new terminology. If all learning happens through weight updates, it's just rearranging where the forgetting happens.</p>
]]></description><pubDate>Mon, 08 Dec 2025 12:10:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46191332</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=46191332</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46191332</guid></item><item><title><![CDATA[New comment by heavymemory in "Nested Learning: A new ML paradigm for continual learning"]]></title><description><![CDATA[
<p>The idea is interesting, but I still don’t understand how this is supposed to solve continual learning in practice.<p>You’ve got a frozen transformer and a second module still trained with SGD, so how exactly does that solve forgetting instead of just relocating it?</p>
]]></description><pubDate>Mon, 08 Dec 2025 12:03:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46191281</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=46191281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46191281</guid></item><item><title><![CDATA[New comment by heavymemory in "Nested Learning: A new ML paradigm for continual learning"]]></title><description><![CDATA[
<p>Do you have a source for the NVIDIA “diffusion plus autoregression 6x faster” claim? I can’t find anything credible on that.</p>
]]></description><pubDate>Mon, 08 Dec 2025 11:59:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46191256</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=46191256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46191256</guid></item><item><title><![CDATA[New comment by heavymemory in "A small neural system that learns structural rewrite rules from 2 examples"]]></title><description><![CDATA[
<p>This started as a small experiment in structural rewriting. The basic idea is
that you give the system two or three before and after examples, press Teach,
and it learns that transformation and applies it to new inputs.<p>It is not an LLM and it is not a template system. There is a small learned
component that decides when a rule applies, and the rest is a deterministic
structural rewrite engine.<p>There are a few demo modes:<p>TEACH: learn a structural rule from examples
COMPOSE: apply several learnt rules together
TRANSFER: use the same rule in different symbolic domains
SIMPLIFY: multi step rewriting
CODEMOD: for example you can teach lodash.get to optional chaining from two examples<p>Once a rule is learnt it generalises to inputs that are deeper or shaped
differently from the examples you gave. Everything runs on CPU and learning
happens in real time.<p>Demo: <a href="https://re.heavyweather.io" rel="nofollow">https://re.heavyweather.io</a></p>
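<p>As a rough intuition for the TEACH step, here is a toy string-template version in Python (the real system works on structures, not strings; this only shows the teach-then-replay shape): whatever differs between the two before examples becomes the hole, and the rule replays on new inputs.</p>

```python
# Toy "teach from two examples": the part that varies across the two
# before strings becomes a placeholder; the rule then generalises.

def common_affixes(a, b):
    # Longest shared prefix and suffix; the middle is the hole.
    p = 0
    while p < min(len(a), len(b)) and a[p] == b[p]:
        p += 1
    s = 0
    while s < min(len(a), len(b)) - p and a[-1 - s] == b[-1 - s]:
        s += 1
    return a[:p], a[len(a) - s:]

def teach(before1, after1, before2, after2):
    bpre, bsuf = common_affixes(before1, before2)
    apre, asuf = common_affixes(after1, after2)
    def apply(src):
        if src.startswith(bpre) and src.endswith(bsuf):
            hole = src[len(bpre):len(src) - len(bsuf)]
            return apre + hole + asuf
        return src  # rule does not match; leave input unchanged
    return apply

rule = teach("console.log(x)", "logger.info(x)",
             "console.log(user)", "logger.info(user)")
```

<p>The structural version does the same anti-unification over trees instead of strings, which is what lets a learnt rule match inputs deeper or shaped differently from the examples.</p>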
]]></description><pubDate>Sat, 06 Dec 2025 15:47:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46174216</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=46174216</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46174216</guid></item><item><title><![CDATA[A small neural system that learns structural rewrite rules from 2 examples]]></title><description><![CDATA[
<p>Article URL: <a href="https://re.heavyweather.io">https://re.heavyweather.io</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46174215">https://news.ycombinator.com/item?id=46174215</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Sat, 06 Dec 2025 15:47:43 +0000</pubDate><link>https://re.heavyweather.io</link><dc:creator>heavymemory</dc:creator><comments>https://news.ycombinator.com/item?id=46174215</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46174215</guid></item></channel></rss>