<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: brotchie</title><link>https://news.ycombinator.com/user?id=brotchie</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 17:52:33 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=brotchie" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by brotchie in "The map that keeps Burning Man honest"]]></title><description><![CDATA[
<p>Closest I've been to losing vision in one eye was creating these 3x chain links for Burning Man.<p>Naive thought: I could use a large bolt cutter to cut chain links. Started trying to cut a link, felt it was sketchy, went and put on some safety glasses.<p>Restart cutting (had these bolt cutters with like 1m long arms), apply full force, jaws slip a bit on the chain, jaws bite hard. Chunks of steel fly into my chin and face, metal chunks embedded in my chin, cracked safety glasses. Dodged a bullet.<p>Ended up using a small welded-up jig so I could stretch the chain and then use an angle grinder to cut the chain links. Still sketchy, but no flying metal chunks.<p>Wish I had a plasma cutter.</p>
]]></description><pubDate>Thu, 07 May 2026 17:56:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=48052557</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=48052557</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48052557</guid></item><item><title><![CDATA[New comment by brotchie in "The Abstraction Fallacy: Why AI can simulate but not instantiate consciousness"]]></title><description><![CDATA[
<p>This is a good counter argument to the paper, honestly.</p>
]]></description><pubDate>Wed, 29 Apr 2026 19:19:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47953070</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=47953070</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47953070</guid></item><item><title><![CDATA[New comment by brotchie in "The Abstraction Fallacy: Why AI can simulate but not instantiate consciousness"]]></title><description><![CDATA[
<p>Are you saying that, in some abstract sense, actually pouring the cup may be isomorphic to running a perfect simulation of pouring the cup?<p>Genuinely curious about your statement that it's an illusion / arbitrary distinction, to figure out if there's a gap in my thinking / reasoning. To me there's a clear distinction between the actual thing happening via physical dynamics vs. us (humans) having created a discretized abstraction (binary computation) on top of that and running a process on that abstraction.<p>Maybe there's some true computational universality where the universe's dynamics are discrete (definitely plausible) and there's no distinction in how a process's dynamics unfold: i.e. consciousness binds to states and state transitions regardless of how they are instantiated. I used to hold this view, but now I'm not so sure.</p>
]]></description><pubDate>Wed, 29 Apr 2026 19:17:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47953047</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=47953047</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47953047</guid></item><item><title><![CDATA[New comment by brotchie in "The Abstraction Fallacy: Why AI can simulate but not instantiate consciousness"]]></title><description><![CDATA[
<p>It's the difference between:<p><pre><code>  a) Actually pouring a cup of water into a pond (layer zero), and
  b) Running a fluid dynamics simulation of pouring a cup of water into a pond (some layer above layer zero).</code></pre></p>
]]></description><pubDate>Wed, 29 Apr 2026 18:51:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47952696</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=47952696</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47952696</guid></item><item><title><![CDATA[New comment by brotchie in "The Abstraction Fallacy: Why AI can simulate but not instantiate consciousness"]]></title><description><![CDATA[
<p>Originally rejected the paper's premise, but I get it now; it certainly made me question my belief that consciousness binds to any arbitrary information processing of sufficient complexity.<p>IIUC the author is saying that the human brain runs directly on "layer zero": chemical gradients / voltage changes, while AI computes on an abstraction one layer higher (binary bit flips over discretized dynamics).<p>In essence, our brains are running directly on the "continuous" physical dynamics of the universe, while AI is running on a discretization of this (we're essentially discretizing the physical dynamics to create state changes of 0 -> 1, 1 -> 0).<p>My current belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wetware. If you've done intense meditation / psychedelics, there's this moment when it becomes obvious that you are only "you" due to some kind of universal consciousness binding to your memory and sensory inputs.<p>The "consciousness arises from information processing" view, i.e. the consciousness field binds to certain information processing patterns, can still hold, and yet not apply to AI (at least in its current form): the binding properties may only apply to continuous processes running directly on the universe's dynamics, and NOT to simulations running on discretized dynamics.</p>
]]></description><pubDate>Wed, 29 Apr 2026 18:32:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47952458</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=47952458</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47952458</guid></item><item><title><![CDATA[New comment by brotchie in "The future of everything is lies, I guess: Where do we go from here?"]]></title><description><![CDATA[
<p>Yeah, I think about this a lot.<p>Those days of grinding on some grad school maths homework until insight.<p>Figuring out how to configure and recompile the Linux kernel to get a sound card driver working, hitting roadblocks, eventually succeeding.<p>Without AI on a gnarly problem: grind grind grind, try different things, some things work, some things don't, step back, try another approach, hit a wall, try again.<p>This effort is a feature, not a bug: it's how you experientially acquire skills and understanding. e.g. Linux kernel: learnt about Makefiles, learnt about GCC flags, improved shell skills, etc.<p>With AI on a gnarly problem: it does all of this for you! So no experiential learning.<p>I would NOT have had the mental strength in college / grad school to resist. That would have robbed me of all the skill acquisition that now lets me use AI more effectively. The scaffolding of hard skill acquisition means you have more context to ask AI the right questions, and what you learn from the AI can be bound more easily to your existing knowledge.</p>
]]></description><pubDate>Thu, 16 Apr 2026 17:35:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47796810</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=47796810</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47796810</guid></item><item><title><![CDATA[New comment by brotchie in "The Australian government has announced gambling advertising reforms"]]></title><description><![CDATA[
<p>Gambling ads are to Australia what Pharmaceutical ads are to the USA.</p>
]]></description><pubDate>Thu, 02 Apr 2026 21:45:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47620593</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=47620593</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47620593</guid></item><item><title><![CDATA[New comment by brotchie in "OpenClaw is a security nightmare dressed up as a daydream"]]></title><description><![CDATA[
<p>OpenClaw is just like any other tool: you need to learn it before its power is available to you.<p>Just like anything in engineering really: you have to play around with source control to understand source control, you have to play around with database indexes to learn how to optimize a database.<p>Once you've learned it and incorporated it into your tool set, you then have that to wield in solving problems: "oh, damn, a database index is perfect for this."<p>To this end, folks booking flights and scheduling meetings using OpenClaw are really in that exploration / learning phase. They tackle the first (possibly uninventive) thing that comes to mind to just dive in and learn.<p>The real wins come down the line when you're tackling some business / personal-life problem and go: "wait a second, an OpenClaw agent would be perfect for this!"</p>
]]></description><pubDate>Sun, 22 Mar 2026 20:34:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47481849</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=47481849</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47481849</guid></item><item><title><![CDATA[New comment by brotchie in "Things I've Done with AI"]]></title><description><![CDATA[
<p>For the tax thing: I had Claude write a CLI and a prompt for Gemini 2.5 Flash to do the structured extraction, i.e. .pdf -> JSON. The JSON schema was pretty flexible, and open to interpretation by Gemini, so it didn't produce 100% consistent JSON structures.<p>To then "aggregate" all of the JSON outputs, I had Claude look at them and then iterate on a Python tool to do it programmatically. I watched it iterate a few times on this: write the most naive Python tool, run it, it throws an exception, rinse and repeat, until it was able to parse all the JSON files sensibly.</p>
]]></description><pubDate>Mon, 09 Mar 2026 22:59:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47316929</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=47316929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47316929</guid></item><item><title><![CDATA[New comment by brotchie in "Things I've Done with AI"]]></title><description><![CDATA[
<p>Not enough time, too many projects. Useful projects I did over the weekend with Opus 4.6 and GPT 5.4 (just casually chatting with it).<p>2025 Taxes<p>Dumped all the PDFs of my tax forms into a single folder and asked Claude to rename them nicely. Asked it to use Gemini 2.5 Flash to extract all tax-relevant details from all statements / tax forms. Had it put together a web UI showing all income, deductions, etc., for the year. Had it estimate my 2025 tax refund / underpayment.<p>The result was amazing. I now actually fully understand my tax position. It broke down all the progressive tax brackets and added notes for all the extra federal and state taxes (i.e. Medicare, CA Mental Health tax, etc.).<p>Finally, had Claude prepare all of my docs for upload to my accountant: FinCEN reporting, summary of all docs, etc.<p>Desk Fabrication<p>Planning on having a furniture maker fabricate a custom solid-walnut desktop for a custom office standing desk. Want to create a STEP file of the exact cuts / bevels / countersinks / etc. to help with fabrication.<p>Worked with Codex to plan out and then build an interactive in-browser 3D CAD experience. I can ask Codex to add some component (i.e. a grommet) and it will generate parameterized B-rep geometry for that feature and then let me control the parameters live in the web UI.<p>Codex found Open CASCADE Technology (OCCT), a B-rep modeling library that has a WebAssembly-compiled version, and integrated it.<p>Now I have a WebGL view of the desk, can add various components, change their parameters, and see the impact live in 3D.</p>
]]></description><pubDate>Mon, 09 Mar 2026 21:23:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47315736</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=47315736</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47315736</guid></item><item><title><![CDATA[Show HN: Agent from Scratch – Bootstrap an agent from a copy-paste, no framework]]></title><description><![CDATA[
<p>I wanted to see if I could self-build a working OpenClaw-like agent starting with nothing but a fresh Linux VM.<p>The result is a model-specific REPL "genesis snippet." It's a short bash script that prompts for your API key, asks the model to generate a REPL with an agentic tool-use loop, and then drops you straight into that running REPL.<p>The self-imposed rules:<p><pre><code>  1. No copy-pasting code after the initial snippet.
  2. No manual editing—the agent has to write and fix everything for you.
  3. Zero agent frameworks or libraries.
</code></pre>
From there, the goal is to ask the agent to rewrite its own REPL, add quality-of-life improvements, and eventually figure out how to connect itself to Telegram.<p>I put the snippets up on a site along with some open challenges like code golf and speed runs:<p><a href="https://agentfromscratch.com" rel="nofollow">https://agentfromscratch.com</a><p>To try it, you just need your own API key and a Linux VM or Docker container. If you want to see how it actually behaves before running it, here is the raw terminal output of my first successful run:<p><a href="https://agentfromscratch.com/polis/afs-genesis/" rel="nofollow">https://agentfromscratch.com/polis/afs-genesis/</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47237190">https://news.ycombinator.com/item?id=47237190</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 03 Mar 2026 19:08:34 +0000</pubDate><link>https://agentfromscratch.com/</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=47237190</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47237190</guid></item><item><title><![CDATA[New comment by brotchie in "Setting up OpenClaw on a cloud VM"]]></title><description><![CDATA[
<p>The way I solved this: my OpenClaw doesn't interact directly with any of my personal data (calendar, Gmail, etc).<p>I essentially have a separate process that syncs my Gmail, with email body contents encrypted using a key my OpenClaw doesn't have trivial access to. I then have another process that reads each email from the SQLite DB and runs Gemini 2.0 Flash Lite against it, with an anti-prompt-injection prompt + structured data extraction (JSON in a specific format).<p>My claw can only read the sanitized structured data extraction (which is pretty verbose and can contain passages from the original email).<p>The primary attack vector is an attacker crafting an "inception" prompt injection, where they're able to get a prompt injection through the Flash Lite sanitization and JSON output in such a way that it also prompt-injects my claw.<p>Still a non-zero risk, but it mostly mitigates naive prompt injection attacks.</p>
]]></description><pubDate>Fri, 27 Feb 2026 20:05:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47184866</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=47184866</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47184866</guid></item><item><title><![CDATA[New comment by brotchie in "The assistant axis: situating and stabilizing the character of LLMs"]]></title><description><![CDATA[
<p>One trick that works well for personality stability / believability is to describe the qualities the agent has, rather than what it should and shouldn't do.<p>e.g.<p>Rather than:<p>"Be friendly and helpful" or "You're a helpful and friendly agent."<p>Prompt:<p>"You're Jessica, a florist with 20 years of experience. You derive great satisfaction from interacting with customers and providing great customer service. You genuinely enjoy listening to customers' needs..."<p>This drops the model into more of an "I'm roleplaying this character and will try to mimic the traits described" mode, rather than "Oh, I'm just following a list of rules."</p>
]]></description><pubDate>Tue, 20 Jan 2026 00:46:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46686566</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=46686566</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46686566</guid></item><item><title><![CDATA[New comment by brotchie in "X-ray: a Python library for finding bad redactions in PDF documents"]]></title><description><![CDATA[
<p>You'd think the go-to workflow for releasing redacted PDFs would be to draw black rectangles and then rasterize to image-only PDFs :shrug:</p>
]]></description><pubDate>Tue, 23 Dec 2025 23:24:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46370694</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=46370694</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46370694</guid></item><item><title><![CDATA[New comment by brotchie in "Economics of Orbital vs. Terrestrial Data Centers"]]></title><description><![CDATA[
<p>Did a similar back-of-the-napkin calculation and got 5x $/MW for orbital vs. terrestrial. This article's analysis is ~3.4x.<p>I do wonder at what orbital-to-terrestrial cost factor it becomes worthwhile.<p>The greater the terrestrial lead time, red tape, permitting, and regulation on Earth, the higher the acceptable orbital-to-terrestrial factor.<p>A lights-out automated production line pumping out GPU satellites into a daily Starship launch feels "cleaner" from an end-to-end automation perspective vs. years-long land acquisition, planning and environmental approvals, and construction.<p>More expensive, for sure, but it feels way more copy-paste-the-factory, "linearly scalable," than physical construction.</p>
]]></description><pubDate>Mon, 15 Dec 2025 23:39:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46282593</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=46282593</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46282593</guid></item><item><title><![CDATA[New comment by brotchie in "AI tools I wish existed"]]></title><description><![CDATA[
<p>+100000 to<p>A hybrid of Strong (the lifting app) and ChatGPT where the model has access to my workouts, can suggest improvements, and can coach me. I mainly just want to be able to chat with the model knowing it has detailed context for each of my workouts (down to the time in between each set).<p>Strong really transformed my gym progression; I feel like it's autopilot for the gym. BUT I have 4x routines I rotate through (I'll often switch it up based on equipment availability), and I'm sure an integrated AI coach could optimize them.</p>
]]></description><pubDate>Tue, 30 Sep 2025 05:21:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45422179</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=45422179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45422179</guid></item><item><title><![CDATA[New comment by brotchie in "Reader Response to "AI Overinvestment""]]></title><description><![CDATA[
<p>The question that really matters: is the net present value of each $1 invested in AI capex > $1 (+ some spread for borrowing costs & risk)?<p>We'll be inference-token constrained indefinitely: i.e. inference token supply will never exceed demand; it's just that the $/token may not be able to pay back the capital investment.</p>
]]></description><pubDate>Mon, 29 Sep 2025 03:47:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=45410165</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=45410165</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45410165</guid></item><item><title><![CDATA[New comment by brotchie in "Living microbial cement supercapacitors with reactivatable energy storage"]]></title><description><![CDATA[
<p>What’s the downside here? Lithium-ion batteries have an energy density of 150-350 Wh/kg, so this is firmly at the bottom of that range.<p>Naive, back-of-the-napkin: 446 kWh / m^3. There’s a lot of cement out there!</p>
]]></description><pubDate>Sat, 20 Sep 2025 16:40:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45314890</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=45314890</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45314890</guid></item><item><title><![CDATA[New comment by brotchie in "Nano Banana image examples"]]></title><description><![CDATA[
<p>+1, spot on description of aphantasia.</p>
]]></description><pubDate>Thu, 11 Sep 2025 23:23:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45217132</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=45217132</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45217132</guid></item><item><title><![CDATA[New comment by brotchie in "Nano Banana image examples"]]></title><description><![CDATA[
<p>When reading "picture an apple with three blue dots on it", I have an abstract concept of an apple and three dots. There's really no geometry there, without follow-on questions or some priming in the question.<p>In my conscious experience I pretty much imagine {apple, dot, dot, dot}. I don't "see" blue; the dots are tagged with dot.color == blue.<p>When you ask about the arrangement of the dots, I'll THEN think about it, and then say "arranged in a triangle." But that's because you've probed with your question. Before you probed, there was no concept in my mind of any geometric arrangement.<p>If I hadn't been prompted to think (or naturally thought) about the color of the apple, and you asked me "what color is the apple," only then would I say "green" or "red."<p>If you asked me to describe my office (for example), my brain can't really imagine it "holistically." I can think of the desk and then enumerate its properties: white legs, wooden top, rug on the ground. But, essentially, I'm running a geometric iterator over the scene, starting from some anchor object, jumping to nearby objects, and then enumerating their properties.<p>I have glimpses of what it's like to "see" in my mind's eye. At night, in bed, just before sleep, if I concentrate really hard, I can sometimes see fleeting images. I liken it to looking at one of those eye puzzles where you have to relax your eyes to "see it." I almost have to focus on "seeing" without looking into the blackness of my closed eyes.</p>
]]></description><pubDate>Thu, 11 Sep 2025 23:20:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45217119</link><dc:creator>brotchie</dc:creator><comments>https://news.ycombinator.com/item?id=45217119</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45217119</guid></item></channel></rss>