<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gwerbin</title><link>https://news.ycombinator.com/user?id=gwerbin</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 09:24:07 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gwerbin" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gwerbin in "Regression: malware reminder on every read still causes subagent refusals"]]></title><description><![CDATA[
<p>What's your Pi setup?</p>
]]></description><pubDate>Wed, 29 Apr 2026 02:56:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47943704</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47943704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47943704</guid></item><item><title><![CDATA[New comment by gwerbin in "Regression: malware reminder on every read still causes subagent refusals"]]></title><description><![CDATA[
<p>Revenue-positive bugs are the stickiest features.</p>
]]></description><pubDate>Wed, 29 Apr 2026 02:54:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47943696</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47943696</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47943696</guid></item><item><title><![CDATA[New comment by gwerbin in "An AI agent deleted our production database. The agent's confession is below"]]></title><description><![CDATA[
<p>The thing that seems to bring up these extremely unlikely destructive token sequences is letting agents just run for a long time. I wonder if some kind of weird subliminal chaos signal develops in the context when the AI repeatedly consumes its own output.<p>Personally I don't even let my agent run a single shell command without asking for approval. That's partly because I haven't set up a sandbox yet, but even with a sandbox there is a huge "hazard surface" to be mindful of.<p>I wonder if AI agent harnesses should have some kind of built-in safety measure where, instead of simply compacting context and proceeding, they actually shut down the agent and restart it.<p>That said, I also think even the most advanced agents generate code that I would never want to base a business on, so the whole thing seems ridiculous to me. This article has the same energy as losing money on NFTs.</p>
]]></description><pubDate>Mon, 27 Apr 2026 01:05:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47916624</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47916624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47916624</guid></item><item><title><![CDATA[New comment by gwerbin in "An AI agent deleted our production database. The agent's confession is below"]]></title><description><![CDATA[
<p>The author definitely deserves a lot of blame here and clearly doesn't understand AI well enough to have a coherent opinion on AI safety.<p>But Railway bears some responsibility too because, at least if the author is to be believed, it looks like they provide no safety tools for users, regardless of whether they use AI or not. You should be able to generate scoped API tokens. That's just good practice. A human isn't likely to have made this particular mistake, but it doesn't seem out of the question either.</p>
]]></description><pubDate>Mon, 27 Apr 2026 00:56:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47916559</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47916559</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47916559</guid></item><item><title><![CDATA[New comment by gwerbin in "An AI agent deleted our production database. The agent's confession is below"]]></title><description><![CDATA[
<p>Yeah, that's the typical junior engineer scenario, right? Run a command that wasn't meant to be destructive and accidentally destroy something. This is different. The AI agent went on some kind of wild goose chase of fixing problems, and eventually the most probable token sequence ended up at "delete this database". This is more like if your senior engineer with extreme ADHD ate a bunch of acid before sitting down to work.</p>
]]></description><pubDate>Mon, 27 Apr 2026 00:42:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47916462</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47916462</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47916462</guid></item><item><title><![CDATA[New comment by gwerbin in "An AI agent deleted our production database. The agent's confession is below"]]></title><description><![CDATA[
<p>But at least you have a 5000 LoC project on Github that deletes LinkedIn profiles!</p>
]]></description><pubDate>Mon, 27 Apr 2026 00:34:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47916414</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47916414</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47916414</guid></item><item><title><![CDATA[New comment by gwerbin in "An AI agent deleted our production database. The agent's confession is below"]]></title><description><![CDATA[
<p>Call me crazy but does AI not seem like the root cause here? At the beginning of the post they say that the AI agent found a file with what they thought was a narrowly scoped API token, and they very clearly state that they never would have given an AI full access if they realized it had the ability to do stuff like this with that token.<p>So while the AI did something significantly worse than anything a hapless junior engineer might be expected to do, it sounds like the same thing could've resulted from an unsophisticated security breach or accidental source code leak.<p>Is AI a part of the chain of events? Absolutely. Is it the sole root cause? Seems like no.</p>
]]></description><pubDate>Sun, 26 Apr 2026 19:30:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47913201</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47913201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47913201</guid></item><item><title><![CDATA[New comment by gwerbin in "I spent 6 years building my Kanban as I hated how managers run the boards"]]></title><description><![CDATA[
<p>It's source-available proprietary software that happens to be distributed through NPM.</p>
]]></description><pubDate>Sun, 26 Apr 2026 10:37:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47909148</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47909148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47909148</guid></item><item><title><![CDATA[New comment by gwerbin in "A 3D Body from Eight Questions – No Photo, No GPU"]]></title><description><![CDATA[
<p>That's a common phenomenon in model fitting, depending on the type of model. In both old-school regression and neural networks, the fitted model does not distinguish between specific training examples and other inputs, so specific input-output pairs from the training data don't get special privilege. In fact it's often a <i>good thing</i> that models don't just memorize input-output pairs from training, because that allows them to smooth over uncaptured sources of variation, such as people all being slightly different, as well as measurement error.<p>In this case they had to customize the model fitting to try to get the error closer to zero specifically on those attributes.</p>
]]></description><pubDate>Sat, 25 Apr 2026 12:50:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47901082</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47901082</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47901082</guid></item><item><title><![CDATA[New comment by gwerbin in "Tell HN: Claude 4.7 is ignoring stop hooks"]]></title><description><![CDATA[
<p>This is just goofy prompting.<p>I have good success when I ask the agent to help me debug the harness. "Help me debug why Claude Code is ignoring my hook".</p>
]]></description><pubDate>Fri, 24 Apr 2026 22:55:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47896808</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47896808</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47896808</guid></item><item><title><![CDATA[New comment by gwerbin in "The Classic American Diner"]]></title><description><![CDATA[
<p>That's the point. Burgers are more expensive (relative to "all" other goods) compared to back then.</p>
]]></description><pubDate>Fri, 24 Apr 2026 22:50:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47896781</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47896781</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47896781</guid></item><item><title><![CDATA[New comment by gwerbin in "I cancelled Claude: Token issues, declining quality, and poor support"]]></title><description><![CDATA[
<p>Or just don't use AI to write code. Use it as a code reviewer assistant along with your usual test-lint development cycle. Use it to help evaluate 3rd party libraries faster. Use it to research new topics. Use it to help draft RFCs and design documents. Use it as a chat buddy when working on hard problems.<p>I think the AI companies all stink to high heaven and the whole thing being built on copyright infringement still makes me squirm. But the latest models are stupidly smart in some cases. It's starting to feel like I really do have a sci-fi AI assistant that I can just reach for whenever I need it, either to support hard thinking or to speed up or entirely avoid drudgery and toil.<p>You don't have to buy into the stupid vibecoding hype to get productivity value out of the technology.<p>You of course don't have to use it at all. And you don't owe your money to any particular company. Heck for non-code tasks the local-capable models are great. But you can't just look at vibecoding and dismiss the entire category of technology.</p>
]]></description><pubDate>Fri, 24 Apr 2026 18:33:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47894119</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47894119</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47894119</guid></item><item><title><![CDATA[New comment by gwerbin in "Borrow-checking without type-checking"]]></title><description><![CDATA[
<p>Sorta? Python has fairly strong types, but it's no fun debugging a `None has no attribute foo` error deep inside some library function whose call site is 1000 LoC away from the place where the erroneous None originally arose, due to a typo.<p>It's not just Python, either; I've hit the same issue in Common Lisp.<p>Yes, one can run contracts and unit tests and static analysis, but what's a type checker anyway other than a very strict static analysis tool?</p>
]]></description><pubDate>Thu, 23 Apr 2026 13:38:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47875641</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47875641</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47875641</guid></item><item><title><![CDATA[New comment by gwerbin in "Borrow-checking without type-checking"]]></title><description><![CDATA[
<p>Nim type inference was a joy to use although I haven't touched the language in several years due to the language community seeming to collapse a bit.</p>
]]></description><pubDate>Thu, 23 Apr 2026 13:18:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47875426</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47875426</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47875426</guid></item><item><title><![CDATA[New comment by gwerbin in "Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return"]]></title><description><![CDATA[
<p>What setup are you using? What models, what hardware, what agent harness, etc? I have the vague sense that this is all <i>possible</i> right now, but the amount of tinkering required doesn't seem worth it compared to, like, just not using AI and getting stuff done the old fashioned way.</p>
]]></description><pubDate>Tue, 21 Apr 2026 19:26:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47853360</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47853360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47853360</guid></item><item><title><![CDATA[New comment by gwerbin in "Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return"]]></title><description><![CDATA[
<p>It's less about the company leaving the stock market and more about "Private Equity" often being a legalized embezzlement scam designed to suck the company dry and then dump its withered husk in bankruptcy court.</p>
]]></description><pubDate>Tue, 21 Apr 2026 19:24:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47853327</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47853327</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47853327</guid></item><item><title><![CDATA[New comment by gwerbin in "Meta to start capturing employee mouse movements, keystrokes for AI training"]]></title><description><![CDATA[
<p>That's not a bug, that's a feature</p>
]]></description><pubDate>Tue, 21 Apr 2026 19:22:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47853307</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47853307</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47853307</guid></item><item><title><![CDATA[New comment by gwerbin in "US Bill Mandates On-Device Age Verification"]]></title><description><![CDATA[
<p>And they can offer an age verification product now too.</p>
]]></description><pubDate>Mon, 20 Apr 2026 16:32:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47836688</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47836688</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47836688</guid></item><item><title><![CDATA[New comment by gwerbin in "All 12 moonwalkers had "lunar hay fever" from dust smelling like gunpowder (2018)"]]></title><description><![CDATA[
<p>Which incidentally is the shuttle that brought back LDEF.</p>
]]></description><pubDate>Sat, 18 Apr 2026 12:37:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47815446</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47815446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47815446</guid></item><item><title><![CDATA[New comment by gwerbin in "Isaac Asimov: The Last Question (1956)"]]></title><description><![CDATA[
<p>> And what percentage of a house's price is the building?<p>Anecdotally like half, depending on the area. Plots of land go for $500k in Boston suburbs and new construction homes go for $1m.</p>
]]></description><pubDate>Sat, 18 Apr 2026 01:25:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47812395</link><dc:creator>gwerbin</dc:creator><comments>https://news.ycombinator.com/item?id=47812395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47812395</guid></item></channel></rss>