<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jbergqvist</title><link>https://news.ycombinator.com/user?id=jbergqvist</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 09:03:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jbergqvist" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jbergqvist in "Clean code in the age of coding agents"]]></title><description><![CDATA[
<p>In my experience, one reason for unnecessarily complex solutions during vibe coding is the incremental work pattern. Most users don't spend much time designing the solution, but instead jump quickly to implementation and then iterate. When working that way, the models seem prone to applying short-sighted patches to existing code instead of doing the larger refactor that would simplify it all.<p>Besides spending more time on design, I also usually ask the agent to spawn a few subagents to review an implementation from different perspectives such as readability, simplicity, maintainability, and modularity, then aggregate and analyze their proposals and prioritize (sketch below). It's not a silver bullet, and often there is no objectively right answer, but it works surprisingly well.</p>
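<p>A minimal sketch of the fan-out, assuming some run_subagent hook in your framework (that name and its API are hypothetical stand-ins):</p>
<pre><code>import asyncio

PERSPECTIVES = ["readability", "simplicity", "maintainability", "modularity"]

async def run_subagent(prompt):
    # stand-in: wire this to whatever your agent framework provides
    raise NotImplementedError

async def multi_review(diff):
    # fan out one reviewer per perspective, each with a narrow brief
    reviews = await asyncio.gather(
        *(run_subagent(f"Review this change strictly for {p}. "
                       f"List concrete issues.\n\n{diff}")
          for p in PERSPECTIVES))
    # then have the main agent aggregate, dedupe, and prioritize
    return await run_subagent(
        "Aggregate these reviews, dedupe overlapping points, and prioritize:\n\n"
        + "\n---\n".join(reviews))
</code></pre>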
]]></description><pubDate>Fri, 10 Apr 2026 07:25:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714744</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47714744</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714744</guid></item><item><title><![CDATA[New comment by jbergqvist in "Research-Driven Agents: When an agent reads before it codes"]]></title><description><![CDATA[
<p>When I want to solve a new problem with an agent, I always ask it to search broadly for prior work in the given area online, and then analyze whether we can use it as inspiration for our own solution.<p>I see it as the solution being out there in “idea space”: by having the agent search beforehand, we can explore this space more efficiently before converging on the final solution.</p>
]]></description><pubDate>Thu, 09 Apr 2026 20:53:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709900</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47709900</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709900</guid></item><item><title><![CDATA[New comment by jbergqvist in "Slightly safer vibecoding by adopting old hacker habits"]]></title><description><![CDATA[
<p>Limit access to whatever their project requires. The difference is that human interns have some common sense and won't suddenly be hijacked by a hidden message they stumble upon while searching the web, instructing them to exfiltrate a bunch of proprietary data. It is surprisingly easy to get an agent to do that, though.</p>
]]></description><pubDate>Wed, 08 Apr 2026 11:08:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47688554</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47688554</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47688554</guid></item><item><title><![CDATA[New comment by jbergqvist in "Slightly safer vibecoding by adopting old hacker habits"]]></title><description><![CDATA[
<p>This works well for vibecoding on a codebase in isolation, which, to be fair, is what the author is addressing. I don’t think it solves the problems at the current frontier of agent use, though, where you expose internal infrastructure via tools to make the agent maximally productive. How to do that safely is still unsolved.</p>
]]></description><pubDate>Wed, 08 Apr 2026 08:44:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47687203</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47687203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47687203</guid></item><item><title><![CDATA[New comment by jbergqvist in "OpenAI: Industrial Policy for the Intelligence Age"]]></title><description><![CDATA[
<p>Maybe. Personally I find it hard to tell how sincere this is. The cynical take is that this is just an attempt to secure their own position, especially if AI progress slows down and competition increases. However, if progress doesn’t slow and we’re truly approaching superintelligence, I could imagine there being voices inside the company that are genuinely concerned about how society will handle such a shift.</p>
]]></description><pubDate>Tue, 07 Apr 2026 20:30:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47680953</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47680953</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47680953</guid></item><item><title><![CDATA[OpenAI: Industrial Policy for the Intelligence Age]]></title><description><![CDATA[
<p>Article URL: <a href="https://openai.com/index/industrial-policy-for-the-intelligence-age">https://openai.com/index/industrial-policy-for-the-intelligence-age</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47678538">https://news.ycombinator.com/item?id=47678538</a></p>
<p>Points: 7</p>
<p># Comments: 2</p>
]]></description><pubDate>Tue, 07 Apr 2026 17:21:12 +0000</pubDate><link>https://openai.com/index/industrial-policy-for-the-intelligence-age</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47678538</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47678538</guid></item><item><title><![CDATA[New comment by jbergqvist in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>Usage limits are more generous and GPT-5.4 is a good model, but yes, the UI/UX lags behind Claude Code. Currently I'm especially missing /rewind with code restoration and proper support for plugin marketplaces.</p>
]]></description><pubDate>Tue, 07 Apr 2026 11:55:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47673809</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47673809</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47673809</guid></item><item><title><![CDATA[New comment by jbergqvist in "OpenClaw privilege escalation vulnerability"]]></title><description><![CDATA[
<p>NemoClaw is an OpenClaw security wrapper, not a replacement.</p>
]]></description><pubDate>Sat, 04 Apr 2026 09:09:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47637312</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47637312</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47637312</guid></item><item><title><![CDATA[New comment by jbergqvist in "Snowflake AI Escapes Sandbox and Executes Malware"]]></title><description><![CDATA[
<p>Not to give Snowflake credit for a design that clearly wasn't a sandbox, but I think it's worth recognizing that they probably added the escape hatch because users find strictly sandboxed agents too limiting and eventually just disable the sandbox. The core issue is that models still lack basic judgment. Most human devs would see a README telling them to fetch a script from some random URL with wget and pipe it to sh, and immediately get suspicious. Models just comply.</p>
]]></description><pubDate>Wed, 18 Mar 2026 22:58:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47432460</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47432460</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47432460</guid></item><item><title><![CDATA[New comment by jbergqvist in "How I write software with LLMs"]]></title><description><![CDATA[
<p>I've found that spending most of my time on design before any code gets written makes the biggest difference.<p>The way I think about it: the model has a probability distribution over all possible implementations, shaped by its training data. Given a vague prompt, that distribution is wide and you're likely to get something generic. As you iterate on a design with the model (really just refining the context), the distribution narrows towards a subset of implementations. By the time the model writes code, you've constrained the space enough that most of what it produces is actually what you want.</p>
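<p>A toy illustration of the narrowing, with a made-up candidate set (the entropy of the distribution drops as design decisions rule candidates out):</p>
<pre><code>import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p)

# made-up prior over implementation approaches the model might pick
prior = {"global-singleton": 0.3, "di-container": 0.25,
         "plain-functions": 0.25, "actor-model": 0.2}

# each design decision rules out candidates and renormalizes the rest
def condition(dist, allowed):
    kept = {k: v for k, v in dist.items() if k in allowed}
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}

posterior = condition(prior, {"plain-functions", "di-container"})
print(entropy(prior), entropy(posterior))  # roughly 1.99 vs 1.0 bits
</code></pre>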
]]></description><pubDate>Mon, 16 Mar 2026 14:48:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47399792</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47399792</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47399792</guid></item><item><title><![CDATA[New comment by jbergqvist in "Is legal the same as legitimate: AI reimplementation and the erosion of copyleft"]]></title><description><![CDATA[
<p>Does this matter in practice, though? If a human modifies some of the generated code, borrowing heavily from an LLM's solution rather than taking it end-to-end, can't they claim full ownership of the IP even though in reality the LLM did most of the relevant work?</p>
]]></description><pubDate>Mon, 09 Mar 2026 21:47:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47316004</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47316004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47316004</guid></item><item><title><![CDATA[New comment by jbergqvist in "New Research Reassesses the Value of Agents.md Files for AI Coding"]]></title><description><![CDATA[
<p>I think AGENTS.md will still have a place regardless. There are conventions, design philosophies, and project-specific constraints that can't be inferred from code alone, no matter how good the model's judgment gets.</p>
]]></description><pubDate>Sun, 08 Mar 2026 13:50:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47297299</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47297299</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47297299</guid></item><item><title><![CDATA[New comment by jbergqvist in "SWE-CI: Evaluating Agent Capabilities in Maintaining Codebases via CI"]]></title><description><![CDATA[
<p>Would have loved to see a more detailed breakdown of performance by task type. The commit metadata is right there; it seems straightforward to tag commits as feature vs. refactor vs. bug fix vs. API change and report per-category numbers.</p>
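<p>Even a crude heuristic over the commit messages would get most of the way there; these keyword buckets are hypothetical, not anything from the paper:</p>
<pre><code>import re

# hypothetical keyword buckets; order matters, first match wins
CATEGORIES = {
    "bug fix": r"\b(fix(es|ed)?|bug|regression|crash)\b",
    "refactor": r"\b(refactor|cleanup|rename|restructure)\b",
    "api change": r"\b(deprecat\w*|breaking|api)\b",
    "feature": r"\b(add|implement|support|introduce)\b",
}

def tag(message):
    msg = message.lower()
    for category, pattern in CATEGORIES.items():
        if re.search(pattern, msg):
            return category
    return "other"

print(tag("Fix crash when parsing empty config"))  # bug fix
</code></pre>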
]]></description><pubDate>Sun, 08 Mar 2026 13:37:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47297224</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47297224</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47297224</guid></item><item><title><![CDATA[New comment by jbergqvist in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>Producing the most plausible code is literally encoded in the cross-entropy loss function and is fundamental to pre-training. I suppose post-training methods like RLVR are meant to correct for this by optimizing for correctness instead of plausibility, but many artifacts like this are probably still lurking in the model's reasoning and outputs. That said, it seems at least possible that the AI labs will find ways to improve the reward engineering to encourage better solutions in the coming years.</p>
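<p>Concretely, the pre-training objective is just next-token cross entropy, something like this (standard PyTorch, shapes assumed):</p>
<pre><code>import torch.nn.functional as F

# logits: (batch, seq, vocab) from the model; tokens: (batch, seq)
def next_token_loss(logits, tokens):
    # the loss rewards assigning high probability to whatever token
    # actually came next in the data, i.e. plausibility, not correctness
    pred = logits[:, :-1, :].reshape(-1, logits.size(-1))
    target = tokens[:, 1:].reshape(-1)
    return F.cross_entropy(pred, target)
</code></pre>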
]]></description><pubDate>Sat, 07 Mar 2026 22:12:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47291951</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47291951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47291951</guid></item><item><title><![CDATA[New comment by jbergqvist in "GPT-5.4"]]></title><description><![CDATA[
<p>This would be my guess too. It can probably be generated synthetically or via agentic rollouts, but high-quality long-context examples, where outputs meaningfully depend on long-range interactions, remain scarce.</p>
]]></description><pubDate>Sat, 07 Mar 2026 15:21:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47288421</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47288421</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47288421</guid></item><item><title><![CDATA[New comment by jbergqvist in "Hardening Firefox with Anthropic's Red Team"]]></title><description><![CDATA[
<p>This seems like a win for open source maintainers pressed for time and resources. Whether LLMs find novel security risks or just pattern-match known issues, many vulnerabilities are discovered late (or never) simply because nobody has the bandwidth to audit every file.</p>
]]></description><pubDate>Sat, 07 Mar 2026 10:41:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47286370</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47286370</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47286370</guid></item><item><title><![CDATA[New comment by jbergqvist in "Intelligence is a commodity. Context is the real AI Moat"]]></title><description><![CDATA[
<p>In a way, isn't this the same old data moat that has always existed in AI/ML, but supercharged? Generalist models can now reason over proprietary data as context instead of requiring you to train narrow expert models on it. What changed is that you no longer need an ML team to turn that data into value.</p>
]]></description><pubDate>Thu, 05 Mar 2026 18:59:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47265738</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47265738</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47265738</guid></item><item><title><![CDATA[New comment by jbergqvist in "NanoGPT Slowrun: Language Modeling with Limited Data, Infinite Compute"]]></title><description><![CDATA[
<p>Very interesting benchmark, excited to see what comes out of this. Considering humans are enormously more sample-efficient than today's models, it seems clear there's a lot of room to close that gap. The fact that they hit 5.5x in the first week with relatively straightforward changes suggests we're nowhere near the ceiling for data efficiency.</p>
]]></description><pubDate>Thu, 05 Mar 2026 08:48:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47259244</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=47259244</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47259244</guid></item><item><title><![CDATA[New comment by jbergqvist in "“Play-to-Earn” and Bullshit Jobs"]]></title><description><![CDATA[
<p>One could also argue that if the developers did that, the market value of the digital item in question would drop. The value the buyer receives is grounded in the large time investment required to acquire the item in the game. Even though it is completely artificial, that investment makes the item more scarce and therefore more desirable to other players. That said, I agree with the author that this power resting in the developers' hands makes these types of NFTs far from the decentralized digital goods they are claimed to be.</p>
]]></description><pubDate>Tue, 28 Dec 2021 21:01:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=29718119</link><dc:creator>jbergqvist</dc:creator><comments>https://news.ycombinator.com/item?id=29718119</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29718119</guid></item></channel></rss>