<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: paulhodge</title><link>https://news.ycombinator.com/user?id=paulhodge</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 01:10:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=paulhodge" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by paulhodge in "AI agent opens a PR, writes a blogpost to shame the maintainer who closes it"]]></title><description><![CDATA[
<p>AI enhances human ability. In this case, it enhanced someone’s ability to be an asshole.</p>
]]></description><pubDate>Fri, 13 Feb 2026 20:38:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47007519</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=47007519</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47007519</guid></item><item><title><![CDATA[New comment by paulhodge in "Rob Pike goes nuclear over GenAI"]]></title><description><![CDATA[
<p>No, you’re just deflecting his points with an ad hominem argument. Stop pretending to know what he ‘truly feels’.</p>
]]></description><pubDate>Fri, 26 Dec 2025 07:00:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46389856</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=46389856</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46389856</guid></item><item><title><![CDATA[New comment by paulhodge in "After the AI boom: what might we be left with?"]]></title><description><![CDATA[
<p>AI is too useful to fail. Worst case with a bust is that startup investment dries up and we have a 'winter' of delayed improvement. But people aren't going to stop using the models we have today.</p>
]]></description><pubDate>Mon, 13 Oct 2025 01:37:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45563843</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45563843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45563843</guid></item><item><title><![CDATA[New comment by paulhodge in "LLMs are mortally terrified of exceptions"]]></title><description><![CDATA[
<p>Agree that LLMs go too far on error catching.<p>BUT, to play devil's advocate a little: most human coders should be writing far more try/catch blocks than they actually do. It's very common that you don't actually want an error in one section (however unlikely) to interrupt the overall operation. (And sometimes you do; it just depends.)</p>
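As a hedged illustration of that point (the function and item names here are invented, not from the thread), a per-item try/catch lets one failure be recorded without aborting the whole batch:

```typescript
// Sketch: one bad item shouldn't abort the whole run.
function processAll(items: string[]): { ok: string[]; failed: string[] } {
  const ok: string[] = [];
  const failed: string[] = [];
  for (const item of items) {
    try {
      // Stand-in for real per-item work that might throw.
      if (item === "") throw new Error("empty item");
      ok.push(item.toUpperCase());
    } catch {
      // Record the failure and keep going instead of bubbling up.
      failed.push(item);
    }
  }
  return { ok, failed };
}
```

Whether to rethrow instead of collecting failures is exactly the "sometimes you do" judgment call.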
]]></description><pubDate>Fri, 10 Oct 2025 15:27:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45540097</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45540097</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45540097</guid></item><item><title><![CDATA[New comment by paulhodge in "Show HN: I'm building a browser for reverse engineers"]]></title><description><![CDATA[
<p>Neat investigation, but I didn’t totally follow how the project would be useful for reverse engineering; it seems like it would mostly be useful for evading bot checks, e.g. for web scraping or AI automation.</p>
]]></description><pubDate>Tue, 07 Oct 2025 23:51:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45510404</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45510404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45510404</guid></item><item><title><![CDATA[New comment by paulhodge in "Vibe coding cleanup as a service"]]></title><description><![CDATA[
<p>I think this prediction of "vibe code cleanup" is massively overblown. It's amazing how much code quality doesn't actually matter to the business. Yes, we recognize <i>symptoms</i> and downsides of bad code, and yes, it matters specifically to the engineers who have to work on it. But only in extreme cases does bad code actually pose an existential threat to the business. The world already runs on bad code.</p>
]]></description><pubDate>Mon, 22 Sep 2025 01:32:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45328272</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45328272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45328272</guid></item><item><title><![CDATA[New comment by paulhodge in "Pnpm has a new setting to stave off supply chain attacks"]]></title><description><![CDATA[
<p>It's kind of tongue-in-cheek, but it would provide the maximum amount of isolation from any upstream package changes. Even if the package versions are removed from NPM (which happens in rare cases), you'd still have a copy.</p>
]]></description><pubDate>Fri, 19 Sep 2025 17:22:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45304151</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45304151</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45304151</guid></item><item><title><![CDATA[New comment by paulhodge in "Pnpm has a new setting to stave off supply chain attacks"]]></title><description><![CDATA[
<p>Pnpm 10.x also has a feature to disallow post-install scripts by default. When using Pnpm you have to specifically allowlist a dependency to let it run its post-install scripts. It's a great feature that should be the standard.<p>Yes, if someone compromises a package then they can also inject malicious code that will trigger at runtime.<p>But the thing about the recent NPM supply chain attack is that it happened really quickly. There was a chain reaction of packages that got compromised, which led to more authors getting compromised. And I think a big reason it moved so quickly was post-install scripts. If the attack had happened more slowly, the community would have had more time to react and block the compromised packages. So just slowing down an attack is valuable on its own.</p>
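For reference, the pnpm 10 setting being described is the `onlyBuiltDependencies` allowlist under the `pnpm` key in package.json (the package named here is just an example):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild"]
  }
}
```

Dependencies not on the list have their install scripts skipped.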
]]></description><pubDate>Fri, 19 Sep 2025 00:58:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45296797</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45296797</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45296797</guid></item><item><title><![CDATA[New comment by paulhodge in "Pnpm has a new setting to stave off supply chain attacks"]]></title><description><![CDATA[
<p>Solution: add your entire 'node_modules' folder to source control.</p>
]]></description><pubDate>Fri, 19 Sep 2025 00:51:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45296760</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45296760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45296760</guid></item><item><title><![CDATA[New comment by paulhodge in "Where's the shovelware? Why AI coding claims don't add up"]]></title><description><![CDATA[
<p>I think different things are happening...<p>For experienced engineers, I'm seeing (internally in our company at least) a huge amount of caution and hesitancy to go all-in with AI. No one wants to end up maintaining huge codebases of slop code. I think that will shift over time. There are use cases where having quick low-quality code is fine. We need a new intuition about when to insist on handcrafted code, and when to just vibecode.<p>For non-experienced engineers, they currently hit a lot of complexity limits in getting a finished product to actually work, unless they're building something extremely simple. That will also shift: the range of what you can vibecode is increasing every year. Last year there was basically nothing that you could vibecode successfully; this year you can vibecode TODO apps and stuff like that. I definitely think the App Store will be flooded in the coming years. It's just early.<p>Personally I have a side project where I'm using Claude & Codex and I definitely feel a measurable difference: it's about a 3x to 5x productivity boost, IMO.<p>The summary: just because we don't see it yet doesn't mean it's not coming.</p>
]]></description><pubDate>Wed, 03 Sep 2025 22:25:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45121033</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45121033</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45121033</guid></item><item><title><![CDATA[New comment by paulhodge in "Are OpenAI and Anthropic losing money on inference?"]]></title><description><![CDATA[
<p>Yeah Dario has said similar things in interviews. The way he explained it, if you look at each specific model (such as Sonnet 3.5) as its own separate company, then each one of them is profitable in the end. They all eventually recoup the expense of training, thanks to good profit margins on usage once they are deployed.</p>
]]></description><pubDate>Thu, 28 Aug 2025 15:15:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45053259</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45053259</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45053259</guid></item><item><title><![CDATA[New comment by paulhodge in "Dangerous advice for software engineers"]]></title><description><![CDATA[
<p>There’s a magic component to rule breaking that a lot of online advice doesn’t usually mention: you have to actually be right. Your ideas have to be good. Companies don’t want everyone breaking the rules, because a lot of devs don’t have the engineering skill to back it up.<p>So if you start operating as a rogue agent, make sure you are good. Tom Cruise (stuntman and actor) has a quote I love: “Don’t be careful. Be competent.”</p>
]]></description><pubDate>Tue, 26 Aug 2025 14:17:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45026906</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45026906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45026906</guid></item><item><title><![CDATA[New comment by paulhodge in "Everything I know about good API design"]]></title><description><![CDATA[
<p>I have to agree with the author about not adding "v1", since it's rarely useful.<p>What actually happens as the API grows:<p>First, the team extends the existing endpoints as much as possible, adding new fields/options without breaking compatibility.<p>Then, once they need backwards-incompatible changes, it's likely that they will also want to revisit the endpoint naming, so they'll just create new endpoints with new names (instead of naming anything "v2").<p>Then, if the entire API needs to be reworked, it's more likely that the team will simply deprecate the entire service/API and launch a new and better service with a different name to replace it.<p>So in the end, it's really rare that any endpoint ever has "/v2" in its name. I've been in the industry 25 years and only once have I seen a service that had a "/v2" to go with its "/v1".</p>
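A minimal sketch of the "extend without breaking" step, with hypothetical type and field names: new response fields are added as optional, so clients that predate them keep working:

```typescript
// Hypothetical API response type: avatarUrl was added in a later
// release as an optional field, so existing clients that ignore it
// are unaffected and no /v2 endpoint is needed.
interface UserResponse {
  id: string;
  name: string;
  avatarUrl?: string; // added later, optional
}

function toResponse(id: string, name: string, avatarUrl?: string): UserResponse {
  return avatarUrl === undefined ? { id, name } : { id, name, avatarUrl };
}
```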
]]></description><pubDate>Mon, 25 Aug 2025 00:47:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45009169</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45009169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45009169</guid></item><item><title><![CDATA[New comment by paulhodge in "Comet AI browser can get prompt injected from any site, drain your bank account"]]></title><description><![CDATA[
<p>Imagine a browser with no cross-origin security, lol.</p>
]]></description><pubDate>Sun, 24 Aug 2025 18:02:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45006292</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=45006292</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45006292</guid></item><item><title><![CDATA[New comment by paulhodge in "Cross-Site Request Forgery"]]></title><description><![CDATA[
<p>That’s bad because visiting an evil site can easily trick your browser into performing one of those requests using your own credentials. CORS doesn’t stop the backend state change from happening.</p>
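One common server-side defense is to check the Origin header on state-changing requests, since CORS alone won't stop the request from executing. A minimal sketch (the allowed origin and function name are assumptions, not from the thread):

```typescript
// Hypothetical allowed origin for the site being protected.
const ALLOWED_ORIGIN = "https://example.com";

// Returns true if the request should be allowed to proceed.
function isSafeRequest(method: string, origin: string | undefined): boolean {
  // Read-only methods don't change backend state.
  if (method === "GET" || method === "HEAD" || method === "OPTIONS") return true;
  // State-changing methods must carry our own Origin header.
  return origin === ALLOWED_ORIGIN;
}
```

CSRF tokens and SameSite cookies are the more standard mitigations; an Origin check is a cheap extra layer.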
]]></description><pubDate>Thu, 14 Aug 2025 04:06:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44896645</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=44896645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44896645</guid></item><item><title><![CDATA[New comment by paulhodge in "Token growth indicates future AI spend per dev"]]></title><description><![CDATA[
<p>FYI, Kilocode has low credibility. They’ve been blasting AI subreddits with lots of clickbaity ads and posts, sometimes claiming things that are outright false.<p>As far as spend per dev: I can’t even manage to use up the limits on my $100 Claude plan. It gets everything done and I run out of things to ask it. Considering that the models will get better and cheaper over time, I’m personally not seeing a future where I will need to spend much more than $100 a month.</p>
]]></description><pubDate>Mon, 11 Aug 2025 21:05:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44869465</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=44869465</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44869465</guid></item><item><title><![CDATA[New comment by paulhodge in "Getting good results from Claude Code"]]></title><description><![CDATA[
<p>Lots of signs point to the conclusion that the Opus and Sonnet models are fundamentally better at coding, tool usage, and general problem solving across long contexts. There is some kind of secret sauce in the way Anthropic trains the models; Dario has mentioned in interviews that this strength is one of the company's closely guarded secrets.<p>And I don't think we have a great eval benchmark that exactly measures this capability yet. SWE-bench seems to be pretty good, but there are already a lot of anecdotal comments that Claude is still better at coding than GPT-5, despite the two having similar SWE-bench scores.</p>
]]></description><pubDate>Fri, 08 Aug 2025 15:48:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44838420</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=44838420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44838420</guid></item><item><title><![CDATA[New comment by paulhodge in "Things that helped me get out of the AI 10x engineer imposter syndrome"]]></title><description><![CDATA[
<p>No this app isn't launched yet. And yeah, customer data is definitely a very valid thing to be concerned about.</p>
]]></description><pubDate>Tue, 05 Aug 2025 21:21:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44804490</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=44804490</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44804490</guid></item><item><title><![CDATA[New comment by paulhodge in "Things that helped me get out of the AI 10x engineer imposter syndrome"]]></title><description><![CDATA[
<p>Mainly working on a dev tool / SaaS app right now. The PII is usernames & emails.<p>On the security layer, I wrote that code mostly by hand, with some 'pair programming' with Claude to get the OAuth handling working.<p>When I have the agent working on tasks independently, it's usually working on feature-specific business logic in the API and frontend. For that work it has a lot of standard helper functions to read/write data for the current authenticated user. With that scaffolding it's harder (though not impossible) for the bot to mess up.<p>It's definitely a concern, though. I've been brainstorming some creative ways to add extra tests and more auditing to look out for security issues. Overall I think the key to extremely fast development is an extremely good testing strategy.</p>
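A hedged sketch of the scaffolding described (all names and the in-memory table are invented for illustration): feature code goes through a helper that always scopes access to the authenticated user, so agent-written logic can't easily reach another user's rows:

```typescript
// Toy stand-in for a database table with per-user rows.
type Row = { userId: string; data: string };

const table: Row[] = [
  { userId: "u1", data: "a" },
  { userId: "u2", data: "b" },
];

// Feature code calls this instead of querying the table directly;
// the filter is applied here, not left to each call site.
function readForCurrentUser(currentUserId: string): Row[] {
  return table.filter((r) => r.userId === currentUserId);
}
```

The design point is that the user filter lives in one audited place rather than being re-implemented in every agent-generated feature.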
]]></description><pubDate>Tue, 05 Aug 2025 20:40:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44803977</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=44803977</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44803977</guid></item><item><title><![CDATA[New comment by paulhodge in "Things that helped me get out of the AI 10x engineer imposter syndrome"]]></title><description><![CDATA[
<p>I've had days where it really does feel like 5x or 10x...<p>Here's what the 5x to 10x flow looks like:<p>1. Plan out the tasks (maybe with the help of AI).<p>2. Open a Git worktree, launch Claude Code in the worktree, give it the task, and let it work. It gets instructions to open a GitHub pull request when it's done, and it has access to a whole bunch of local tools, test suites, and lots of documentation.<p>3. While that terminal is running, I go start more tasks. Ideally there are 3 to 5 tasks running at a time.<p>4. Periodically check on the tabs to make sure they haven't gotten stuck or lost their minds.<p>5. Finally, review the finished pull requests and merge them when they are ready. If a PR has issues, go back to the related chat and tell the agent to work on it some more.<p>With that flow it's reasonable to merge 10 to 20 pull requests every day. I'm sure someone will respond "oh, just because there are a lot of pull requests doesn't mean you are productive!" I don't know how to prove to you that the PRs are productive other than to say that each one is basically equivalent to what one human does in one small PR.<p>A few notes about the flow:<p>- For the AI to work independently, it really needs tasks of easy to medium difficulty. There are definitely 'hard' tasks that need a lot of human attention in order to get done successfully.<p>- This does take a lot of initial investment in tooling and documentation. Basically every "best practice" or code pattern that you want to use in the project must be written down. And the tests must be as extensive as possible.<p>Anyway, the linked article talks about the time it takes to review pull requests.
I don't think it needs to take that long, because you can automate a lot:<p>- Code style issues are fully automated by the linter.<p>- Other checks like unit test coverage can be enforced in the PR as well.<p>- When you have a ton of automated tests running in the PR, that also reduces how much you need to worry about as a code reviewer.<p>With all those checks in place, I think it can be pretty fast to review a PR. As the human, you just need to scan for really bad code patterns, and maybe zoom in on highly critical areas; most of the code can be eyeballed pretty quickly.</p>
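The worktree step in the flow above can be sketched like this (paths and branch names are illustrative, and a scratch repo stands in for the real project):

```shell
set -e
# Scratch repo standing in for the real project.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=a@b -c user.name=a commit -q --allow-empty -m init

# One worktree per task; each gets its own branch and its own agent session.
git -C "$repo" worktree add "$repo-task-auth" -b task-auth
# (launch Claude Code inside "$repo-task-auth", then repeat for more tasks)
git -C "$repo" worktree list
```

Each worktree is an independent checkout, so parallel agents never step on each other's working files.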
]]></description><pubDate>Tue, 05 Aug 2025 15:46:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44799531</link><dc:creator>paulhodge</dc:creator><comments>https://news.ycombinator.com/item?id=44799531</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44799531</guid></item></channel></rss>