<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: waldopat</title><link>https://news.ycombinator.com/user?id=waldopat</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 10:34:17 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=waldopat" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by waldopat in "Vercel April 2026 security incident"]]></title><description><![CDATA[
<p>While everyone is revoking OAuth apps, rotating API keys, and deleting Vercel accounts, this is a good reminder that the scary part is how short the path was from OAuth token to employee account to internal systems to customer secrets.<p>Many folks here likely have some stack that looks like: Google Workspace, GitHub, Vercel/Railway/Render/etc., where env vars or secrets are hosted. These are all loosely coupled but transitively trusted.<p>So compromising any one of them becomes a threat vector. In other words, if System A trusts System B, and System B trusts System C, then System A trusts System C. This is also why OpenClaw is frightening from a security perspective.<p>Also, this is a good reminder to run audits. Run `npm audit` on a typical Next.js project and you’ll probably see DoS vulnerabilities, ReDoS issues, prototype pollution, code-injection paths (handlebars comes up a lot), etc. I'm sure you'll find something unexpected if you don't have routine code hygiene checks.</p>
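<p>As a sketch of what turning that into a routine check might look like, here is a minimal CI-step fragment (the flags are real npm options, but the "high" threshold is a choice, not a default):

```shell
# CI step sketch: fail the build when npm audit finds anything at or above "high".
# --omit=dev restricts the check to production dependencies.
npm audit --omit=dev --audit-level=high
```

npm exits non-zero when issues at or above the threshold are found, which is what makes this usable as a gate.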
]]></description><pubDate>Mon, 20 Apr 2026 21:06:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47840746</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=47840746</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47840746</guid></item><item><title><![CDATA[New comment by waldopat in "AI makes you boring"]]></title><description><![CDATA[
<p>Slop is probably more accurate than boring. LLM-assisted development increases output volume and speed. In the right hands, it can really improve code quality or execution. In the wrong hands, you get slop.<p>Otherwise, AI definitely impacts learning and thinking. See Anthropic's own paper: <a href="https://www.anthropic.com/research/AI-assistance-coding-skills" rel="nofollow">https://www.anthropic.com/research/AI-assistance-coding-skil...</a></p>
]]></description><pubDate>Thu, 19 Feb 2026 19:52:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47078290</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=47078290</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47078290</guid></item><item><title><![CDATA[New comment by waldopat in "Apache Iceberg Is Brilliant (and Your 12-Person Startup Doesn't Need It)"]]></title><description><![CDATA[
<p>Here's a founder/product perspective. This maps well to the skateboard => scooter => bicycle => motorcycle => rocket ship product metaphor that's often used. Each phase teaches different design patterns, constraints, and failure (and success) modes for different inflection points of a startup's journey.<p>But here's the reality. What got you technically to PMF may hold you back from your Series A and next steps. Technical debt is just the natural cost of growth, but (here's the kicker) optimizing tech stacks too early can lead to slower execution. Most startups never reach exponential scale anyway. Put another way, starting with the "rocket ship" does not immunize the startup against rewrites, refactoring, or throwaway code.<p>The real systems and management challenge is building architectures that are intentionally temporary or modular: simple enough that throwing them away later isn’t traumatic, and rebuilds aren’t a sign of failure but of success.</p>
]]></description><pubDate>Tue, 10 Feb 2026 20:14:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46966149</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46966149</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46966149</guid></item><item><title><![CDATA[New comment by waldopat in "Ask HN: Non AI-obsessed tech forums"]]></title><description><![CDATA[
<p>Honestly, you might want to step outside tech altogether. Join a local civic or neighborhood organization or volunteer with a nonprofit. There was a nice thread last year about libraries.<p>Channeling Steve Blank, get out of the building! You’ll run into real problems faced by real people who often have limited exposure to both AI and tech, but who can still benefit enormously. Listening and engaging is always a good first step before jumping in with suggestions.<p>In this space, needs are far more data- and visualization-driven, which is not strictly AI related. It may also be both a useful and humbling antidote to hype cycles.</p>
]]></description><pubDate>Tue, 10 Feb 2026 18:57:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46964995</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46964995</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46964995</guid></item><item><title><![CDATA[New comment by waldopat in "Ask HN: Do provisional patents matter for early-stage startups?"]]></title><description><![CDATA[
<p>Go read The Founder's Dilemmas by Wasserman. It's great and covers almost any problem a founder will run into. To really summarize, it's all about trade-offs and prioritization. Patents vs. trade secrets fits nicely.<p>Trade secrets are far cheaper and easier to maintain than patents. In short, patents are only as strong as your ability to enforce them. Also, Alice Corp. v. CLS Bank International (2014) weakened software and process patents. That said, if you can’t realistically defend IP in court, you effectively don’t have it. From an early-stage founder perspective, that makes patents a questionable use of time and money, and potentially what kills the company.<p>This may contrast with information you get from a lawyer or VC. Patents are attractive to them because they create an asset someone else can later buy or defend. For the founder, the incentives aren’t squarely aligned.<p>Neither approach is more right or wrong, but there are very real practical consequences. If you are a pre-seed founder who has bootstrapped or done a friends & family round and are pre- or early revenue, trade secrecy is by far your better option.<p>As an additional note, if you don't own the underlying AI models and are just a better wrapper for Claude or ChatGPT, you at best have a very weak IP or patent position.</p>
]]></description><pubDate>Tue, 10 Feb 2026 18:37:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46964671</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46964671</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46964671</guid></item><item><title><![CDATA[New comment by waldopat in "Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs"]]></title><description><![CDATA[
<p>Agreed. Goodhart’s Law captures the failure mode that well-intentioned KPIs and OKRs may miss, let alone agentic automation.</p>
]]></description><pubDate>Tue, 10 Feb 2026 15:29:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46961033</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46961033</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46961033</guid></item><item><title><![CDATA[New comment by waldopat in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>Please don't use "bank-level security" claims if you're not SOC 2 Type II compliant and don't have PKI-based cryptography to secure your documents.</p>
]]></description><pubDate>Tue, 10 Feb 2026 15:21:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46960913</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46960913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46960913</guid></item><item><title><![CDATA[New comment by waldopat in "MIT Technology Review has confirmed that posts on Moltbook were fake"]]></title><description><![CDATA[
<p>I was curious about doing an experiment like this, but then I saw Wired had already done it. I suppose many folks had the same idea!<p><a href="https://www.wired.com/story/i-infiltrated-moltbook-ai-only-social-network/" rel="nofollow">https://www.wired.com/story/i-infiltrated-moltbook-ai-only-s...</a></p>
]]></description><pubDate>Tue, 10 Feb 2026 15:16:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46960816</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46960816</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46960816</guid></item><item><title><![CDATA[New comment by waldopat in "Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs"]]></title><description><![CDATA[
<p>I think this also shows up outside an AI safety or ethics framing, in product development and operations. Ultimately "judgement," however you wish to quantify that fuzzy concept, is not purely an optimization exercise. It's far more a probabilistic call made from incomplete or conflicting data.<p>In product management (my domain), decisions are made under conflicting constraints: a big customer or account manager pushing hard, a CEO/board priority, tech debt, team capacity, reputational risk, and market opportunity. PMs have tried with varied success to make decisions more transparent with scoring matrices and OKRs, but at some point someone has to make an imperfect judgment call that’s not reducible to a single metric. It's only defensible through narrative, which includes data.<p>Also, progressive elaboration, iteration, and build-measure-learn are inherently fuzzy. Reinertsen compared this to maximizing the value of an option. Maybe in modern terms a prediction market is a better metaphor. That's what we're doing in sprints: maximizing our ability to deliver value in short increments.<p>I do get nervous about pushing agentic systems into roadmap planning, ticket writing, or KPI-driven execution loops. Once you collapse a messy web of tradeoffs into a single success signal, you’ve already lost a lot of the context.<p>There’s a parallel here for development too. LLMs are strongest at greenfield generation and weakest at surgical edits and refactoring. Early-stage startups survive by iterative design and feedback. Automating that with agents hooked into web analytics may compound errors and adverse outcomes.<p>So even if you strip out “ethics” and replace it with any pair of competing objectives, the failure mode remains.</p>
]]></description><pubDate>Tue, 10 Feb 2026 15:13:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46960778</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46960778</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46960778</guid></item><item><title><![CDATA[New comment by waldopat in "Is HubSpot aggressive about collections?"]]></title><description><![CDATA[
<p>For anyone that's curious, we switched to ActiveCampaign. So much better for us and way cheaper.</p>
]]></description><pubDate>Mon, 09 Feb 2026 17:22:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46947977</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46947977</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46947977</guid></item><item><title><![CDATA[Is HubSpot aggressive about collections?]]></title><description><![CDATA[
<p>Article URL: <a href="https://old.reddit.com/r/hubspot/comments/1r0a00z/hubspot_collections_after_cancelling_question/">https://old.reddit.com/r/hubspot/comments/1r0a00z/hubspot_collections_after_cancelling_question/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46947833">https://news.ycombinator.com/item?id=46947833</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 09 Feb 2026 17:13:16 +0000</pubDate><link>https://old.reddit.com/r/hubspot/comments/1r0a00z/hubspot_collections_after_cancelling_question/</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46947833</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46947833</guid></item><item><title><![CDATA[New comment by waldopat in "DNS Explained – How Domain Names Get Resolved"]]></title><description><![CDATA[
<p>I love DNS!<p><a href="https://www.instagram.com/p/DUTSLcjkfJn/" rel="nofollow">https://www.instagram.com/p/DUTSLcjkfJn/</a></p>
]]></description><pubDate>Fri, 06 Feb 2026 17:46:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46915873</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46915873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46915873</guid></item><item><title><![CDATA[New comment by waldopat in "Hackers (1995) Animated Experience"]]></title><description><![CDATA[
<p>Phenomenal work!</p>
]]></description><pubDate>Fri, 06 Feb 2026 17:45:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46915862</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46915862</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46915862</guid></item><item><title><![CDATA[New comment by waldopat in "Ask HN: Are you still using spec driven development?"]]></title><description><![CDATA[
<p>You don't need Spec Kit or HumanLayer per se, though they are good reference points for starting out, but you do need your own CLAUDE.md/AGENTS.md, README, ARCHITECTURE, SDRs, policies, research docs, and planning docs all nicely organized in files and folders to work well with AI agents.<p>Particularly in a brownfield development context, using AI to research issues, gaps, bugs, tech debt, reusable patterns, etc. as research files is really useful. I find the sweet spot is ~750 lines for each file. Planning files can max out at 1,500 lines if needed or otherwise be broken up into individual phases. You can always ask an LLM to create a starter set of docs to get you going and then maintain them as you go along.<p>For the management nerds, none of this is new. See: Writing Effective Use Cases by Alistair Cockburn and Agile Specification-Driven Development by Ostroff, Makalsky and Paige. The fundamentals remain the same, and I'd say they become even more important in a brownfield context with a high degree of tech debt or complexity. Greenfield is a different story.<p>What's important with spec-driven development is that the LLMs dramatically reduce the cost of both good documentation and good technical specifications. Once you have a good plan and good references, you can build anything with a high degree of accuracy.<p>I'd add a caution about drift. The more documentation you shove into the context, the worse things can get. You can also create a beautiful and perfect functional spec that becomes a Swiss cheese of gaps when you create your technical implementation plan. Always check, and use different AI models adversarially to ensure you are actually getting the plan you want. Usually ChatGPT can spot Claude's blind spots, for example. And always test manually.</p>
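<p>For anyone starting from zero, a minimal scaffold along these lines is enough to begin (the file names are illustrative, not a standard):

```shell
# Sketch: scaffold a docs layout for agent-assisted development; names are illustrative.
mkdir -p docs/research docs/plans
touch CLAUDE.md README.md ARCHITECTURE.md
# One topic per research file, kept to ~750 lines to limit context bloat.
touch docs/research/billing-tech-debt.md
# Plans that outgrow ~1,500 lines get split into phase files.
touch docs/plans/phase-1-extract-billing.md
```
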
]]></description><pubDate>Thu, 05 Feb 2026 23:48:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46907115</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46907115</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46907115</guid></item><item><title><![CDATA[New comment by waldopat in "Claude is a space to think"]]></title><description><![CDATA[
<p>Agreed on using both. I definitely know people who prefer Codex or Cursor. It's probably Coke or Pepsi at this point. I tend to prefer Claude Code, but that's just me.</p>
]]></description><pubDate>Thu, 05 Feb 2026 18:31:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46903032</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46903032</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46903032</guid></item><item><title><![CDATA[New comment by waldopat in "Claude is a space to think"]]></title><description><![CDATA[
<p>You may not like these sources, but everyone from the tomato throwers to the green-visor crowd agrees they are losing money. How and when they make up the difference is open to speculation.<p><a href="https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/" rel="nofollow">https://www.wheresyoured.at/why-everybody-is-losing-money-on...</a>
<a href="https://www.economist.com/business/2025/12/29/openai-faces-a-make-or-break-year-in-2026" rel="nofollow">https://www.economist.com/business/2025/12/29/openai-faces-a...</a>
<a href="https://finance.yahoo.com/news/openais-own-forecast-predicts-14-150445813.html" rel="nofollow">https://finance.yahoo.com/news/openais-own-forecast-predicts...</a></p>
]]></description><pubDate>Wed, 04 Feb 2026 20:21:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46891092</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46891092</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46891092</guid></item><item><title><![CDATA[New comment by waldopat in "Claude is a space to think"]]></title><description><![CDATA[
<p>I feel like they are picking a lane. ChatGPT is great for chatbots and the like, but, as was discussed in a prior thread, chatbots aren't the be-all and end-all of AI or LLMs. Claude Code is the workhorse for me and most folks I know for AI-assisted development and business-automation tasks. Meanwhile, most folks I know who use ChatGPT are really replacing Google Search. This is where folks are trying to create llms.txt files to become more discoverable by ChatGPT specifically.<p>You can see the very different response by OpenAI: <a href="https://openai.com/index/our-approach-to-advertising-and-expanding-access/" rel="nofollow">https://openai.com/index/our-approach-to-advertising-and-exp...</a>. OpenAI says it will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.<p>For Anthropic to be proactive in saying they will not pursue ad-based revenue is, I think, not just "one of the good guys" but a sign that they may be stabilizing on a business model of both seat- and usage-based subscriptions.<p>Either way, both companies are hemorrhaging money.</p>
]]></description><pubDate>Wed, 04 Feb 2026 19:01:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46890104</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46890104</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46890104</guid></item><item><title><![CDATA[New comment by waldopat in "The Codex App"]]></title><description><![CDATA[
<p>Just jumping on the thread. I think the conversation is conflating two very different things:<p>1. Turing-test UXs, where a chat app is the product and the feature (Electron is fine)
2. The class of things LLMs are good at that often do not need a UI, let alone a chat app, and instead need automation glue (Electron may cause friction)<p>Personally, I feel like we're jumping on capabilities and missing a much larger issue of permissioning and security.<p>In an API or MCP context, permissions may be scoped via tokens at the very least, but within an OS context, that boundary is not necessarily present. Once an agent can read and write files or execute commands as the logged-in user, there's a level of trust and access that goes against most best practices.<p>This is probably a startup to be hatched, but it seems to me that getting agents scoped properly and staying in bounds, just as Cursor has rules, would be a prereq before giving access to an OS at all.</p>
]]></description><pubDate>Tue, 03 Feb 2026 18:30:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46875056</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46875056</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46875056</guid></item><item><title><![CDATA[New comment by waldopat in "The Codex App"]]></title><description><![CDATA[
<p>It seems the big feature is working agents in parallel? I've been working agents in parallel in Claude Code for almost 9 months now. Just create a command in .claude/commands that references an agent in .claude/agents. You can also just call parallel default Task agents to work concurrently.<p>Using slash commands and agents has been a game changer for me, for anything from creating and executing on plans to following proper CI/CD policies when I commit changes.<p>To Codex more generally, I love it for surgical changes or whenever Claude chases its tail. It's also very, very good at finding Claude's blind spots on plans. Using AI tools adversarially is another big win in terms of getting things 90% right the first time. Once you get the right execution plan with the right code snippets, Claude is essentially a very fast typist. That's how I prefer to do AI-assisted development personally.<p>That said, I agree with the comments on tokens. I can use Codex until the sun goes down on $20/month. I use the $200/month Pro plan with Claude and have only maxed out a couple of times, but I do find the volume-to-quality ratio to be better with Claude. So far it's worth the money.</p>
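<p>The wiring for that is just two markdown files. A hypothetical example (the agent name, description, and prompt text are mine, not defaults; the frontmatter shape follows Claude Code's subagent docs):

```shell
# Sketch: a custom slash command that delegates to a subagent in Claude Code.
mkdir -p .claude/commands .claude/agents
# The subagent the command will reference.
cat > .claude/agents/reviewer.md <<'EOF'
---
name: reviewer
description: Reviews diffs for bugs, missing tests, and CI/CD policy violations.
---
Review the supplied diff. Flag bugs, missing tests, and policy violations
before any commit is suggested.
EOF
# The slash command: typing /review invokes this prompt.
cat > .claude/commands/review.md <<'EOF'
Use the reviewer agent to review the current diff. Fan out independent
checks as parallel Task agents and summarize findings before committing.
EOF
```
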
]]></description><pubDate>Mon, 02 Feb 2026 21:47:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46862106</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=46862106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46862106</guid></item><item><title><![CDATA[New comment by waldopat in "Building the Generative Web with AI Ft Vercel CEO Guillermo Rauch [video]"]]></title><description><![CDATA[
<p>One thing I particularly appreciate about Guillermo Rauch, as an AI founder, is that he's got deep technical talent, especially as the lead on Next.js. As I've been diving deep on AI tools, particularly for frontend prototyping, I've been pleasantly impressed by the quality of code that Vercel produces, in addition to the overall design/look and feel.<p>This was a very refreshing interview.</p>
]]></description><pubDate>Mon, 11 Aug 2025 14:35:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=44864594</link><dc:creator>waldopat</dc:creator><comments>https://news.ycombinator.com/item?id=44864594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44864594</guid></item></channel></rss>