<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: toniantunovi</title><link>https://news.ycombinator.com/user?id=toniantunovi</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 05 Apr 2026 23:55:13 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=toniantunovi" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by toniantunovi in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p>This is a useful forcing function for distinguishing two architectural categories of Claude Code-adjacent tooling. Tools that route your API traffic through a third-party harness are definitionally dependent on Anthropic's policy toward that harness. Tools that run locally and integrate via MCP, without touching the API subscription path at all, sit outside this restriction entirely because they are just another tool in your local environment. The local-first architecture was always the right one for teams with compliance or privacy requirements. This week is a good illustration of why it is also the right one for teams that simply want to avoid depending on a vendor-intermediary relationship they cannot control.</p>
]]></description><pubDate>Sun, 05 Apr 2026 17:00:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47651375</link><dc:creator>toniantunovi</dc:creator><comments>https://news.ycombinator.com/item?id=47651375</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47651375</guid></item><item><title><![CDATA[New comment by toniantunovi in "The threat is comfortable drift toward not understanding what you're doing"]]></title><description><![CDATA[
<p>The coding-specific version of this is worth naming precisely. The drift does not happen because you stop writing code. It happens because you stop reading the output carefully. With AI-generated code there is a particular failure mode: the code is plausible enough to survive a quick review, the tests pass, so you ship it. The understanding degradation is cumulative and invisible until it is not. The partial fix is making automated checks independent of the developer's attention level: type checking, SAST, dependency analysis, and coverage gates that run regardless of how carefully you reviewed the diff. These are not a substitute for understanding, but they create a floor below which "comfortable drift" cannot silently carry you. The question worth asking of any AI coding workflow is whether that floor exists and where it is.</p>
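<p>As a toy illustration of what "a floor" means mechanically (the check names, threshold, and function are invented for this comment, not any particular tool's implementation): the gate aggregates automated results and fails whenever any check fails or coverage drops below a threshold, with zero dependence on reviewer attention.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool

def quality_floor(results: list[CheckResult], coverage: float,
                  min_coverage: float = 0.8) -> tuple[bool, list[str]]:
    # The gate passes only if every automated check passed AND coverage
    # meets the threshold - independent of how carefully anyone read the diff.
    failures = [r.name for r in results if not r.passed]
    if coverage < min_coverage:
        failures.append(f"coverage {coverage:.0%} < {min_coverage:.0%}")
    return (not failures, failures)

# A failed SAST check blocks the merge even though the tests "pass".
ok, why = quality_floor([CheckResult("type-check", True),
                         CheckResult("sast", False)], coverage=0.9)
```

<p>The interesting property is the return value's indifference to mood: the same diff gets the same verdict on a Friday afternoon as on a Monday morning.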
]]></description><pubDate>Sun, 05 Apr 2026 17:00:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47651373</link><dc:creator>toniantunovi</dc:creator><comments>https://news.ycombinator.com/item?id=47651373</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47651373</guid></item><item><title><![CDATA[New comment by toniantunovi in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>Very nice contribution to OSS</p>
]]></description><pubDate>Wed, 01 Apr 2026 11:04:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47599262</link><dc:creator>toniantunovi</dc:creator><comments>https://news.ycombinator.com/item?id=47599262</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47599262</guid></item><item><title><![CDATA[New comment by toniantunovi in "Shipping a Week's Work in a Day using parallel Claude agents"]]></title><description><![CDATA[
<p>The CLAUDE.md approach for enforcing standards is solid. One thing we ran into with parallel agent workflows: the quality gates are only as good as what you catch before merge, and across multiple worktrees it's easy for one branch to quietly introduce a dependency issue or a secrets-in-code pattern while you're reviewing another. We built LucidShark partly for this — it runs linting, SAST, dependency checks, and type checking locally before anything hits CI, so the per-branch review overhead stays manageable. Works well as the "pre-merge step" in a worktree-heavy workflow.</p>
]]></description><pubDate>Sun, 29 Mar 2026 18:06:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47565543</link><dc:creator>toniantunovi</dc:creator><comments>https://news.ycombinator.com/item?id=47565543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47565543</guid></item><item><title><![CDATA[New comment by toniantunovi in "Ask HN: How do you handle PR density (and slop) in open source"]]></title><description><![CDATA[
<p>The cURL situation is a canary. The real fix isn't gatekeeping humans out, it's making quality enforcement automatic before a PR is ever opened. I built LucidShark specifically for this: it's a local CLI quality gate that runs SAST, SCA, linting, type checks, coverage, and duplication analysis in one shot on AI-generated code.</p>
]]></description><pubDate>Tue, 24 Mar 2026 21:21:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47509524</link><dc:creator>toniantunovi</dc:creator><comments>https://news.ycombinator.com/item?id=47509524</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47509524</guid></item><item><title><![CDATA[New comment by toniantunovi in "The Myth of Never Giving Up"]]></title><description><![CDATA[
<p>You've got to know when to hold 'em, know when to fold 'em.</p>
]]></description><pubDate>Wed, 18 Mar 2026 19:41:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47430467</link><dc:creator>toniantunovi</dc:creator><comments>https://news.ycombinator.com/item?id=47430467</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47430467</guid></item><item><title><![CDATA[New comment by toniantunovi in "AI Quality Paradox: How Code Complexity Drives Rework in AI-Assisted Development"]]></title><description><![CDATA[
<p>The saddle-node bifurcation framing is genuinely useful for making the case to engineering leadership. "Your QA capacity is a control variable, and if it doesn't scale with generation volume, you're past the tipping point" is a much more compelling argument than "AI code is sometimes bad."<p>The practical implication that stands out to me: the solution isn't to slow down AI generation - it's to automate QA interception so it scales at the same rate. That's why there's been a wave of tools focused on automated checks that run immediately post-generation (linting, SAST, SCA) rather than relying on human review to absorb the volume increase.<p>We've been building in this space with LucidShark (lucidshark.com) - the core hypothesis is exactly what your model suggests: the constraint isn't the generation side, it's the validation side, and automation is the only way to keep validation capacity proportional to throughput. Would love to see your model applied to teams that add an automated gate - does it change the bifurcation threshold significantly?</p>
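<p>To make the "validation capacity is a control variable" point concrete, here is a deliberately crude sketch (a linear queue with invented rates, far simpler than your saddle-node model): the unreviewed backlog stays bounded while QA capacity exceeds generation throughput, and grows without bound the moment it doesn't.

```python
def review_backlog(gen_rate: float, qa_rate: float,
                   steps: int = 50) -> list[float]:
    # Each time step adds gen_rate units of unreviewed code; the QA
    # process absorbs at most qa_rate units. Track the leftover backlog.
    backlog, history = 0.0, []
    for _ in range(steps):
        backlog = max(0.0, backlog + gen_rate - qa_rate)
        history.append(backlog)
    return history

stable = review_backlog(gen_rate=5, qa_rate=6)   # capacity exceeds throughput
runaway = review_backlog(gen_rate=8, qa_rate=6)  # past the tipping point
```

<p>In this toy version the threshold is trivially gen_rate = qa_rate; the value of your model is showing the transition is sharper and less forgiving than a linear queue suggests.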
]]></description><pubDate>Sat, 14 Mar 2026 12:18:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47375920</link><dc:creator>toniantunovi</dc:creator><comments>https://news.ycombinator.com/item?id=47375920</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47375920</guid></item><item><title><![CDATA[New comment by toniantunovi in "The Dopamine Trap of Vibe Coding"]]></title><description><![CDATA[
<p>The dopamine loop framing is spot-on. The "it compiles and the tests pass" feedback cycle is genuinely intoxicating, and it's very different from the slower, more uncertain feeling of writing careful code yourself.<p>What's interesting is that the same AI tools that create the loop can partially break it if you add a mandatory quality gate between "agent generates code" and "you merge it." The friction of seeing a linting/security report before you hit merge forces a moment of actual review that the vibe coding flow otherwise eliminates.</p>
]]></description><pubDate>Sat, 14 Mar 2026 12:15:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47375890</link><dc:creator>toniantunovi</dc:creator><comments>https://news.ycombinator.com/item?id=47375890</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47375890</guid></item><item><title><![CDATA[New comment by toniantunovi in "Launch HN: Sentrial (YC W26) – Catch AI agent failures before your users do"]]></title><description><![CDATA[
<p>Congrats on the launch! The production monitoring angle is genuinely underserved. Most teams only realize AI agent failures exist once users are complaining.<p>The most common failure mode we see: AI agents write code that passes all existing tests and looks fine in review, but has subtle IDOR issues, hardcoded secrets, or hallucinated package imports with vulnerable versions. Those don't surface at runtime until conditions are just right.</p>
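<p>Of those three, the hallucinated-import case is the easiest to catch mechanically. A rough sketch (the function name is mine, and a real checker would also cross-reference versions against a vulnerability database): parse the generated file and flag any top-level import that doesn't resolve in the current environment.

```python
import ast
import importlib.util

def unresolvable_imports(source: str) -> set[str]:
    # Collect top-level module names from import statements, then flag
    # any that don't resolve in the current environment - a cheap signal
    # for hallucinated packages.
    modules: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            modules.add(node.module.split(".")[0])
    return {m for m in modules if importlib.util.find_spec(m) is None}
```

<p>It says nothing about IDOR or secrets, but it runs in milliseconds and closes off the attack surface where someone squats the hallucinated name on PyPI.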
]]></description><pubDate>Sat, 14 Mar 2026 12:12:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47375866</link><dc:creator>toniantunovi</dc:creator><comments>https://news.ycombinator.com/item?id=47375866</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47375866</guid></item><item><title><![CDATA[New comment by toniantunovi in "Show HN: The Mog Programming Language"]]></title><description><![CDATA[
<p>This is a fascinating approach, solving the problem at the language level. The capability-based permission model is elegant.<p>A complementary angle we've been exploring with LucidShark (lucidshark.com) is attacking the same problem from the workflow layer rather than the language layer: instead of constraining what the LLM can write, you run SAST, SCA, and linting automatically after every generation step, before anything touches CI or production.<p>The nice thing about that approach is that it works with existing languages today (Python, TypeScript, Go, etc.) and plugs directly into Claude Code or Cursor as a pre-commit gate. The downside is that it catches issues after generation rather than preventing them structurally, as Mog aims to.<p>I suspect the long-term solution is both layers: safer languages for greenfield AI-native projects, plus robust static analysis for the 99% of existing codebases where you can't change the language.</p>
]]></description><pubDate>Sat, 14 Mar 2026 12:09:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47375847</link><dc:creator>toniantunovi</dc:creator><comments>https://news.ycombinator.com/item?id=47375847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47375847</guid></item><item><title><![CDATA[New comment by toniantunovi in "Code Review for Claude Code"]]></title><description><![CDATA[
<p>When a tool flags 8 issues on clean code and 8 issues on broken code, it's not a reviewer, it's a random number generator with a UI. The approach we've found more tractable is to separate concerns: let deterministic tools (linters, SAST, SCA) handle what they're definitively good at (style, known vuln patterns, dependency CVEs, secrets) and reserve the AI layer for things humans actually need help reasoning about. Running this locally as a pre-push or CI step means you catch the boring 80% before it ever reaches a $25 AI review. You're not paying Claude to tell you your import is unused; you're paying it to reason about whether your auth flow has a TOCTOU issue. That's a very different and much more valuable question.</p>
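<p>A minimal sketch of that separation (the function names are stubs I made up, not any product's API): run the deterministic tier first and only pay for the model when it comes back clean.

```python
from typing import Callable

Finding = str
Check = Callable[[str], list[Finding]]

def tiered_review(diff: str, deterministic_checks: list[Check],
                  ai_review: Check) -> list[Finding]:
    # Tier 1: cheap, deterministic tools. If anything fires, stop here -
    # no point paying a model to restate a linter finding.
    findings = [msg for check in deterministic_checks for msg in check(diff)]
    if findings:
        return findings
    # Tier 2: the expensive reviewer, reserved for reasoning-level questions
    # (auth flows, race conditions) that deterministic tools can't answer.
    return ai_review(diff)
```

<p>Short-circuiting also keeps the signal clean: when the AI reviewer does run, everything it reports is in the category only it can find.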
]]></description><pubDate>Fri, 13 Mar 2026 22:28:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47370832</link><dc:creator>toniantunovi</dc:creator><comments>https://news.ycombinator.com/item?id=47370832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47370832</guid></item></channel></rss>