<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: iepathos</title><link>https://news.ycombinator.com/user?id=iepathos</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 21:35:17 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=iepathos" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by iepathos in "Ask HN: Who wants to be hired? (April 2026)"]]></title><description><![CDATA[
<p>Location: Mountain View, CA<p>Remote: Yes<p>Willing to relocate: No<p>Technologies: Python, Rust, TypeScript, Go, Infrastructure, DevSecOps<p>Résumé/CV: <a href="https://www.linkedin.com/in/glenbbaker" rel="nofollow">https://www.linkedin.com/in/glenbbaker</a><p>Email: iepathos@gmail.com<p>Hi, I'm a Staff Engineer with 14+ years of experience. I spent the last 8.5 years building an early-stage startup and following it through acquisition. I supported the post-acquisition transition by training and mentoring offshore engineers and helping maintain continuity as the company shifted from startup execution to standardized enterprise delivery.<p>My background spans backend engineering, infrastructure, and security, with hands-on work in Python, Rust, and TypeScript. I maintain open source projects and contribute when I can to core Rust projects such as Clap and Cargo. I’m a good fit for teams that need pragmatic execution and someone comfortable owning hard problems across systems, platform, and DevSecOps.</p>
]]></description><pubDate>Wed, 01 Apr 2026 17:05:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47603566</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=47603566</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47603566</guid></item><item><title><![CDATA[New comment by iepathos in "Malus – Clean Room as a Service"]]></title><description><![CDATA[
<p>This is essentially 'License Laundering as a Service.' The 'Firewall' they describe is an illusion because the contamination happens at the training phase, not the inference phase. You can't claim independent creation when your 'independent developer' (the commercial LLM) already has the original implementation's patterns and edge cases baked into its weights.<p>To really do this, they would need to train LLMs from scratch with no exposure whatsoever to the open source code they might be asked to reproduce. Those models, in turn, would be terrible at coding, given how much of the training corpus is open source code.</p>
]]></description><pubDate>Thu, 12 Mar 2026 17:10:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47354045</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=47354045</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47354045</guid></item><item><title><![CDATA[New comment by iepathos in "Ars Technica fires reporter after AI controversy involving fabricated quotes"]]></title><description><![CDATA[
<p>If the Ars Technica editorial process requires assuming reporters don't fabricate quotes, then their process is inadequate. That's like a software company letting junior engineers release directly to production with just a spellcheck and no real process to catch errors. Major publications like The New Yorker, The Atlantic, etc. have a dedicated fact-checking department that is part of the process and needs to give the ok before any article is published. Why is their process so deficient by comparison? Why wasn't there any fact checking?</p>
]]></description><pubDate>Tue, 03 Mar 2026 15:19:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47233721</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=47233721</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47233721</guid></item><item><title><![CDATA[New comment by iepathos in "The United States and Israel have launched a major attack on Iran"]]></title><description><![CDATA[
<p>The idea that China hasn't 'attacked anyone' in 40 years is factually incorrect. In 1988, they engaged in a deadly naval skirmish with Vietnam over the Johnson South Reef. More recently, the PLA engaged in fatal border clashes with India in the Galwan Valley (2020). On top of direct skirmishes, they have engaged in constant gray-zone aggression: violently ramming Philippine and Vietnamese vessels in the South China Sea, firing water cannons at supply ships, and surrounding Taiwan with live-fire military blockades. That doesn't even touch on the internal human rights abuses against the Uyghurs in Xinjiang. Multiple international bodies and governments have recognized what they are doing to Uyghurs since 2014 as genocide. Finally, it's hard to ignore their devastating handling of COVID-19. The active suppression of information, punishment of early whistleblowers, and refusal to cooperate with international investigations resulted in unprecedented worldwide damage, amounting to an act of gross global endangerment.</p>
]]></description><pubDate>Sat, 28 Feb 2026 14:53:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47196016</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=47196016</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47196016</guid></item><item><title><![CDATA[New comment by iepathos in "Addressing Antigravity Bans and Reinstating Access"]]></title><description><![CDATA[
<p>A refreshing response from Google, especially given the incompetence with which Anthropic has handled bans.</p>
]]></description><pubDate>Sat, 28 Feb 2026 14:22:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47195726</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=47195726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47195726</guid></item><item><title><![CDATA[New comment by iepathos in "The Pentagon threatens Anthropic"]]></title><description><![CDATA[
<p>The old path of 'military invents it, civilians eventually get it' (like the Space Race or early ARPANET) hasn't been true for decades. Today, almost all major technological leaps like the modern internet, search engines, smartphones, commercial drones, etc. start in the commercial consumer sector first. The global consumer market dwarfs the defense market, which means the private sector has vastly more capital for R&D. Government pay scales cap out at ~$190k-$200k/year for specialized roles without some congressional workaround. The top AI researchers at OpenAI, Anthropic, Google, etc. make ~$1m-$5m+/year in total compensation. The government couldn't afford to hire the right talent, and the right talent would likely refuse on moral, ethical, and rational grounds given the current government.</p>
]]></description><pubDate>Wed, 25 Feb 2026 20:13:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47157176</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=47157176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47157176</guid></item><item><title><![CDATA[New comment by iepathos in "Minions: Stripe’s one-shot, end-to-end coding agents"]]></title><description><![CDATA[
<p>"1000 PRs/week" with no breakdown of complexity or value is a vanity metric. If these are mostly migrations, boilerplate, and bug fixes on previous Minion PRs that were bug-ridden, then you've just created 1000 code reviews/week to waste human time rubber-stamping. That's not productivity; that's busywork with extra steps.<p>It's like measuring productivity by how many people you pull into meetings each week. The CIA's Simple Sabotage Field Manual literally recommends holding as many meetings as possible with as many people as possible. The CIA should add "open as many PRs with AI as possible" to the list. Bonus sabotage points if the PRs are made from ambiguous "one-shot" attempts described in Slack with no follow-up clarification.</p>
]]></description><pubDate>Sun, 22 Feb 2026 14:52:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47111453</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=47111453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47111453</guid></item><item><title><![CDATA[New comment by iepathos in "Discord/Twitch/Snapchat age verification bypass"]]></title><description><![CDATA[
<p>The hole is closed with per-site pseudonyms. Your wallet generates a unique cryptographic key pair for each site, so same person + same site = same pseudonym, while same person + different sites = different, unlinkable pseudonyms.<p>"The actual correct way" is an overstatement that misses jfaganel99's point. There are always tradeoffs, and EUDI is no exception. It sacrifices full anonymity to prevent credential sharing: the site can't learn your identity, but it can recognize you across visits and build a behavioral profile under your pseudonym.</p>
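<p>The unlinkability property can be sketched in a few lines. This is a minimal illustration of the idea, not the actual EUDI wallet mechanism (which uses key pairs and attestations rather than a bare HMAC); the wallet secret and site names are hypothetical.

```python
import hashlib
import hmac


def site_pseudonym(wallet_secret: bytes, site_id: str) -> str:
    """Derive a stable per-site pseudonym from a wallet-held secret.

    Same secret + same site -> same pseudonym, so the site can recognize
    repeat visits. Different sites -> unlinkable pseudonyms, assuming
    HMAC-SHA256 behaves as a pseudorandom function.
    """
    return hmac.new(wallet_secret, site_id.encode(), hashlib.sha256).hexdigest()


secret = b"secret held only by the user's wallet"

# Recognizable across visits to the same site:
assert site_pseudonym(secret, "discord.com") == site_pseudonym(secret, "discord.com")
# Unlinkable across different sites:
assert site_pseudonym(secret, "discord.com") != site_pseudonym(secret, "twitch.tv")
```

<p>Neither site ever sees the secret or the user's identity, but each can accumulate a behavioral profile under its own pseudonym, which is exactly the tradeoff described above.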
]]></description><pubDate>Thu, 12 Feb 2026 17:31:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46991895</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46991895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46991895</guid></item><item><title><![CDATA[New comment by iepathos in "We mourn our craft"]]></title><description><![CDATA[
<p>If AI is good enough that juniors wielding it outproduce seniors, then the juniors are just... overhead. The company would cut them out and let AI report to a handful of senior architects who actually understand what's being built. You don't pay humans to be a slow proxy for a better tool.<p>If the tools get good enough to not need senior oversight, they're good enough to not need junior intermediaries either. The "juniors with jetpacks outpacing seniors" future is unrealistic and unstable—it either collapses into "AI + a few senior architects" or "AI isn't actually that reliable yet."</p>
]]></description><pubDate>Sun, 08 Feb 2026 15:02:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46934783</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46934783</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46934783</guid></item><item><title><![CDATA[New comment by iepathos in "TikTok's 'addictive design' found to be illegal in Europe"]]></title><description><![CDATA[
<p>Apparent hypocrisy and injustice in government policy is an ugly thing in the world that should be pointed out and eliminated through public awareness and scrutiny.</p>
]]></description><pubDate>Fri, 06 Feb 2026 13:15:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46912443</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46912443</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46912443</guid></item><item><title><![CDATA[New comment by iepathos in "TikTok's 'addictive design' found to be illegal in Europe"]]></title><description><![CDATA[
<p>Get a life that's more interesting than dish washing 4-8 hours a day.</p>
]]></description><pubDate>Fri, 06 Feb 2026 13:14:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46912425</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46912425</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46912425</guid></item><item><title><![CDATA[New comment by iepathos in "A sane but bull case on Clawdbot / OpenClaw"]]></title><description><![CDATA[
<p>Thought the same thing. There is no legal recourse if the bot drains the account and donates it to charity. The legal system's response to that is: don't give non-deterministic bots access to your bank account and 2FA. There is no further recourse. No bank or insurance company will cover this, and rightfully so. If he wanted to guard himself somewhat, he'd give the bot only a credit card he could cancel or stop payments on, the exact minimum he gives the human assistant.</p>
]]></description><pubDate>Wed, 04 Feb 2026 15:12:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46886794</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46886794</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46886794</guid></item><item><title><![CDATA[New comment by iepathos in "Ask HN: Do you have any evidence that agentic coding works?"]]></title><description><![CDATA[
<p>The default output from AI is much like the default output from experienced devs prioritizing speed over architecture to meet business objectives. Just like experienced devs, LLMs accept technical debt as leverage for velocity. This isn't surprising - most code in the world carries technical debt, so that's what the models trained on and learned to optimize for.<p>Technical debt, like financial debt, is a tool. The problem isn't its existence, it's unmanaged accumulation.<p>A few observations from my experience:<p>1. One-shotting - if you're prompting once and shipping, you're getting the "fast and working" version, not the "well-architected" version. Same as asking an experienced dev for a quick prototype.<p>2. AI can output excellent code - but it takes iteration, explicit architectural constraints, and often specialized tooling. The models have seen clean code too; they just need steering toward it.<p>3. The solution isn't debt-free commits. The solution is measuring, prioritizing, and reducing only the highest risk tech debt - the equivalent of focusing on bottlenecks with performance profiling. Which code is high-risk? Where's the debt concentrated? Poorly-factored code with good test coverage is low-risk. Poorly-tested code in critical execution paths is high-risk. Your CI pipeline needs to check the debt automatically for you just like it needs to lint and check your tests pass.<p>I built <a href="https://github.com/iepathos/debtmap" rel="nofollow">https://github.com/iepathos/debtmap</a> to solve this systematically for my projects. It measures technical debt density to prioritize risk, but more importantly for this discussion: it identifies the right context for an LLM to understand a problem without looking through the whole codebase. The output is designed to be used with an LLM for automated technical debt reduction. 
And because we're measuring debt before and after, we have a feedback loop - enabling the LLM to iterate effectively and see whether its refactoring had a positive impact or made things worse. That's the missing piece in most agentic workflows: measurement that closes the loop.<p>To your specific concern about shipping unreviewed code: I agree it's risky, but the review focus should shift from "is every line perfect" to "where are the structural risks, and are those paths well-tested?" If your code has low complexity everywhere, is well tested (always review tests), and passing everything, then ask yourself what you actually gain at that point from further investing your time over-engineering the lesser tech debt away? You can't eliminate all tech debt, but you can keep it from compounding in the places that matter.</p>
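<p>The prioritization described above (complexity discounted by test coverage, boosted for critical paths) can be sketched as a toy scoring function. The formula and weights here are hypothetical for illustration, not debtmap's actual model.

```python
def debt_risk(complexity: int, coverage: float, critical_path: bool) -> float:
    """Toy debt-risk score (hypothetical formula, not debtmap's).

    complexity:    e.g. cyclomatic complexity of a function
    coverage:      line coverage in [0.0, 1.0]
    critical_path: whether the code sits on a critical execution path
    """
    uncovered = 1.0 - coverage                     # 0.0 means fully tested
    base = complexity * (0.25 + 0.75 * uncovered)  # tests reduce risk, not to zero
    return base * (2.0 if critical_path else 1.0)


# Poorly-factored but well-tested code ranks below less complex but
# poorly-tested code on a critical path, matching the point above.
assert debt_risk(20, 0.9, False) < debt_risk(10, 0.1, True)
```

<p>A score like this is what closes the feedback loop: compute it before and after a refactor, and the LLM can see whether its change actually reduced risk.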
]]></description><pubDate>Wed, 21 Jan 2026 17:08:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46708435</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46708435</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46708435</guid></item><item><title><![CDATA[New comment by iepathos in "The Code-Only Agent"]]></title><description><![CDATA[
<p>The "code witness" concept falls apart under scrutiny. In practice, the agent isn't replacing ripgrep with pure Python, it's generating a Python wrapper that calls ripgrep via subprocess. So you get:<p>- Extra tokens to generate the wrapper<p>- New failure modes (encoding issues, exit code handling, stderr bugs)<p>- The same underlying tool call anyway<p>- No stronger guarantees - actually weaker ones, since you're now trusting both the tool AND the generated wrapper<p>The theoretical framing about "proofs as programs" and "semantic guarantees" sounds impressive, but the generated wrapper
doesn't provide stronger semantics than rg alone; it provides strictly weaker ones. The same holds for pretty much any CLI tool you have the AI wrap in Python instead of calling the battle-tested tool directly.<p>For actual development work, the artifact that matters is the code you're building, which is already tracked in source control. Nobody needs a "witness" of how the agent found the right file to edit, and if they do, agents have parseable logs. Direct tool calls are faster and more reliable, and the intermediate exploration steps are ephemeral scaffolding anyway.</p>
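<p>For concreteness, here is a sketch of the kind of generated wrapper being criticized. The binary parameter is my addition so the example isn't tied to having ripgrep installed; everything else is representative of what these wrappers look like.

```python
import subprocess


def search(pattern: str, path: str, binary: str = "rg") -> list[str]:
    """A typical generated 'witness' wrapper: it still shells out to
    ripgrep, adding failure modes rather than removing them."""
    result = subprocess.run(
        [binary, "-n", pattern, path],   # -n: prefix matches with line numbers
        capture_output=True,
        text=True,                       # text decoding can now fail here too
    )
    # rg (and grep) exit 1 on "no matches"; a wrapper that treats any
    # nonzero exit as an error -- a common generated bug -- breaks that case.
    if result.returncode not in (0, 1):
        raise RuntimeError(result.stderr.strip())
    return result.stdout.splitlines()
```

<p>Nothing here is semantically stronger than running rg -n pattern path directly; every guarantee still comes from ripgrep, and the wrapper only adds places to get exit codes, encodings, and stderr handling wrong.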
]]></description><pubDate>Mon, 19 Jan 2026 15:00:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46679748</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46679748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46679748</guid></item><item><title><![CDATA[New comment by iepathos in "You Need to Ditch VS Code"]]></title><description><![CDATA[
<p>Research on calculator use in early math education (notably the Hembree & Dessart meta-analysis of 79 studies) found that students given calculators performed better at math - including on paper-and-pencil tests without calculators. The hypothesis is that calculators handle computation, freeing cognitive bandwidth and time for problem-solving and conceptual understanding. Problem-solving and higher-level concepts matter far more than memorizing multiplication and division tables.<p>I think about this often when discussing AI adoption with people. It's also relevant to this VS Code discussion, which is tangential to the broader AI-assisted development discussion. This post conflates tool proficiency with understanding. You can deeply understand Git's DAG model while never typing git reflog. Conversely, you can memorize every terminal command and still design terrible systems.<p>The scarce resource for most developers isn't "knows terminal commands" - it's "can reason about complex systems under uncertainty." If a tool frees up bandwidth for that, that's a net win. Not to throw shade at hyper-efficient terminal users - I live in the terminal and recommend it - but using it instead of an IDE for writing code isn't going to make you a better programmer by itself. Reasoning about and understanding complex systems isn't what living in a terminal gives you. You gain efficiency, flexibility, and nerd cred - all valuable, but none of them is systems thinking.<p>The auto-complete point in the post is particularly ironic given how critical it is for terminal users and that most vim users also rely heavily on auto-complete. Auto-complete does not limit your effectiveness; it's provably the opposite.</p>
]]></description><pubDate>Tue, 30 Dec 2025 11:50:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46432374</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46432374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46432374</guid></item><item><title><![CDATA[New comment by iepathos in "Rob Pike goes nuclear over GenAI"]]></title><description><![CDATA[
<p>Thanks for the thoughtful reply.<p>The objections to non-profits, OSFs, education, healthcare, and small companies all boil down to: they don't pay enough or they're inconvenient. Those are valid personal reasons, but not moral justifications. You decided you wanted the money big tech delivers and are willing to exchange ethics for that. That's fine, but own it. It's not some inevitable prostitution everyone must do. Plenty of people make the other choice.<p>The Google/AI distinction still doesn't hold. Anthropic and OpenAI also created products with clear utility. If Google gets "mixed bag" status because of Docs and Maps (products that exist largely just to feed their ad machine), why is AI "unquestionable cancer"? You're claiming Google's useful products excuse their harms, but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it.</p>
]]></description><pubDate>Fri, 26 Dec 2025 22:03:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46396800</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46396800</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46396800</guid></item><item><title><![CDATA[New comment by iepathos in "Rob Pike goes nuclear over GenAI"]]></title><description><![CDATA[
<p>> As IT workers, we all have to prostitute ourselves to some extent.<p>No, we really don't. There are plenty of places to work that aren't morally compromised - non-profits, open source foundations, education, healthcare tech, small companies solving real problems. The "we all have to" framing is a convenient way to avoid examining your own choices.<p>And it's telling that this framing always seems to appear when someone is defending their own employer. You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer") - so you clearly believe these distinctions matter even though Google itself is an AI company.</p>
]]></description><pubDate>Fri, 26 Dec 2025 20:37:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46395973</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46395973</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46395973</guid></item><item><title><![CDATA[New comment by iepathos in "Europeans' health data sold to US firm run by ex-Israeli spies"]]></title><description><![CDATA[
<p>What specific legal recourse beyond what exists? You can already sue for breach of contract if a company violates their privacy policy. The real problems are: (1) detecting violations in the first place, and (2) proving/quantifying damages. A 'guarantee' doesn't solve either.</p>
]]></description><pubDate>Sun, 14 Dec 2025 18:35:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46265514</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46265514</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46265514</guid></item><item><title><![CDATA[New comment by iepathos in "Why are 38 percent of Stanford students saying they're disabled?"]]></title><description><![CDATA[
<p>I agree with you that cheating is a loaded word, but the argument at the end here, that because the rules or standards enable users to work around them it isn't cheating, is a bad semantic argument. The exact same argument could excuse every kind of rule breaking that people do. If a hacker drains a billion dollars out of a smart contract, they were only able to do so because the coded rules of the contract itself enabled it through whatever flaw the hacker identified. That doesn't make it any less illegal, or any less cheating, for the hacker. It feels like victim blaming to point the finger at the institution being exploited, or at the people who get hacked, and say it's their problem rather than the fault of the individuals intentionally exploiting them.</p>
]]></description><pubDate>Thu, 04 Dec 2025 19:56:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46152124</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46152124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46152124</guid></item><item><title><![CDATA[New comment by iepathos in "I don't care how well your "AI" works"]]></title><description><![CDATA[
<p>Hyper-paranoid statements like "I personally don’t touch LLMs with a stick. I don’t let them near my brain" are fairly worrisome coming from a technical person who claims to have any understanding of AI, and they undermine the credibility of the critique. There is some truth in here, but it's buried beneath a lot of paranoia that's hard to sift through.</p>
]]></description><pubDate>Wed, 26 Nov 2025 18:03:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46060439</link><dc:creator>iepathos</dc:creator><comments>https://news.ycombinator.com/item?id=46060439</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46060439</guid></item></channel></rss>