<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jpollock</title><link>https://news.ycombinator.com/user?id=jpollock</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 11:50:04 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jpollock" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jpollock in "Async Python Is Secretly Deterministic"]]></title><description><![CDATA[
<p>That's deterministic dispatch; as soon as it forks or communicates, it is non-deterministic again?<p>Don't you need something like a network clock to get deterministic replay?<p>It can't use immediate return on replay, or else the order will change.<p>This makes me twitchy. The dependencies should be better modelled, and idempotency used instead of logging and caching.</p>
]]></description><pubDate>Fri, 03 Apr 2026 20:11:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47631601</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47631601</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47631601</guid></item><item><title><![CDATA[New comment by jpollock in "The Claude Code Source Leak: fake tools, frustration regexes, undercover mode"]]></title><description><![CDATA[
<p>Yes, it sets the reviewer's expectations around how much effort was spent reviewing the code before it was sent.<p>I regularly have tool-generated commits. I send them out with a reference to the tool, what the process is, how much it's been reviewed and what the expectation is of the reviewer.<p>Otherwise, they all assume "human authored" and "human sponsored". Reviewers will then send comments (instead of proposing the fix themselves). When you're wrangling several hundred changes, that becomes unworkable.</p>
]]></description><pubDate>Wed, 01 Apr 2026 00:13:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47595184</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47595184</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47595184</guid></item><item><title><![CDATA[New comment by jpollock in "Android Developer Verification"]]></title><description><![CDATA[
<p>That can have some very extreme legal ramifications.<p>Consider - it's a voip dialing client which has a requirement to provide location for E911 support.<p>If the OS vendor starts providing invalid data, it's the OS vendor which ends up being liable for the person's death.<p>e.g. <a href="https://www.cnet.com/home/internet/texas-sues-vonage-over-911-problem/" rel="nofollow">https://www.cnet.com/home/internet/texas-sues-vonage-over-91...</a><p>which is from 2005, but gives you an idea of the liability involved.</p>
]]></description><pubDate>Tue, 31 Mar 2026 04:43:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47582844</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47582844</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47582844</guid></item><item><title><![CDATA[New comment by jpollock in "We should revisit literate programming in the agent era"]]></title><description><![CDATA[
<p>Documentation rots a lot more quickly than the code - it doesn't need to be correct for the code to work. You are usually better off ignoring the comments (even more so the design document) and going straight to the code.</p>
]]></description><pubDate>Sun, 08 Mar 2026 22:09:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47302108</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47302108</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47302108</guid></item><item><title><![CDATA[New comment by jpollock in "Labor market impacts of AI: A new measure and early evidence"]]></title><description><![CDATA[
<p>The last time I tried AI, I tested it with a stopwatch.<p>The group used feature flags...<p><pre><code>    if (a) {
        // new code
    } else {
        // old code
    }

    void testOff() {
        disableFlag(a);
        // test it still works
    }

    void testOn() {
        enableFlag(a);
        // test it still works
    }
</code></pre>
However, as with any cleanup, it doesn't happen. We have thousands of these things lying around taking up space. I thought "I can give this to the AI, it won't get bored or complain."<p>I can do one flag in ~3 minutes: code edit, PR prepped and sent.<p>The AI can do one in 10 minutes, but I couldn't look away. It kept trying to use find/grep to search through a huge repo to find symbols (instead of the MCP service).<p>Then it ignored instructions and didn't clean up one or the other test, left unused fields or parameters, and generally made a mess.<p>Finally, I needed to review and fix the results, taking another 3-5 minutes, with no guarantee that it compiled.<p>At that point, a task that takes me 3 minutes has taken me 15.<p>Sure, it made code changes, and felt "cool", but it cost the company 5x the cost of not using the AI (before even considering the token cost).<p>Even worse, the CI/CD system couldn't keep up with my individual velocity of cleaning these up; using an automated tool? Yeah, not going to be pleasant.<p>However, I need to try again, everyone's saying there was a step change in December.</p>
]]></description><pubDate>Fri, 06 Mar 2026 08:16:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47272353</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47272353</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47272353</guid></item><item><title><![CDATA[New comment by jpollock in "Nobody gets promoted for simplicity"]]></title><description><![CDATA[
<p>Won't that show up in ROI numbers?</p>
]]></description><pubDate>Wed, 04 Mar 2026 19:24:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47252507</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47252507</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47252507</guid></item><item><title><![CDATA[New comment by jpollock in "Nobody gets promoted for simplicity"]]></title><description><![CDATA[
<p>There are definite discontinuities in there. What works for a team of 5 is different to 50 is different to 500.<p>Even just taking fault incidence rates, assuming constant injection per dev hour...</p>
]]></description><pubDate>Wed, 04 Mar 2026 19:23:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47252499</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47252499</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47252499</guid></item><item><title><![CDATA[New comment by jpollock in "When AI writes the software, who verifies it?"]]></title><description><![CDATA[
<p>If the LLM is able to code it, there is enough training data that you might be better off in a different language that removes the boilerplate.</p>
]]></description><pubDate>Tue, 03 Mar 2026 22:51:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47240235</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47240235</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47240235</guid></item><item><title><![CDATA[New comment by jpollock in "What does " 2>&1 " mean?"]]></title><description><![CDATA[
<p>There are a couple of ways to figure it out.<p>Open a terminal (OSX/Linux) and type:<p><pre><code>    man dup
</code></pre>
Open a browser window and search for:<p><pre><code>    man dup
</code></pre>
Both will bring up the man page for the function call.<p>To get recursive, you can try:<p><pre><code>    man man unix
</code></pre>
(the unix is important, otherwise it gives you manly men)</p>
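<p>The duplication that man page describes is exactly what 2>&1 asks the shell for: copy file descriptor 1 onto file descriptor 2 (a dup2(1, 2) under the hood). A minimal sketch you can paste into that same terminal (editor's illustration, not part of the quoted comment):</p>

```shell
# Command substitution captures only stdout, so without the redirection
# the stderr line escapes (here we discard it to keep the demo quiet).
only_stdout=$( { echo out; echo err >&2; } 2>/dev/null )

# "2>&1" duplicates fd 1 onto fd 2, so stderr now flows to the same
# place as stdout and is captured as well.
both=$( { echo out; echo err >&2; } 2>&1 )

echo "$only_stdout"   # out
echo "$both"          # out and err, each on its own line
```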
]]></description><pubDate>Thu, 26 Feb 2026 23:52:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47174093</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47174093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47174093</guid></item><item><title><![CDATA[New comment by jpollock in "Turn Dependabot off"]]></title><description><![CDATA[
<p>There is _always_ fraud, and you can't stop it all. All you can do is try to minimize the cost of the fraud.<p>There is an "acceptable" fraud rate from a payment processor. This explains why there are different rates for "card present" and "card not present" transactions, and why things like Apple Pay and Google Pay are popular with merchants.</p>
]]></description><pubDate>Sun, 22 Feb 2026 08:51:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47109461</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47109461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47109461</guid></item><item><title><![CDATA[New comment by jpollock in "Turn Dependabot off"]]></title><description><![CDATA[
<p>Literal blackmailing, same as ransomware.</p>
]]></description><pubDate>Sat, 21 Feb 2026 08:27:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47098674</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47098674</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47098674</guid></item><item><title><![CDATA[New comment by jpollock in "Turn Dependabot off"]]></title><description><![CDATA[
<p>If the majority of your customers are good, failing closed will cost more than the fraud during the anti-fraud system's downtime.</p>
]]></description><pubDate>Fri, 20 Feb 2026 23:47:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47095707</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47095707</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47095707</guid></item><item><title><![CDATA[New comment by jpollock in "Turn Dependabot off"]]></title><description><![CDATA[
<p>The severity of the DoS depends on the system being attacked, and how it is configured to behave on failure.<p>If the system is configured to "fail open", and it's something validating access (say anti-fraud), then the DoS becomes a fraud hole and profitable to exploit. Once discovered, this runs away _really_ quickly.<p>Treating DoS as affecting availability converts the issue into a "do I want to spend $X from a shakedown, or $Y to avoid being shaken down in the first place?"<p>Then, "what happens when people find out I pay out on shakedowns?"</p>
]]></description><pubDate>Fri, 20 Feb 2026 23:11:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47095348</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=47095348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47095348</guid></item><item><title><![CDATA[New comment by jpollock in "Software factories and the agentic moment"]]></title><description><![CDATA[
<p>Have these people done the math on how many engineers they can hire in other countries for USD$200k/yr? If you choose the timezone properly, they will even work overnight (your time) and have things ready in the morning for you.<p>USD$200k is 3 engineers in New Zealand.<p><a href="https://www.levels.fyi/t/software-engineer/locations/new-zealand" rel="nofollow">https://www.levels.fyi/t/software-engineer/locations/new-zea...</a></p>
]]></description><pubDate>Sun, 08 Feb 2026 03:17:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46931009</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=46931009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46931009</guid></item><item><title><![CDATA[New comment by jpollock in "I spent 5 years in DevOps – Solutions engineering gave me what I was missing"]]></title><description><![CDATA[
<p>In my career, DevOps was never a separate organization. It was a role assumed by the code owners. SRE (is it up, is the hardware working, is the network working?) was separate, and had different metrics.<p>Having separate teams makes it adversarial because both orgs end up reporting into separate hierarchies with independent goals.<p>Think about the metrics each team is measured on. Who resolves conflicts between them? How high up the org chart is it necessary to go to resolve the conflict? Can one team make different tradeoffs on code quality vs speed from another, or is it company-wide?</p>
]]></description><pubDate>Sat, 07 Feb 2026 01:44:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46920481</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=46920481</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46920481</guid></item><item><title><![CDATA[New comment by jpollock in "I spent 5 years in DevOps – Solutions engineering gave me what I was missing"]]></title><description><![CDATA[
<p>I think there are different definitions of DevOps.<p>I see a difference between a dedicated operations team (SRE) vs an engineering team having responsibility for how their service works in production (DevOps).<p>DevOps is something that all teams should be doing - there's no point in writing code that spends its life generating problems for customers or other teams, and having the problems arrive at the owners results in them being properly prioritized.<p>In smaller orgs, DevOps and SRE might be together, but it should still be a rotation instead of a full-time role, and everyone should be doing it.<p>Engineers who don't do DevOps write code that looks like:<p><pre><code>  if (should_never_happen) {
    log.error("owner=wombat@example.com it happened again");
  }

</code></pre>
Whereas the engineer who does do DevOps writes code that avoids the error condition entirely (usually possible), or decides what the code should do in that situation (not log).</p>
]]></description><pubDate>Sat, 07 Feb 2026 00:14:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46919931</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=46919931</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46919931</guid></item><item><title><![CDATA[New comment by jpollock in "I spent 5 years in DevOps – Solutions engineering gave me what I was missing"]]></title><description><![CDATA[
<p>I have a different opinion. :) DevOps is great feedback to the engineering team.<p>Too many alarms or alarms at unsocial hours? The engineering team should feel that pain.<p>Too hard to push? The engineering team should feel that pain.<p>Strange hard to diagnose alarms? Yep, the engineering team should feel that pain!<p>The feedback is very important to keeping the opex costs under control.<p>However, I think the author and I have different opinions on what DevOps is. DevOps isn't a full time role. It's what the engineer does to get their software into production.</p>
]]></description><pubDate>Fri, 06 Feb 2026 23:19:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46919523</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=46919523</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46919523</guid></item><item><title><![CDATA[New comment by jpollock in "Show HN: Autonomous recovery for distributed training jobs"]]></title><description><![CDATA[
<p>Measurement and alerting are usually done on business metrics, not the causes. That way you catch classes of problems.<p>Not sure about expected loss - that's a decay rate?<p>But stuck jobs are caught via tasks being processed and average latency.</p>
]]></description><pubDate>Thu, 29 Jan 2026 21:40:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46817029</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=46817029</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46817029</guid></item><item><title><![CDATA[New comment by jpollock in "LLM-as-a-Courtroom"]]></title><description><![CDATA[
<p>Is the LLM an expensive way to solve this? Would a more predictive model type be better? Then the LLM summarizes the PR and the model predicts the likelihood of needing to update the doc?<p>Does using an LLM help avoid the cost of training a more specific model?</p>
]]></description><pubDate>Tue, 27 Jan 2026 23:33:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46788726</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=46788726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46788726</guid></item><item><title><![CDATA[New comment by jpollock in "ICE using Palantir tool that feeds on Medicaid data"]]></title><description><![CDATA[
<p>One way to use this data is to increase the success rate of random stops.<p>1) Take the medicaid data.<p>2) Join that with rental/income data.<p>3) Look for neighborhoods with cheap rents/low income and low medicaid rates.<p>Dragnet those neighborhoods.</p>
]]></description><pubDate>Mon, 26 Jan 2026 01:22:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46760623</link><dc:creator>jpollock</dc:creator><comments>https://news.ycombinator.com/item?id=46760623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46760623</guid></item></channel></rss>