<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: idopmstuff</title><link>https://news.ycombinator.com/user?id=idopmstuff</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 13 May 2026 18:17:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=idopmstuff" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by idopmstuff in "AI is breaking two vulnerability cultures"]]></title><description><![CDATA[
<p>If we assume that there will be an AI that is perfect in terms of ability to find vulnerabilities, cheap to run and widely available to everyone, then anyone can run it on any piece of software before deploying it. All vulnerabilities get found before they can be exploited.<p>One of the big challenges with cybersecurity is that attackers only need to find one exploit, while defenders need to stop everything. When you have a large surface area and limited resources, it's much easier to be the side that only has to succeed once. AI eliminates the limited resources problem.</p>
]]></description><pubDate>Fri, 08 May 2026 21:43:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=48069145</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=48069145</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48069145</guid></item><item><title><![CDATA[Mining WhatsApp, WeChat, Alibaba, Gmail to Create a Unified Supplier Dashboard]]></title><description><![CDATA[
<p>Article URL: <a href="https://theautomatedoperator.substack.com/p/mining-whatsapp-wechat-alibaba-and">https://theautomatedoperator.substack.com/p/mining-whatsapp-wechat-alibaba-and</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=48050888">https://news.ycombinator.com/item?id=48050888</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 07 May 2026 15:51:20 +0000</pubDate><link>https://theautomatedoperator.substack.com/p/mining-whatsapp-wechat-alibaba-and</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=48050888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48050888</guid></item><item><title><![CDATA[New comment by idopmstuff in "OpenAI’s o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors"]]></title><description><![CDATA[
<p>Nope, wrong again - SOTA LLMs get the car wash thing right.<p>If you're going to have strong opinions on a topic, it would behoove you to keep up with it.</p>
]]></description><pubDate>Wed, 06 May 2026 18:49:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=48040044</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=48040044</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48040044</guid></item><item><title><![CDATA[New comment by idopmstuff in "OpenAI’s o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors"]]></title><description><![CDATA[
<p>I'm not. I understand the difference, and also that through improvements to the core models as well as harnesses, LLMs are able to handle an increasing share of tasks. I also understand that these things will continue to improve until AI can automate entire jobs.<p>You, on the other hand, are confusing LLMs from the past with current SOTA LLMs, which can tell how many r's are in "strawberry."</p>
]]></description><pubDate>Mon, 04 May 2026 22:37:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=48015878</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=48015878</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48015878</guid></item><item><title><![CDATA[New comment by idopmstuff in "OpenAI’s o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors"]]></title><description><![CDATA[
<p>It seems likely to me that doctors whose job is almost or entirely about making diagnoses and prescribing treatments won't be able to keep up in the long run, while those who are more patient-facing will still be around even after AI is better than us at just about everything.<p>If I were picking a specialty now, I'd go with pediatrics or psychiatry over something like oncology.</p>
]]></description><pubDate>Sun, 03 May 2026 21:58:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=48001983</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=48001983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48001983</guid></item><item><title><![CDATA[New comment by idopmstuff in "AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights"]]></title><description><![CDATA[
<p>Interesting, thanks for testing.<p>I feel like a more detailed prompt and/or some scaffolding (have it extract experience, put it in a structured format, and give numerical ratings against specific criteria, then use all of that) would be able to consistently get the right result, but I am too lazy to actually test.</p>
]]></description><pubDate>Sat, 02 May 2026 19:08:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47989412</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47989412</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47989412</guid></item><item><title><![CDATA[New comment by idopmstuff in "LLMs consistently pick resumes they generate over ones by humans or other models"]]></title><description><![CDATA[
<p>Even if we take this to be true, I'm not sure that it really matters?<p>It's comparing two resumes with the same information and picking one of the two. That's obviously a situation that would never occur in actual hiring. This doesn't demonstrate anything at all that indicates that LLMs would incorrectly preference LLM-written resumes in the real world.<p>It'd be interesting to do the same thing but with two resumes that are almost identical. One is slightly better (an extra year of experience or a specific note of some skill that is relevant to the role), and the other slightly worse one is written by an LLM. If the reviewing LLM picks the worse one in that case, you're potentially establishing a bias that would matter. As it stands this experiment just seems contrived and pointless.</p>
]]></description><pubDate>Sat, 02 May 2026 16:36:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47987873</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47987873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47987873</guid></item><item><title><![CDATA[Four Ways ChatGPT Images 2.0 Can Be Useful for Your Business]]></title><description><![CDATA[
<p>Article URL: <a href="https://theautomatedoperator.substack.com/p/three-ways-chatgpt-images-20-can">https://theautomatedoperator.substack.com/p/three-ways-chatgpt-images-20-can</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47963301">https://news.ycombinator.com/item?id=47963301</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 30 Apr 2026 14:42:06 +0000</pubDate><link>https://theautomatedoperator.substack.com/p/three-ways-chatgpt-images-20-can</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47963301</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47963301</guid></item><item><title><![CDATA[Creating a Dashboard with Claude Design]]></title><description><![CDATA[
<p>Article URL: <a href="https://theautomatedoperator.substack.com/p/creating-a-dashboard-with-claude">https://theautomatedoperator.substack.com/p/creating-a-dashboard-with-claude</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47955667">https://news.ycombinator.com/item?id=47955667</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 29 Apr 2026 22:40:36 +0000</pubDate><link>https://theautomatedoperator.substack.com/p/creating-a-dashboard-with-claude</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47955667</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47955667</guid></item><item><title><![CDATA[New comment by idopmstuff in "Anonymous request-token comparisons from Opus 4.6 and Opus 4.7"]]></title><description><![CDATA[
<p>Some people talk like skill atrophy is inevitable when you use LLMs, which strikes me as pretty absurd given that you are talking about a tool that will answer an infinite number of questions with infinite patience.<p>I usually learn way more by having Claude do a task and then quizzing it about what it did than by figuring out how to do it myself. When I have to figure out how to do the thing, it takes much more time, so when I'm done I have to move on immediately. When Claude does the task in ten minutes I now have several hours I can dedicate entirely to understanding.</p>
]]></description><pubDate>Sat, 18 Apr 2026 18:34:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47818290</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47818290</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47818290</guid></item><item><title><![CDATA[15 Ways I'm Using AI to Manage My Small Business]]></title><description><![CDATA[
<p>Article URL: <a href="https://theautomatedoperator.substack.com/p/15-ways-im-using-ai-to-manage-my">https://theautomatedoperator.substack.com/p/15-ways-im-using-ai-to-manage-my</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47794496">https://news.ycombinator.com/item?id=47794496</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 16 Apr 2026 15:18:56 +0000</pubDate><link>https://theautomatedoperator.substack.com/p/15-ways-im-using-ai-to-manage-my</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47794496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47794496</guid></item><item><title><![CDATA[New comment by idopmstuff in "Small models also found the vulnerabilities that Mythos found"]]></title><description><![CDATA[
<p>> 'Or none' is ruled out since it found the same vulnerability<p>It's not, though. It wasn't asked to find vulnerabilities over 10,000 files - it was asked to find a vulnerability in the one particular place in which the researchers knew there was a vulnerability. That's not proof that it would have found the vulnerability if it had been given a much larger surface area to search.</p>
]]></description><pubDate>Sat, 11 Apr 2026 20:33:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47733778</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47733778</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47733778</guid></item><item><title><![CDATA[New comment by idopmstuff in "Who is Satoshi Nakamoto? My quest to unmask Bitcoin's creator"]]></title><description><![CDATA[
<p>The funny thing is that the author uses your exact logic when he finds evidence that goes against his hypothesis. He made posts that asked questions about things that Satoshi definitely would've known? Misdirection! Somebody else does it? Strong evidence against them!</p>
]]></description><pubDate>Wed, 08 Apr 2026 23:57:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47697699</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47697699</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47697699</guid></item><item><title><![CDATA[Amazon Ads with Claude Pt. 2: Making Skills]]></title><description><![CDATA[
<p>Article URL: <a href="https://theautomatedoperator.substack.com/p/amazon-ads-with-claude-pt-2-making">https://theautomatedoperator.substack.com/p/amazon-ads-with-claude-pt-2-making</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47616187">https://news.ycombinator.com/item?id=47616187</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 02 Apr 2026 15:55:48 +0000</pubDate><link>https://theautomatedoperator.substack.com/p/amazon-ads-with-claude-pt-2-making</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47616187</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47616187</guid></item><item><title><![CDATA[New comment by idopmstuff in "Founder of GitLab battles cancer by founding companies"]]></title><description><![CDATA[
<p>I am watching DTF St. Louis (which is not a terrible reality show about a third-tier city like the title implies, but actually a Jason Bateman kind of dark comedy/whodunit), and Peyronie's features in the story. The show also has the first commercials I've ever seen for a Peyronie's treatment, and apparently its maker is an official ad partner of the show. I wonder if some enterprising show exec decided to go pitch the perfect sponsorship or if the company making a treatment commissioned a show...</p>
]]></description><pubDate>Sat, 28 Mar 2026 22:23:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47558633</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47558633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47558633</guid></item><item><title><![CDATA[New comment by idopmstuff in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>Hah, thanks but unfortunately I quit and started a business a couple of years ago, in no small part because I didn't want to spend my time maneuvering to kill stupid ideas.</p>
]]></description><pubDate>Mon, 23 Mar 2026 15:51:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47491178</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47491178</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47491178</guid></item><item><title><![CDATA[New comment by idopmstuff in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>Hahaha yes, this is absolutely true, but oftentimes so much more work.</p>
]]></description><pubDate>Sun, 22 Mar 2026 18:16:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47480433</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47480433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47480433</guid></item><item><title><![CDATA[New comment by idopmstuff in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>Yeah, I don't really accept the argument that AI makes mistakes and therefore cannot be trusted to write production code (in general, at least - obviously it depends on the types of mistakes, which code, etc.).<p>The reality is we have built complex organizational structures around the fact that humans also make mistakes, and there's no real reason you can't use the same structures for AI. You have someone write the code, then someone does code review, then someone QAs it.<p>Even after it goes out to production, you have a customer support team and a process for them to file bug tickets. You have customer success managers to smooth over the relationships when things go wrong. In really bad cases, you've got the CEO getting on a plane to take the important customer out for drinks.<p>I've worked at startups that made a conscious decision to choose speed of development over quality. Whether or not it was the right decision is arguable, but the reality is they did so knowing that meant customers would encounter bugs. A couple of those startups are valued at multiple billions of dollars now. Bugs just aren't the end of the world (again, in most cases - I worked on B2B SaaS, not medical devices or what have you).</p>
]]></description><pubDate>Sun, 22 Mar 2026 18:06:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47480316</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47480316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47480316</guid></item><item><title><![CDATA[New comment by idopmstuff in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>As a former PM, I will say that if you want to stop something from happening at your company, the best route is to come off very positive about it initially. This is critical because it gives you credibility. After my first few years of PMing, I developed a reflex that any time I heard a deeply stupid proposal, I would enthusiastically ask if I could take the lead on scoping it out.<p>I would do the initial research/planning/etc. mostly honestly and fairly. I'd find the positives, build a real roadmap and lead meetings where I'd work to get people on board.<p>Then I'd find the fatal flaw. "Even though I'm very excited about this, as you know, dear leadership, I have to be realistic that in order to do this, we'd need many more resources than the initial plan because of these devastating unexpected things I have discovered! Drat!"<p>I would then propose options. Usually three: continue with the full scope but expand the resources (knowing full well that the additional resources required cannot be spared), drastically cut scope and proceed, or shelve it until some specific thing changes. You want to give the specific thing because that makes them feel like there's a good, concrete reason to wait and you're not just punting for vague, hand-wavy reasons.<p>Then the thing that we were waiting on happens, and I forget to mention it. Leadership's excited about something else by that point anyway, so we never revisit the dumb project again.<p>Some specific thoughts for you:<p>1. Treat their arguments seriously. If they're handwaving your arguments away, don't respond by handwaving their arguments away, even if you think they're dumb. Even if they don't fully grasp what they're talking about, you can at least concede that agents and models will improve and that will help with some issues in the future.<p>2. Having conceded that, they're now more likely to listen to you when you tell them that while it's definitely important to think about a future where agents are better, you've got to deal with the codebase right now.<p>3. Put the problems in terms they'll understand. They see the agent that wrote this feature really quickly, which is good. You need to pull up the tickets that the senior developers on the team had to spend time on to fix the code that the agent wrote. Give the tradeoff - what new features were those developers not working on because they were spending time here?<p>4. This all works better if you can position yourself as the AI expert. I'd try to pitch a project of creating internal evals for the stuff that matters in your org to try with new models when they come out. If you've volunteered to take something like that on and can give them the honest take that GPT-5.5 is good at X but terrible at Y, they're probably going to listen to that much more than if they feel like you're reflexively against AI.</p>
]]></description><pubDate>Sun, 22 Mar 2026 18:02:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47480261</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47480261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47480261</guid></item><item><title><![CDATA[New comment by idopmstuff in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>I don't know that people are saying code is dead (at least not the ones who have even a vague understanding of AI's role) - more that humans are moving up a level of abstraction in their inputs. Rather than writing code, they can write specs in English and have AI write the code, much in the same way that humans moved from writing assembly to writing higher-level code.<p>But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English. There are probably a lot of cases where you could write an instruction unambiguously in English, but it'd end up being much longer because English is much less precise than any coding language.<p>I think we'll see the same in photo and video editing as AI gets better at that. If I need to make a change to a photo, I'll be able to ask a computer, and it'll be able to do it. But if I need the change to be pixel-perfect, it'll be much more efficient to just do it in Photoshop than to describe the change in English.<p>But much like with photo editing, there'll be a lot of cases where you just don't need a level of specificity high enough to warrant a coding language. I build tools for myself using AI, and as long as they do what I expect them to do, they're fine. The code's probably not the best, but that just doesn't matter for my case.<p>(There are of course also issues of code quality, tech debt, etc., but I think that as AI gets better and better over the next few years, it'll be able to write reliable, secure, production-grade code better than humans anyway.)</p>
]]></description><pubDate>Sun, 22 Mar 2026 17:40:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47480015</link><dc:creator>idopmstuff</dc:creator><comments>https://news.ycombinator.com/item?id=47480015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47480015</guid></item></channel></rss>