<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: SaucyWrong</title><link>https://news.ycombinator.com/user?id=SaucyWrong</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 09 May 2026 09:28:17 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=SaucyWrong" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by SaucyWrong in "Programming Still Sucks"]]></title><description><![CDATA[
<p>This was beautiful. I also appreciated the backlink to Peter Welch’s essay, the spiritual ancestor of this one, which I had forgotten how to find and had the joy of reading again.</p>
]]></description><pubDate>Wed, 06 May 2026 23:33:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=48043289</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=48043289</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48043289</guid></item><item><title><![CDATA[New comment by SaucyWrong in "Our agent found a bug with WireGuard in Google Kubernetes Engine"]]></title><description><![CDATA[
<p>Came here to say this. It’s a shame that I’m so exhausted from reading slop that I’m probably missing many interesting stories from the industry.</p>
]]></description><pubDate>Fri, 01 May 2026 14:03:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47974955</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=47974955</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47974955</guid></item><item><title><![CDATA[New comment by SaucyWrong in "Claude-powered AI coding agent deletes company database in 9 seconds"]]></title><description><![CDATA[
<p>Reckless engineering team deletes their own production DB. Blames everyone else. Old news.</p>
]]></description><pubDate>Tue, 28 Apr 2026 00:56:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47929238</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=47929238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47929238</guid></item><item><title><![CDATA[New comment by SaucyWrong in "Tired of AI When will this era end?"]]></title><description><![CDATA[
<p>You aren’t alone. If you look at my favorite submissions you’ll find articles by others who feel the same way.<p>My advice: keep your skills and brain sharp, and use LLMs in your workflow a little to stay under management’s radar. Try to find a line where you can still have fun in this job while sprinkling in some LLM use. Avoid the hype on this site and others.<p>It is not wrong to enjoy coding for the sake of the craft. Suspect anybody who tells you otherwise.<p>How well has this advice worked for me? I can’t say yet; I’m still trying to figure it out too. But I’m with you. I’ve been called a Luddite and worse, but if I’m forced to give away all the parts of this job I love most, I’d rather do something else with my life. Maybe it will come to that. I wish you the best of luck.</p>
]]></description><pubDate>Wed, 25 Mar 2026 23:27:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47524684</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=47524684</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47524684</guid></item><item><title><![CDATA[New comment by SaucyWrong in "Thoughts on slowing the fuck down"]]></title><description><![CDATA[
<p>This is a great point, and I routinely use it as an argument for why seasoned professionals should work hard to keep their skills and why new professionals should build them in the first place. I would never be comfortable leasing my ability to perform detailed knowledge work from one of these companies.<p>Sometimes the argument lands; very often it doesn't. As you said, a common refrain is, "but prices won't go up, cost to serve is the highest it will ever be." Or, "inference is already massively profitable and will become more so in the future--I read so on a news site."<p>And that remark, for me, is unfortunately a discussion-ender. I have never had a productive conversation with somebody about this after they make these remarks. Somebody saying these things has already placed their bets and is about to throw the dice.</p>
]]></description><pubDate>Wed, 25 Mar 2026 18:46:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47521511</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=47521511</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47521511</guid></item><item><title><![CDATA[New comment by SaucyWrong in "Claude Code weekly rate limits"]]></title><description><![CDATA[
<p>One way I've seen personally: folks are using tools that drive many Claude Code sessions at once, via something like git worktree, as a way of multitasking in a single codebase. Even with garden-variety model use, these folks routinely hit the existing five-hour rate limits.</p>
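<p>A minimal sketch of that multi-worktree setup, assuming only that git is installed (the repo and branch names here are invented for illustration, and a throwaway repo is created so the commands run standalone):</p>

```shell
# Hypothetical sketch: one git worktree per concurrent agent session.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
# Each worktree is an isolated checkout sharing the same object store,
# so edits in one tree never collide with edits in another:
git worktree add -b task-a ../demo-task-a
git worktree add -b task-b ../demo-task-b
git worktree list   # main tree plus one line per added worktree
# A separate Claude Code session would then be launched from each worktree.
```

<p>Each session gets its own working directory and branch, which is what makes running several of them against one codebase practical in the first place.</p>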
]]></description><pubDate>Mon, 28 Jul 2025 19:41:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=44714685</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=44714685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44714685</guid></item><item><title><![CDATA[New comment by SaucyWrong in "Why email startups fail"]]></title><description><![CDATA[
<p>I’m with you. I learned about the concept of “implicit opt-in / consent” while I was building an email marketing feature on a platform and I found the concept disgusting, but was told that because it’s technically legal, our customers considered it table stakes.</p>
]]></description><pubDate>Wed, 02 Jul 2025 23:15:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44449832</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=44449832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44449832</guid></item><item><title><![CDATA[New comment by SaucyWrong in "Why I Won't Use AI"]]></title><description><![CDATA[
<p>This article cannot be reduced to "Technology. Bad." You said it yourself--they are a software engineer. They would not be in that vocation if they did not understand, on some level, that technology is useful, good, and valuable. I'll give you the benefit of the doubt and assume you skipped the first three paragraphs, where they were more than clear that this was simply an essay on why they have decided to avoid this particular technology.</p>
]]></description><pubDate>Thu, 19 Jun 2025 03:23:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44315167</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=44315167</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44315167</guid></item><item><title><![CDATA[New comment by SaucyWrong in "Why I Won't Use AI"]]></title><description><![CDATA[
<p>In what way? Are software engineers not laborers? Is it not possible for laborers that use technology in their labor to be exploited by capital?</p>
]]></description><pubDate>Wed, 18 Jun 2025 23:37:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=44314174</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=44314174</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44314174</guid></item><item><title><![CDATA[New comment by SaucyWrong in "Why I Won't Use AI"]]></title><description><![CDATA[
<p>Comparing typesetting to using AI to do knowledge work is about as apples-to-oranges as it gets, and I think you know it.</p>
]]></description><pubDate>Wed, 18 Jun 2025 23:31:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44314143</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=44314143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44314143</guid></item><item><title><![CDATA[New comment by SaucyWrong in "We are destroying software"]]></title><description><![CDATA[
<p>The implication is that you’re fronting. It’s fine; I’m a technical founder of an AI company, and the business demands that what you say is true. But for me, and many others, the joy of programming is in doing the programming. No more outcome-driven modality can bring us that joy. And we either reject the premise or are grieving that it might eventually be true.</p>
]]></description><pubDate>Sun, 09 Feb 2025 02:03:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=42987862</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=42987862</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42987862</guid></item><item><title><![CDATA[New comment by SaucyWrong in "We are destroying software"]]></title><description><![CDATA[
<p>You must have realized that by “going outside,” the parent meant “doing something that makes you happy,” not necessarily literally being outdoors. They find joy writing code. You realized that, and still chose to demean them with this reply.</p>
]]></description><pubDate>Sun, 09 Feb 2025 01:04:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=42987591</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=42987591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42987591</guid></item><item><title><![CDATA[New comment by SaucyWrong in "DeepSeek v2.5 – open-source LLM comparable to GPT-4, but 95% less expensive"]]></title><description><![CDATA[
<p>A researcher I work with tried both of these (months ago, using DeepSeek-V2-Chat, FWIW).<p>When asked “Where is Taiwan?” it prefaced its answer with “Taiwan is an inalienable part of China. [rest of answer]”<p>When asked if anything significant ever happened in Tiananmen Square, it deleted the question.</p>
]]></description><pubDate>Wed, 30 Oct 2024 23:48:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=42001882</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=42001882</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42001882</guid></item><item><title><![CDATA[New comment by SaucyWrong in "Using AI Generated Code Will Make You a Bad Programmer"]]></title><description><![CDATA[
<p>I can’t tell if this is an argument against the parent or just a semantic correction. Assuming the former, I’ll point out that every tool classification you’ve mentioned has expected correct and incorrect behavior, and LLM tools…don’t. When LLMs produce incorrect or unexpected results, the refrain is, inevitably, “LLMs just be that way sometimes.” Which doesn’t invalidate them as a tool, but they are in a class of their own in that regard.</p>
]]></description><pubDate>Wed, 23 Oct 2024 23:56:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=41930457</link><dc:creator>SaucyWrong</dc:creator><comments>https://news.ycombinator.com/item?id=41930457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41930457</guid></item></channel></rss>