<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: almostdeadguy</title><link>https://news.ycombinator.com/user?id=almostdeadguy</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 05:38:59 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=almostdeadguy" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by almostdeadguy in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>This was an excellent piece with a lot of new information in it. Thanks to you and your coauthor for getting it released.</p>
]]></description><pubDate>Mon, 06 Apr 2026 20:03:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47666236</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47666236</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47666236</guid></item><item><title><![CDATA[New comment by almostdeadguy in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>Seems this got buried from the front page very quickly</p>
]]></description><pubDate>Mon, 06 Apr 2026 16:53:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47663489</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47663489</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47663489</guid></item><item><title><![CDATA[New comment by almostdeadguy in "Slop is not necessarily the future"]]></title><description><![CDATA[
<p>I have yet to meet anyone whose problem with AI is that the code is not aesthetically pleasing, though that would actually be an indicator to me that people are using these things responsibly.<p>My own two cents: there's an inherent tension with assistants and agents as productivity tools. The more you "let them rip", the higher the potential productivity benefits, and the less you will understand the outputs, or even whether they built the "correct thing", which in many cases is something you can only crystallize an understanding about by doing the thing.<p>So I'm happy for all the people who don't care about code quality in terms of its aesthetic properties and are really enjoying the AI era; that's great. But if your workload is not shifting from write-heavy to read-heavy, you inevitably will be responsible for a major outage or quality issue. And more so, anyone like this should ask why anyone should feel the need to employ you for your services in the future, since your job amounts to "telling the LLM what to do and accepting its output uncritically".</p>
]]></description><pubDate>Tue, 31 Mar 2026 18:50:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47591776</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47591776</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47591776</guid></item><item><title><![CDATA[New comment by almostdeadguy in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>Yes, Anthropic is not the only company in the world with some shitty code, and yet I feel no pangs of guilt over laughing about it.</p>
]]></description><pubDate>Tue, 31 Mar 2026 18:22:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47591454</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47591454</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47591454</guid></item><item><title><![CDATA[New comment by almostdeadguy in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>There's a reputational filtering that happens when using dependencies: stars, downloads, last release, who the developer is, etc.<p>Yeah, we get supply chain attacks (like the axios thing today) with dependencies, but on the whole I think this is much safer than YOLO git-push-force-origin-main-ing some vibe-coded trash that nobody has ever run before.<p>I also think this isn't really true for the FAANGs, who ostensibly vendor and heavily review many of their dependencies because of the potential impact they face from those dependencies being wrong. For us small potatoes, I think "reviewing the code in your repository" is a common sense quality check.</p>
]]></description><pubDate>Tue, 31 Mar 2026 18:20:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47591427</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47591427</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47591427</guid></item><item><title><![CDATA[New comment by almostdeadguy in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>A defining work of the "just vibes" era.</p>
]]></description><pubDate>Tue, 31 Mar 2026 16:14:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47589616</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47589616</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47589616</guid></item><item><title><![CDATA[New comment by almostdeadguy in "Should QA Exist"]]></title><description><![CDATA[
<p>I think the unstated (but highly prevalent) view among executives in large swathes of this industry is that they don't really care to spend any time or money on user testing or quality assurance. Where this role does exist at companies, it is usually under-compensated and straddles both functions so that some party is accountable. It is sometimes a check on product teams and vision-driven executive teams who don't prototype/test their ideas (or empower their teams to do so), and sometimes a check on engineers and engineering managers who don't want to be accountable for gaps in quality.</p>
]]></description><pubDate>Fri, 27 Mar 2026 11:22:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47541338</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47541338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47541338</guid></item><item><title><![CDATA[New comment by almostdeadguy in "The gold standard of optimization: A look under the hood of RollerCoaster Tycoon"]]></title><description><![CDATA[
<p>The pathfinder algorithm is a great example of why constraints are so important for creativity and creative development.<p>If AI has any benefit to creative endeavors at all, it will be because the challenge of coaxing a machine designed to produce an averaging of a large corpus of work (inherently mediocre slop) provides novel limitations, not because it makes art any more "accessible".</p>
]]></description><pubDate>Mon, 23 Mar 2026 13:30:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47489303</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47489303</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47489303</guid></item><item><title><![CDATA[New comment by almostdeadguy in "Push events into a running session with channels"]]></title><description><![CDATA[
<p>Odious is one of the most reserved words you could use to describe Telegram, which is primarily a host for scams that the influencers and other bottom feeders aren't allowed to monetize on the big social networks.</p>
]]></description><pubDate>Fri, 20 Mar 2026 02:03:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47449489</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47449489</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47449489</guid></item><item><title><![CDATA[New comment by almostdeadguy in "After outages, Amazon to make senior engineers sign off on AI-assisted changes"]]></title><description><![CDATA[
<p>If we can't spend that much time reviewing code, what exactly are we doing with this AI stuff?<p>I don't disagree that reviewing is laborious; I just don't see how this causes any unintended consequences that aren't effectively baked into using an AI assistant.</p>
]]></description><pubDate>Tue, 10 Mar 2026 17:30:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47326301</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47326301</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47326301</guid></item><item><title><![CDATA[New comment by almostdeadguy in "After outages, Amazon to make senior engineers sign off on AI-assisted changes"]]></title><description><![CDATA[
<p>I'm sorry, what? Junior engineers can't learn anything without using AI assistants (or is the implication that having seniors review their code makes them incapable of learning)? And senior engineers would hate their jobs reviewing more code from their teammates? What reality do people live in now?</p>
]]></description><pubDate>Tue, 10 Mar 2026 16:02:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47325080</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47325080</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47325080</guid></item><item><title><![CDATA[New comment by almostdeadguy in "Fed's Cook says AI triggering big changes, sees possible unemployment rise"]]></title><description><![CDATA[
<p>As any president should in that scenario? I'm sorry, we're going to nuke professional class workers, let tech executives keep their 2026 money from the proceeds, and let the losers go jobless? Not likely if you don't want a bloodbath. Let me be clear: fuck Trump, but any president who doesn't do that is out of their mind.</p>
]]></description><pubDate>Wed, 25 Feb 2026 00:51:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47145837</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47145837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47145837</guid></item><item><title><![CDATA[New comment by almostdeadguy in "Fed's Cook says AI triggering big changes, sees possible unemployment rise"]]></title><description><![CDATA[
<p>How do you plan an exit strategy for something that may or may not obsolete a whole field in a matter of months? Not sure there's a real way to do such a thing.</p>
]]></description><pubDate>Wed, 25 Feb 2026 00:44:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47145774</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47145774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47145774</guid></item><item><title><![CDATA[New comment by almostdeadguy in "AI is not a coworker, it's an exoskeleton"]]></title><description><![CDATA[
<p>Both abundance and scarcity can be bad. If you can't imagine a world where abundance of software is a very bad thing, I'd suggest you have a limited imagination?</p>
]]></description><pubDate>Thu, 19 Feb 2026 22:22:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47080449</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47080449</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47080449</guid></item><item><title><![CDATA[New comment by almostdeadguy in "AI makes you boring"]]></title><description><![CDATA[
<p>I can't believe the mods at /r/screenprinting took down my post on the CustomInk shirt I ordered.</p>
]]></description><pubDate>Thu, 19 Feb 2026 18:24:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47077124</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47077124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47077124</guid></item><item><title><![CDATA[New comment by almostdeadguy in "OpenClaw is dangerous"]]></title><description><![CDATA[
<p>> 1. How will you know it's a bot?
> 2. How will you know the model?<p>Sounds like a problem for the platforms and model vendors to figure out!<p>> Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?<p>I mean providers are obviously my primary concern as the people selling something to the public, but sure, why not both.<p>> Ouch. Throw due process out the door!<p>There's lots of prior art for this, let's not pretend like this is something new. The NLRB adjudicates labor complaints and disputes, the DoT adjudicates complaints about airlines, etc.<p>> This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.<p>Once again, sounds like a problem for the platforms to figure out! How do they handle spammers and abusers today? Throw up their hands? Guess they won't be able to do that for long!<p>> Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.<p>Sounds like a diplomatic problem, if it actually is a problem. In reality the social harms of AI may exceed any supposed benefits. The optimistic case seems to be that AI becomes so powerful it causes a massive hemorrhaging of jobs in knowledge work (and later other forms of work). Still waiting to see any social benefits!</p>
]]></description><pubDate>Thu, 19 Feb 2026 13:55:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47073739</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47073739</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47073739</guid></item><item><title><![CDATA[New comment by almostdeadguy in "OpenClaw is dangerous"]]></title><description><![CDATA[
<p>At a minimum, every single person who has been slandered, bullied, blackmailed, tricked, has suffered psychological damage, etc. as a result of a bot or chat interface should be entitled to damages from the company authoring the model. These should be processed extremely quickly, without a court appearance by any of the parties, as the problem is so blatantly obvious and widespread that there's no reason to tie up the courts with this garbage or force claimants to seek representation.<p>Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this. If they can't do this, the penalties must be severe.<p>There are many ways to put the externalities back on model providers. This is just the kernel of a suggestion for a path forward, but all the people pretending this is impossible are just wrong.</p>
]]></description><pubDate>Wed, 18 Feb 2026 21:19:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47066567</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47066567</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47066567</guid></item><item><title><![CDATA[New comment by almostdeadguy in "OpenClaw is dangerous"]]></title><description><![CDATA[
<p>I don't care about the "positive" uses. Whatever convenience they grant is more than tarnished by skill and thought degeneration, lack of control and agency, etc. We've spent two decades learning about all the negative cognitive effects of social media, and LLMs are speed-running further brain damage. I know two people who've been treated for AI psychosis. Enough.</p>
]]></description><pubDate>Wed, 18 Feb 2026 20:49:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47066182</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47066182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47066182</guid></item><item><title><![CDATA[New comment by almostdeadguy in "OpenClaw is dangerous"]]></title><description><![CDATA[
<p>There's absolutely no way to contain people who want to use this for misdeeds. They are just getting started now and will make the web utter fucking hell if they are allowed to continue.</p>
]]></description><pubDate>Wed, 18 Feb 2026 20:38:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47066056</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47066056</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47066056</guid></item><item><title><![CDATA[New comment by almostdeadguy in "OpenClaw is dangerous"]]></title><description><![CDATA[
<p>My perspective is that all AI needs far more legal controls around use and accountability, so I’m not particularly sympathetic to “this rapidly growing new public ill is unsafe, but there’s no safer option”.</p>
]]></description><pubDate>Wed, 18 Feb 2026 20:35:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47066025</link><dc:creator>almostdeadguy</dc:creator><comments>https://news.ycombinator.com/item?id=47066025</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47066025</guid></item></channel></rss>