<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: SpicyLemonZest</title><link>https://news.ycombinator.com/user?id=SpicyLemonZest</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 11 Apr 2026 11:23:41 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=SpicyLemonZest" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by SpicyLemonZest in "AI assistance when contributing to the Linux kernel"]]></title><description><![CDATA[
<p>Corporations are required to have human directors with full operational authority over the corporation's actions. This allows a court to summon them and compel them to do or not do things in the physical world. There's no reason a corporation can't choose to have an AI operate their accounts, but this won't affect the copyright status, and if the directors try to claim they can't override the AI's control of the accounts they'll find themselves in jail for contempt the first time the corporation faces a lawsuit.</p>
]]></description><pubDate>Sat, 11 Apr 2026 03:28:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47727057</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47727057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47727057</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Has Mythos just broken the deal that kept the internet safe?"]]></title><description><![CDATA[
<p>Really? I think that's pretty much accurate. If you've ever visited a website whose authors you don't know and trust, you've exposed yourself to potential attacks and trusted in sandboxing to keep your computer safe.</p>
]]></description><pubDate>Sat, 11 Apr 2026 00:34:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47725783</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47725783</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47725783</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Has Mythos just broken the deal that kept the internet safe?"]]></title><description><![CDATA[
<p>Anthropic is saying exactly what you're saying. They don't believe that software security is permanently ruined. They just want to ensure that good defensive techniques like the ones you describe are developed <i>before</i> large numbers of attackers get their hands on the technology.</p>
]]></description><pubDate>Sat, 11 Apr 2026 00:25:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47725712</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47725712</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47725712</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>The idea that firing you or stealing your wages is the worst a CEO can do to you is itself a product of the taboo against physical violence. There are a number of famous incidents from the late 1800s and early 1900s, when the taboo was weaker, of CEOs sending private armies to shoot inconvenient labor movements. It's not an equilibrium you should defect from lightly.</p>
]]></description><pubDate>Sat, 11 Apr 2026 00:16:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47725620</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47725620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47725620</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>No, I don't think that's accurate. Altman has repeatedly and loudly demanded for these to be created, including a new detailed policy proposal just this month (<a href="https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf" rel="nofollow">https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440...</a>).</p>
]]></description><pubDate>Sat, 11 Apr 2026 00:01:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47725524</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47725524</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47725524</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>I think it's good for CEOs of powerful companies to make statements about how they don't want too much personal power and it's important to ensure everyone does well, even and perhaps especially if there's reason to suspect they don't believe it. Saying it doesn't solve the problem, but it helps create a permission structure for the rest of us to get it to actually happen.</p>
]]></description><pubDate>Fri, 10 Apr 2026 23:34:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47725241</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47725241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47725241</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "US summons bank bosses over cyber risks from Anthropic's latest AI model"]]></title><description><![CDATA[
<p>I'm definitely optimistic that the long-term trajectory is positive. All important software can undergo extensive penetration testing with cutting-edge vulnerability research techniques before launch? Sounds great. The problem is what goes wrong on the pathway to there.</p>
]]></description><pubDate>Fri, 10 Apr 2026 16:41:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47720669</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47720669</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47720669</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "US summons bank bosses over cyber risks from Anthropic's latest AI model"]]></title><description><![CDATA[
<p>Which is exactly what Anthropic understands the situation to be. They state at the beginning of the Glasswing blogpost that Mythos is not better than the best vulnerability researchers. But it doesn't have to be to become a tremendously big deal.</p>
]]></description><pubDate>Fri, 10 Apr 2026 16:37:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47720621</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47720621</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47720621</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "US summons bank bosses over cyber risks from Anthropic's latest AI model"]]></title><description><![CDATA[
<p>I guess I'm not sure why you frame this as a "rather than". What Anthropic is saying is that the norm of having tons of vulnerabilities lying around historically worked OK, but Mythos shows it will soon become catastrophically not OK, and everyone who's responsible for software security needs to know this so they can take action.</p>
]]></description><pubDate>Fri, 10 Apr 2026 16:26:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47720474</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47720474</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47720474</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Why do we tell ourselves scary stories about AI?"]]></title><description><![CDATA[
<p>I think most AI execs I'm familiar with <i>would</i>, if they were the god-monarch of humanity, recruit real specialists applying scientific methods to make most decisions. They seem like the kind of people who would understand that the Ministry of Economy is doing valuable things which shouldn't be compromised for personal expediency. Does that really make it any better?</p>
]]></description><pubDate>Fri, 10 Apr 2026 15:55:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47720036</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47720036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47720036</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Why do we tell ourselves scary stories about AI?"]]></title><description><![CDATA[
<p>> Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.<p>What you're saying just isn't true, even directionally. Deployed LLMs routinely generalize outside of their training set to apply patterns they learned within the training set. How else, for example, could LLMs be capable of summarizing new text they didn't see in training?</p>
]]></description><pubDate>Fri, 10 Apr 2026 15:39:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47719770</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47719770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47719770</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Why do we tell ourselves scary stories about AI?"]]></title><description><![CDATA[
<p>Everyone recognized how dangerous it was to use them only <i>after</i> the first two mass casualty events. At the time, and even into the 50s, it was not universally obvious, and the arguments in favor of nuclear weapons use were quite similar to arguments I often see with regard to AI: bombing cities into rubble is not a new concept, traditional explosives well within the supply capacity of large militaries are capable of it, so what are we even talking about when we say that there are scary new capabilities?</p>
]]></description><pubDate>Fri, 10 Apr 2026 15:32:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47719659</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47719659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47719659</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Why do we tell ourselves scary stories about AI?"]]></title><description><![CDATA[
<p>The actual contents of this article make reasonable arguments I largely agree with. It would be very surprising for LLM-based AI systems to act as monomaniacal goal optimizers, since they're trained on human text and humans are extremely bad at goal-oriented behavior. (My goals for today include a number of work and self-maintenance tasks, and the time I'm spending here writing out a HN comment does not help achieve them - I suspect most people reading this comment are in the same boat.)<p>It's very frustrating that the magazine wrote such a dumb headline, which guarantees people won't talk about the issues the article raised. Obviously non-goal-oriented systems can still have important negative effects.</p>
]]></description><pubDate>Fri, 10 Apr 2026 15:05:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47719250</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47719250</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47719250</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "OpenAI backs Illinois bill that would limit when AI labs can be held liable"]]></title><description><![CDATA[
<p>Powerful AI models change the dynamics by greatly reducing the amount of effort that's required to perform complex understanding. A lot of information which did not previously need to be gatekept now needs to be if we cannot somehow keep LLMs from discussing it. (State of the art models still can't do complex understanding <i>reliably</i>, but if 10 times as many people are now capable of attempting some terrible thing, you're still in trouble if AI hallucinations catch 1/4 or 1/2 of them.)</p>
]]></description><pubDate>Fri, 10 Apr 2026 14:52:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47719045</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47719045</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47719045</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "OpenAI backs Illinois bill that would limit when AI labs can be held liable"]]></title><description><![CDATA[
<p>Much easier; I'm not sure how this is even a question. Asking Google (if you're not just reading its own AI overview) requires reading through sources that may be well or poorly written and more or less reliable. Those of us recreationally sitting here on a text-based platform with links to dense articles are atypical; most people don't enjoy and aren't particularly good at reading a bunch of stuff. If you ask AI you just get a clear, concrete answer.</p>
]]></description><pubDate>Fri, 10 Apr 2026 14:47:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47718975</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47718975</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47718975</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Maine is about to become the first state to ban major new data centers"]]></title><description><![CDATA[
<p>If Maine passes this moratorium, and then starts accusing developers of malicious compliance for cancelling their projects instead of redesigning against the 20 megawatt limit, I'll definitely line up to make fun of them. My sense is that this isn't what's happening, and the Maine legislators understand and intend for this policy to discourage datacenter investment altogether.</p>
]]></description><pubDate>Thu, 09 Apr 2026 20:23:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709355</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47709355</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709355</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Study found that young adults have grown less hopeful and more angry about AI"]]></title><description><![CDATA[
<p>I guess I'm not terribly interested in debating what exactly the term "recession" means or what the line is where it's fair to say the economy "sucks". If you think that the current state of the economy is as bad as we should expect, and it won't get worse if GDP stops rising, I'm confident you're wrong. But I don't know how to convince you of that unless you've experienced a recession in your working life.</p>
]]></description><pubDate>Thu, 09 Apr 2026 17:55:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47707006</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47707006</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47707006</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Study found that young adults have grown less hopeful and more angry about AI"]]></title><description><![CDATA[
<p>How old are you? I hate to pull rank on people, but if you're an American who wasn't yet on the job market in 2008, you've never experienced a sustained recession and don't understand the comparison you're drawing. A recession feels much worse than "things cost too much and the world kind of sucks", and workers are affected as much as businesses and CEOs are.</p>
]]></description><pubDate>Thu, 09 Apr 2026 16:55:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47706099</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47706099</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47706099</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Clean code in the age of coding agents"]]></title><description><![CDATA[
<p>I'm dealing with a situation right now where a critical mass of "messy" code means that <i>nobody</i>, human or LLM, can understand what it is trying to do or how a straightforward user-specified update should be applied to the underlying domain objects. Multiple proposed semantics have failed so far.</p>
]]></description><pubDate>Thu, 09 Apr 2026 15:19:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47704904</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47704904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47704904</guid></item><item><title><![CDATA[New comment by SpicyLemonZest in "Meta removes ads for social media addiction litigation"]]></title><description><![CDATA[
<p>Zuckerberg is a rich and high-profile guy, so photographers capture many pictures of him, and news editors often find that choosing unflattering pictures of people their readers don't like is helpful for reach. This picture in particular was taken after he'd just finished testifying for 8 hours in a February trial, which I think would wear down the best of us, and even among Getty's extensive gallery of pictures taken then (<a href="https://www.gettyimages.com/detail/news-photo/mark-zuckerberg-chief-executive-officer-of-meta-platforms-news-photo/2261841504" rel="nofollow">https://www.gettyimages.com/detail/news-photo/mark-zuckerber...</a>) this one is particularly unflattering IMO.</p>
]]></description><pubDate>Thu, 09 Apr 2026 14:40:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47704406</link><dc:creator>SpicyLemonZest</dc:creator><comments>https://news.ycombinator.com/item?id=47704406</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47704406</guid></item></channel></rss>