<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: SlinkyOnStairs</title><link>https://news.ycombinator.com/user?id=SlinkyOnStairs</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 10:26:24 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=SlinkyOnStairs" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by SlinkyOnStairs in "Who owns the code Claude Code wrote?"]]></title><description><![CDATA[
<p>What you're asking is, "could someone commit fraud" and "would being found out invalidate their copyright". The answer to both is generally yes.<p>It'd be a form of plagiarism, just with different consequences from the most common form.</p>
]]></description><pubDate>Wed, 29 Apr 2026 10:31:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47946398</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47946398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47946398</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "Period tracking app, Flo, found to be selling user data to Meta"]]></title><description><![CDATA[
<p>> No-one's saying this<p>No-one was saying it <i>explicitly</i>. I merely took what you said and re-stated what it concretely meant in the real world.<p>The generalization to "all computers" is an assumption, but you appear to maintain a narrow view of what is "medically necessary" and just now generalized to things like diaries, so I believe I am correct in asserting that you would generalize this to all "non-essential" software.</p>
]]></description><pubDate>Wed, 29 Apr 2026 10:15:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47946288</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47946288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47946288</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "Period tracking app, Flo, found to be selling user data to Meta"]]></title><description><![CDATA[
<p>> All the money spent on regulations and regulators to cover increasingly niche opt-in services that are entirely unnecessary is a waste.<p>That isn't what's happening. The regulations don't get little niche cases added to them; they're written to be generally applicable to all niches.<p>> It's not a medical requirement from a doctor, so just keep a diary if you want to.<p>"Just don't use the computer if you don't want companies to rat you out to the fascist government that'll imprison or kill you for having a miscarriage" is a ridiculous victim-blaming position.<p>It's the practical reality of a fascist government that they won't enact privacy laws. And yes, women really shouldn't be using period tracking apps in the US, or ones made in the US. But that doesn't mean privacy laws are some "silly waste of my tax money".<p>It's not a "medical requirement" except for the many many many cases where it is. Similarly, this position extends to literally everything. <i>Nothing</i> "needs to be an app". But unless we want to pack up and discard the entire software industry, it really ought to be better about privacy like this.</p>
]]></description><pubDate>Tue, 28 Apr 2026 13:09:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47934065</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47934065</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47934065</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "AI can cost more than human workers now"]]></title><description><![CDATA[
<p>It's not just push from management; AI firms themselves really aggressively market this idea of AI replacing everything. It's not "allowed" to be a mere tool, useful for some tasks but not others; it's gotta (be able to) do everything.<p>Part of that is the ridiculous belief that they can create "AGI" by just gluing together enough LLMs.<p>Presumably it's also financial viability. You can't charge <i>thousands</i> a month without replacing those "highly trained engineers" with a bunch of kids in the developing world.</p>
]]></description><pubDate>Mon, 27 Apr 2026 07:06:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47918571</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47918571</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47918571</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "The West forgot how to make things, now it’s forgetting how to code"]]></title><description><![CDATA[
<p>> I think they are not the best example of Open Source development.<p>They're not. I'm using them as an example of the "bad" in Open Source development.<p>But it's also not so much the individual open-source components that are the problem; their interactions are just as fragile, and usually neither party takes ownership of the problems at the boundary.</p>
]]></description><pubDate>Sun, 26 Apr 2026 19:38:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47913319</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47913319</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47913319</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "The West forgot how to make things, now it’s forgetting how to code"]]></title><description><![CDATA[
<p>> It can be quickly regained<p>I'm not sure what you mean by this?<p>Sure, hypothetically e.g. any western car manufacturer could poach a bunch of BYD employees. <i>But it's not really practical for most businesses</i>.<p>> The actual problem is, there is no market to go up to anymore.<p>This is the "Market for Lemons" problem, yes.<p>It's less of a problem than you might think. Convincing the entire wider world that you're legitimate is a problem, one made infinitely worse by store marketplaces like Amazon preferring to push "aqekj;bgrsabhghwjbgawrjwsraG" brand garbage.<p>So you just don't. The trick is to start small, the smallest you can sustain. (This doesn't work for cars, or anything sufficiently complex. You won't be taking on Salesforce.)<p>But so long as you can find a market niche where there's demand for quality, you can carve out a living, and from there, scale up.<p>The problem with <i>that</i> is twofold. First, Venture Capital has supplanted other forms of investment, and "small business generating single-digit millions in revenue" is utterly unappealing to VCs, even though the investment required is downsized accordingly.<p>Second, the cost of starting a business is too high right now. Real estate and cost of living just make it unaffordable to even try. Plus healthcare, if you're in the US.</p>
]]></description><pubDate>Sun, 26 Apr 2026 17:08:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47911885</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47911885</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47911885</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "The West forgot how to make things, now it’s forgetting how to code"]]></title><description><![CDATA[
<p>> Desktop Linux has gotten better<p>This is on me for being a bit too snarky.<p>So yes, Desktop Linux has "gotten better". What it hasn't done is <i>solved any of the systemic problems</i>.<p>The Open Source development quirks that created the shitshow of 1999 are still here. Gnome is <i>better</i> but still suffers massively from mainstream features being declared stupid by the maintainers. (A power button that turns off the machine? Heretical.)<p>Valve's recent successes are pretty illustrative here. They used their money to directly hijack the projects their products rely on.<p>As far as the comparison goes, Windows is not without this "slow" improvement either. 95 and 98 are lightyears behind contemporary Windows in so many ways. Until quite recently it still made about as much sense to use Linux as it did back then: not much.<p>Take your Linux laptop example. Sure, Linux finally kind of worked on some specific models that were tested for it. Meanwhile, Windows had moved from "it'll work with some mucking about with drivers" to "it works universally, on practically all hardware". Really, by the mid-2010s Windows had become quite tolerant of you changing the hardware.<p>Hence my original point: Desktop Linux hasn't really caught up with Windows in any meaningful sense. Windows has just been nose-diving into the ground in the last few years.</p>
]]></description><pubDate>Sun, 26 Apr 2026 16:58:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47911786</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47911786</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47911786</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "Flickr: The first and last great photo platform"]]></title><description><![CDATA[
<p>> Do you call operating systems "malware enablers"?<p>People were making that exact criticism of Microsoft Windows <i>for decades</i>.<p>It's only really in the last decade that Windows got decent enough at security for this attitude to wear off.</p>
]]></description><pubDate>Sun, 26 Apr 2026 10:52:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47909239</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47909239</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47909239</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "The West forgot how to make things, now it’s forgetting how to code"]]></title><description><![CDATA[
<p>They also took out all the <i>quality</i>, though in pure business terms one can argue that's a kind of "slack" by itself.<p>The beancounters have cut all the corners on physical products that they could find. Now even design and manufacturing are outsourced to the lowest bidder, a bunch of monkeys paid peanuts to do a job they're woefully unqualified for.<p>And the end result is just a market for lemons. Nobody trusts products to be good anymore, so they just buy the cheapest garbage.<p>Which, inevitably, <i>is the stuff sold directly by Chinese manufacturers</i>. And so the beancounters are hoisted by their own petard.<p>We've seen it happen to small electronics and general goods.<p>We're seeing it happen right now to cars. Manufacturers are clinging to combustion engines and cutting corners. Why spend twice the money on a western brand when their quality is rapidly declining to match BYD models at half the price?<p>---<p>And we're seeing it happen to software. It was already kind of happening before AI; so much of software was enshittifying rapidly. But AI is just taking a sledgehammer to quality. (Setting aside whether this is an AI problem or a "beancounters push everyone into vibecoding" problem.)<p>E.g. Desktop Linux has always been kind of a joke. It hasn't gotten <i>better</i>; the problems are all still there. Windows is just going down in flames, and people are jumping ship now.<p>SaaS is quickly going that way as well. If it's all garbage, why pay for it? Either stop using it or just slop something together yourself.<p>---<p>And in the background of this, something ominous: companies can't just pivot back to higher quality after they've destroyed all their in-house knowledge. So much manufacturing knowledge is just gone; starting a new manufacturing firm in the west is a staffing nightmare. Same story with cars: China has the EV knowledge. And software's going the same way.
These beancounters are all chomping at the bit to fire all their devs and replace them with teenagers in the developing world spitting out prompts. They can't move back upmarket after that's done.<p>Even when the knowledge still lives, when the people with the skills required have simply moved to other industries and jobs, who's going to come back? Why leave your established job for the former field, when all it takes is the management or executive in charge being replaced by another dipshit beancounter for everyone to be laid off again?</p>
]]></description><pubDate>Sun, 26 Apr 2026 10:47:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47909204</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47909204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47909204</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "Google plans to invest up to $40B in Anthropic"]]></title><description><![CDATA[
<p>A long, drawn-out change simply doesn't cause the major societal upset that imminent mass unemployment does.<p>With how much scale AI datacenters want, and how the Trump administration has made supply problems significantly worse, we'd be talking decades, plural.</p>
]]></description><pubDate>Sat, 25 Apr 2026 23:33:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47905697</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47905697</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47905697</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "Google plans to invest up to $40B in Anthropic"]]></title><description><![CDATA[
<p>There's layoffs, certainly.<p>But all the economic indicators suggest those are "bad economy" layoffs dressed up as "AI" layoffs to keep the shareholders happy.</p>
]]></description><pubDate>Sat, 25 Apr 2026 12:20:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47900868</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47900868</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47900868</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "Google plans to invest up to $40B in Anthropic"]]></title><description><![CDATA[
<p>It won't get that far.<p>It's physically impossible to build out the datacenters required for the "AI is actually good and we have mass layoffs" scenario. This Anthropic investment is spurred on because they've already hit a brick wall with capacity.<p>$40B goes a long way, but not for datacenters where nearly every single component and service is now backordered. Even if you could build the DC, the power connection won't be there.<p>The current oil crisis just makes all of that <i>even worse</i>.</p>
]]></description><pubDate>Sat, 25 Apr 2026 09:53:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47900123</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47900123</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47900123</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "US special forces soldier arrested after allegedly winning $400k on Maduro raid"]]></title><description><![CDATA[
<p>Less so "supermarkets" specifically and more so "capitalism", and the answer to your conclusion is obviously yes.<p>This is why welfare systems exist: because otherwise the system will push people to crime, especially in our current implementation of Capitalism, where it is possible to become unemployed or unemployable through no fault of one's own.</p>
]]></description><pubDate>Fri, 24 Apr 2026 10:38:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47888306</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47888306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47888306</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "US special forces soldier arrested after allegedly winning $400k on Maduro raid"]]></title><description><![CDATA[
<p>Sexism will play a role, but a big part of the reason Pelosi gets so much flak is that she did nothing to stop it when the Democrats were in charge, thus directly paving the way to the current shitshow.</p>
]]></description><pubDate>Fri, 24 Apr 2026 10:27:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47888224</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47888224</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47888224</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "ChatGPT Images 2.0"]]></title><description><![CDATA[
<p>> Pre-training is very much a matter of scale - and scraping is merely the easiest way to get data at scale.<p>Therein lies the problem. AI firms just bulldozed ahead and "just did it" with no consideration for the ethics or legality. (Nor, for that matter, for how they're going to get this data in the future, now that they're pushing artists into unemployment and filling the internet with slop.)<p>There is no "imagined counterfactual"; people just want AI firms to follow basic ethics and obtain consent. Something tech in general is woefully inadequate at.<p>The counterfactual isn't offered by artists, but by AI companies: "If we had to ask consent then we couldn't have made this." Okay, so? The world isn't worse off without OpenAI's image generator. Who cares? There's no economic value to these slop images; they're merely replacing stock assets and quickly-thrown-together MS Paint placeholders.<p>Given how much of a shitshow this technology has always been (I refuse to mince words: this tech had its "big break" as "deepfakes", and Elon Musk has escalated that even further; it's always been sexual harassment), the actual net value to society is almost certainly negative.</p>
]]></description><pubDate>Wed, 22 Apr 2026 11:30:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47862066</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47862066</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47862066</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "ChatGPT Images 2.0"]]></title><description><![CDATA[
<p>If you end up creating something sufficiently similar, yes in fact you do. Or rather, you have committed copyright infringement, and retroactive payment may be one of the remedies.<p>This also applies to AI, just worse, because:<p>A) AI is not a human brain, and pretending that the process of human authorship is the same as AI is either a massive misunderstanding of the mechanics and architecture of these systems, or plain disingenuous nonsense.<p>B) AI has no capability of original thought. Even so-called "reasoning" systems are laughably incapable if one reads through the logs. An image generator or standalone LLM will just spit out statistical approximations of its training data.<p>And B) here is especially damning, because it means any AI user has zero defense against a copyright claim on their work. This creates enormous legal risks.<p>The model for copyright trolling is trivial. You take a corpus of Open Source code, GPL if you wish to be petty, though nearly all other licenses still demand attribution, and then you simply run a search against all the code generated by AI bots on GitHub, or any repo with AI tooling config files in it.<p>Won't be long before the FSF does something similar.</p>
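To make that search concrete, here's a minimal sketch of what such a scan could look like. Everything in it is illustrative: the n-gram size, the 0.6 threshold, and the function names are made up, and real plagiarism detectors use far more robust fingerprinting (token normalization, winnowing) than this.

```python
def ngrams(text, n=8):
    # Character n-grams over whitespace-normalized text, so that
    # reformatting alone doesn't hide a match.
    s = " ".join(text.split())
    return {s[i:i + n] for i in range(max(len(s) - n + 1, 1))}

def similarity(licensed, generated, n=8):
    # Fraction of the generated snippet's n-grams that also appear
    # in the licensed source.
    a, b = ngrams(licensed, n), ngrams(generated, n)
    return len(a & b) / max(len(b), 1)

def flag_matches(corpus, generated, threshold=0.6):
    # Return names of corpus files the generated code substantially
    # overlaps with -- candidates for an infringement claim.
    return [name for name, src in corpus.items()
            if similarity(src, generated) >= threshold]
```

Point it at a crawl of repos that contain AI tooling config files and every hit above the threshold is a lead.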
]]></description><pubDate>Wed, 22 Apr 2026 10:26:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47861503</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47861503</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47861503</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return"]]></title><description><![CDATA[
<p>Self-reply, as I could've explained the SEC thing better:<p>Anti-fraud regulators like the SEC give an inherent trustworthiness and credibility to CEOs and other market participants. You can trust that they're not lying to you, because they would be sent to jail if they were.<p>Another example is general anti-fraud regulation; consider how one would trust North American or European steel suppliers more than Chinese steel suppliers.<p>It's not that the Chinese are "evil lying people" and Americans are "saints who never lie"; it's that you can trust American, Canadian, and European courts to hold the liars accountable under those regulations even if you're not in any of those regions. Chinese liars, by contrast, won't be held accountable by regulations.<p>Thus also the opposite: if someone opts out of the credibility granted to them by anti-fraud regulations, their words may not be quite so truthful.</p>
]]></description><pubDate>Tue, 21 Apr 2026 19:51:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47853664</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47853664</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47853664</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return"]]></title><description><![CDATA[
<p>I'm going to be a dickhead for a moment here, apologies; there's no way to say this that isn't rude to you. This is still the same hearsay: "in an interview, somewhere."<p>A bit of Google searching later can get us a specific interview. <a href="https://www.dwarkesh.com/p/dario-amodei-2" rel="nofollow">https://www.dwarkesh.com/p/dario-amodei-2</a><p>> <i>Let’s say half of your compute is for training and half of your compute is for inference. The inference has some gross margin that’s more than 50%.</i><p>But the context, the very previous sentence, is:<p>> <i>Think about it this way. Again, these are stylized facts. These numbers are not exact. I’m just trying to make a toy model here.</i><p>Here, Amodei is in effect using weasel words. He is not making any actionable claims about Anthropic's margins, merely plucking an arbitrary number. Why 50%? Is 50% reasonable? Is 50% accurate to the company? Those are all conclusions the listener draws, not Amodei.<p>> I don't know about SEC rules<p>The main premise is that, as a CEO, there are some regulations you are beholden to. You're not allowed to announce you've made a trillion-dollar profit, sell all your stock, and then go "teehee, just kidding". The SEC will prosecute you for securities fraud if you do that stuff.<p>This makes weasel words like the earlier ones suspicious, because the exact statement Amodei gives is <i>not</i> prosecutable. He's not saying anything about the company, just doing a little "toy model".<p>The degree to which it is intentional that this hearsay travels and is extrapolated from "well, he picked 50% because it's a reasonable figure, and because he's CEO, a reasonable figure would have to be akin to what his company can achieve" into "Anthropic has 50% margin", that's up for debate. Maybe it is intentional; maybe Amodei is exactly the same kind of shitweasel as Altman. 
Probably he's just a dumbass who runs his mouth in interviews and for <i>whatever reason</i> cannot issue the true number in an authoritative statement to dismiss this misconception.<p>Hence my original comment: if the <i>real</i> number were better than the hearsay rumours of the number, Amodei would immediately issue a correction; it'd be great for the company. Hell, even if the margin really were about 50%, that'd be great! Promoting that from mere hearsay to "we're profitable, go invest all your money" would also be huge. Really, any kind of margin at all would put him ahead of OpenAI.<p>But he doesn't issue a correction. He doesn't affirm the statement. Perhaps he has other reasons for that, but a rather big reason could be that the margin number is in fact pretty bad.<p>Now, the observant reader will note I am also using a weasel word there. I do not know whether the number is good or bad; your takeaway should be "it <i>could be</i> bad", not "it is bad". Go pressure Amodei into giving us the real number.</p>
]]></description><pubDate>Tue, 21 Apr 2026 18:59:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47852976</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47852976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47852976</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return"]]></title><description><![CDATA[
<p>While $5000 is a lot, the people who rack up close to or just over a thousand dollars in "API-equivalent cost" are pretty common.<p>> Most likely the subscription inference cost is much lower than you expect.<p>This is probably not true, because they'd be screaming it off every rooftop were that the case.<p>Same deal with the API inference. Even the "profitable on inference" claim is sourced back to hearsay of informal statements made by OpenAI/Anthropic staff. No formal announcements, nothing remotely of the "you can trust what I'm saying, because if I'm lying the SEC will have my head" sort.<p>Yet making such statements would be invaluable. If Anthropic could demonstrate profitability before OpenAI, they could poach most of the funding. There's no reason to keep it a company secret.<p>And API inference is only part of the total costs, not even counting training and ongoing fine-tuning. If they're not even profitable on inference, how could they hope to be profitable overall?</p>
]]></description><pubDate>Tue, 21 Apr 2026 17:52:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47852082</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47852082</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47852082</guid></item><item><title><![CDATA[New comment by SlinkyOnStairs in "Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return"]]></title><description><![CDATA[
<p>> Fortune 500s that were not already testing the waters with companies like Anthropic are rushing to figure out governance and how to use these tools across their orgs.<p>Most of this is still structured around "find use cases for AI" rather than one (or more) clear use cases being the reason for adopting AI.<p>There's no "Lotus 1-2-3" of AI. Even the software development applications are still somewhat controversial, and pushed largely on the strength of "Sam Altman promised me 10x developers".</p>
]]></description><pubDate>Tue, 21 Apr 2026 17:41:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47851965</link><dc:creator>SlinkyOnStairs</dc:creator><comments>https://news.ycombinator.com/item?id=47851965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47851965</guid></item></channel></rss>