<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jerf</title><link>https://news.ycombinator.com/user?id=jerf</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 11:35:31 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jerf" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jerf in "I just want simple S3"]]></title><description><![CDATA[
<p>I think we get a "S3 clone" about once every week or two on the Golang reddit.<p>It strikes me as a classic case of "we need all the interested people to pull in one project, not each start their own". AI may have made this worse then ever.</p>
]]></description><pubDate>Mon, 13 Apr 2026 21:47:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758261</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47758261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758261</guid></item><item><title><![CDATA[New comment by jerf in "Someone bought 30 WordPress plugins and planted a backdoor in all of them"]]></title><description><![CDATA[
<p>Make sure you have a run of govulncheck [1] somewhere in your stack. It works OK as a commit hook, it runs quickly enough, but it can be put anywhere else as well, of course.<p>Go isn't immune to supply chain attacks, but it has built in a variety of ways of resisting them, including just generally shorter dependency chains that incorporate fewer whacky packages unless you go searching for them. I still recommend a periodic skim over go.mod files just to make sure nothing snuck in that you don't know what it is. If you go up to "Kubernetes" size projects it might be hard to know what every dependency is but for many Go projects it's quite practical to know what most of them are and get a sense they're probably dependable.<p>[1]: <a href="https://pkg.go.dev/golang.org/x/vuln/cmd/govulncheck" rel="nofollow">https://pkg.go.dev/golang.org/x/vuln/cmd/govulncheck</a> - note this is official from the Go project, not just a 3rd party dependency.</p>
]]></description><pubDate>Mon, 13 Apr 2026 19:13:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47756607</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47756607</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47756607</guid></item><item><title><![CDATA[New comment by jerf in "DIY Soft Drinks"]]></title><description><![CDATA[
<p>I've got a couple of sweetener-free recipes I use with my soda maker, though I should warn you nobody else I've given them to likes them. But I like them well enough.<p>One is a couple of squirts of vanilla, a couple of squirts of lemon juice, and a bit of salt. Salt is probably an underappreciated drink ingredient for this sort of thing. It turns out it isn't in your soft drinks <i>just</i> to make you want to drink more. This makes something that is related to cream soda, except for the aspects of cream soda that come from being crammed full of sugar, which I can't do much about.<p>I also have a mix I keep around made out of 3 tablespoons salt, 1 cup vanilla, 1/2 cup lemon juice, 1/2 cup lime juice, and about 1/3rd cup almond extract. I measure it all (except the salt which I just put in directly) into a single 2 cup Pyrex dish and just sort of eyeball the last 1/3rd cup of almond extract, then funnel it in to a holder. I use McCormick 32 oz vanilla and almond extract for this and order bulk RealLemon and RealLime juice for this from Amazon, and mix it into one of the leftover bottles and keep it around refrigerated. 3 squirts and "whatever dribbles in" as I'm removing the bottle is what I used for one DrinkMate bottle. To taste, as all of this is, of course. If nothing else this is pretty cheap per drink.<p>You can also mix unsweetened electrolytes in, but you have to wait until after you dilute the mixture with water or it'll react with the lemon & lime juice. Salt you can keep in the mix but not electrolytes in general. It adds a certain body to the mix even if you're not interested in the electrolytes <i>per se</i>, and a single packet of them lasts a long time.<p>You're not going to go into business selling this stuff, but if you're already drinking unsweetened apple cider vinegar & lemon/lime juice as a beverage flavoring we might just have some compatible tastes here. Carbonation is required, though, otherwise the vanilla and the almond extract don't come through at all.</p>
]]></description><pubDate>Mon, 13 Apr 2026 17:42:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47755461</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47755461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47755461</guid></item><item><title><![CDATA[New comment by jerf in "A perfectable programming language"]]></title><description><![CDATA[
<p>I also add the observation that while the dynamic typing languages are all growing in the direction of the statically-typed languages, no statically-typed language (that I know of) is adding a lot of dynamically-typed features. If anything the static languages trend towards <i>more</i> static typing. Which doesn't mean the optimum is necessarily "100% in the direction of static typing", the costs of more static typing do eventually overwhelm the benefits by almost any standard, but the trend is universal and fairly clear.<p>I kind of think there's room for a new dynamically-typed language that is designed around being fast to execute and doesn't cost such a huge performance multiple right off the top, and starts from day 1 to be multi-thread capable, but on the whole the trend is clearly in the direction of static typing.</p>
]]></description><pubDate>Mon, 13 Apr 2026 15:56:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47753933</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47753933</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47753933</guid></item><item><title><![CDATA[New comment by jerf in "A perfectable programming language"]]></title><description><![CDATA[
<p>"Isn't the compile speed of Go so good because it's type system is much simpler?"<p>That, and forgoing fancy compile-time optimization steps which can get arbitrarily expensive. You can recover some of this with profile-guided optimization, but only some and my best guess based on the numbers is that it's not much compared to a more full (but much more expensive) suite of compile-time optimizations.</p>
]]></description><pubDate>Mon, 13 Apr 2026 15:51:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47753824</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47753824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47753824</guid></item><item><title><![CDATA[New comment by jerf in "The hottest college major [Computer Science] hit a wall. What happened?"]]></title><description><![CDATA[
<p>It is easy to forget sometimes in the excitement, but nobody has been using (2026) AI for 20 years. We're all still new. I am sure that in the next year, something will be found that is fairly exciting, and something we could all be doing right now, but it's simply that nobody has thought of it yet. Or something that is today common practice will become generally considered an anti-pattern and common practice will have some replacement for it that, again, nothing stops us from doing it today but nobody has thought of it yet, because we're all newbs.<p>(One candidate example for this is the discussion I've seen in the last few days about not trying to negate something, to say "Don't do X", but instead stay positive because eventually the negation gets lost in the context window and you're better off just not putting the idea in the LLM's mind at all, where "Don't do X" comes to be seen as an LLM antipattern.)<p>One of the consequences of none of us having used AI for long enough is that we don't know how to onboard developers in an age of AI. This will be, by necessity, transient. Eventually we're going to max out what a person can do and we'll need more people. The supply of existing engineers will be limited. We will be forced to discover how to onboard new engineers.<p>But at the moment we've got our hands full, and we don't know how to do it.<p>The irony is, the best time to join a field is often exactly when the enrollment dips and the worst can be precisely when it is the most popular. Start a programming college program today and the odds that in 4 years we'll have onboarding figured out and have developed some sort of need for fresh developers is pretty decent.<p>But I don't know what to do about the fact that the standard CS curriculum was already of debatable relevance to me in the late 90s and I don't know of what relevance it will be in four years except to guess that it very likely to be even less. I do know that we are again affected by the fact nobody has been doing this for 20 years, like I mentioned above. There is no body of "wisdom" for an AI-powered world to draw on to construct a new curriculum. Universities would be inclined to do the obvious thing and try to chase our current practices with AI but those aren't going to be stable enough to build a curriculum on any time soon, and a real fundamentals-based curriculum may involve less AI than people may think.<p>I know one advantage I have over my younger peers at this point is just a knowledge of what terms to say to the AI to get it to do what I want, words like "event sourced" or "message bus" or "stored procedures", where simply knowing that the concept <i>exists</i> is the bottleneck. I could see a programming curriculum based on touring through a whole whackload of concepts with their pros and cons, or at least, where that is a much larger portion of it.<p>Ask me in 5 years though and I'd almost certainly suggest a completely different curriculum than I would now, though.</p>
]]></description><pubDate>Mon, 13 Apr 2026 14:47:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47752802</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47752802</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47752802</guid></item><item><title><![CDATA[New comment by jerf in "AI could be the end of the digital wave, not the next big thing"]]></title><description><![CDATA[
<p>AI is in spitting distance of being able to do that too.</p>
]]></description><pubDate>Mon, 13 Apr 2026 13:18:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47751548</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47751548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47751548</guid></item><item><title><![CDATA[New comment by jerf in "Eternity in six hours: Intergalactic spreading of intelligent life (2013)"]]></title><description><![CDATA[
<p>You don't access the computronium. You move into it.<p>"good old-fashioned computing is interesting at all"<p>"Computronium" is defined as "the best computing power available". I deliberately selected it as a neutral term that does not depend on any particular model of QM or black holes or anything else.<p>Personally I doubt it's exactly one thing because optimizing for different types of computing is likely to result in a spectrum of computroniums rather than just the one, but the term flexes to encompass that easily enough.<p>The point is, you build something in that system over there for the same reason a normal human might buy a bit of property and put a house on it. The human in question isn't going "oh, I don't need to do that because the world already has hundreds of millions of residences". The human does that so that the residence <i>belongs to them</i>. The hundreds of millions of residences that do not belong to them do not factor into that question.</p>
]]></description><pubDate>Mon, 13 Apr 2026 00:08:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47745945</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47745945</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47745945</guid></item><item><title><![CDATA[New comment by jerf in "The peril of laziness lost"]]></title><description><![CDATA[
<p>It's going to be difficult for anyone to have any more "data" than you already do. It's early days for all of us. It's not like there's anyone with 20 years of 2026 AI coding assistant experience.<p>However we can say based on the architecture of the LLMs and how they work that if you want them to not do something, you really don't want to mention the thing you don't want them to do at all. Eventually the negation gets smeared away and the thing you don't want them to do becomes something they consider.  You want to stay as positive as possible and flood them with what you do want them to do, so they're too busy doing that to even consider what you didn't want them to do. You just plain don't want the thing you don't want in their vector space at all, not even with adjectives hanging on them.</p>
]]></description><pubDate>Sun, 12 Apr 2026 23:24:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47745565</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47745565</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47745565</guid></item><item><title><![CDATA[New comment by jerf in "Eternity in six hours: Intergalactic spreading of intelligent life (2013)"]]></title><description><![CDATA[
<p>To build more computronium than you can in your own system, assuming the demand for that will always rise.</p>
]]></description><pubDate>Sun, 12 Apr 2026 17:36:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47742304</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47742304</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47742304</guid></item><item><title><![CDATA[New comment by jerf in "Eternity in six hours: Intergalactic spreading of intelligent life (2013)"]]></title><description><![CDATA[
<p>I use the term "30,000 foot view" a lot: <a href="https://nanoglobals.com/glossary/30000-foot-view/" rel="nofollow">https://nanoglobals.com/glossary/30000-foot-view/</a><p>It appeals to me because if you've ever taken a flight you can see how the details get progressively erased as you lift. Details that matter for a lot of reasons even if you can't see them.</p>
]]></description><pubDate>Sun, 12 Apr 2026 17:35:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47742299</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47742299</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47742299</guid></item><item><title><![CDATA[New comment by jerf in "Pro Max 5x quota exhausted in 1.5 hours despite moderate usage"]]></title><description><![CDATA[
<p>It's also kind of interesting that they don't think they can do what an economy would normally do in this situation, which is raise prices until supply matches. Shortages generally imply mispricing.<p>There's a lot of angles you take from that as a starting point and I'm not confident that I fully understand it, so I'll leave it to the reader.</p>
]]></description><pubDate>Sun, 12 Apr 2026 14:50:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47740398</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47740398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47740398</guid></item><item><title><![CDATA[New comment by jerf in "Pro Max 5x quota exhausted in 1.5 hours despite moderate usage"]]></title><description><![CDATA[
<p>Ads do not pay enough to cover AI usage. People see the big numbers Google and Facebook make in ads and forget to divide the number by the number of people they serve ads to, let alone the number of ads they served to get to that per-user number. You can't pay for 3 cents of inference with .07 cents of revenue.<p>You also can't put ads in code completion AIs because the instant you do the utility to me of them at work drops to <i>negative</i>. Guess how much money companies are going to pay for negative-value AIs? Let's just say it won't exactly pay for the AI bubble. A code agent AI puts an ad for, well, <i>anything</i> and the AI accidentally puts it into code that gets served out to a customer and someone's going to sue. The merits of the case won't matter, nor the fact the customer "should have caught it in review", the lawsuit and public reputation hit (how many people here are reading this and salivating at the thought of being able to post an angrygram about AIs being nothing but ad machines?) still cost way too much for the AI companies creating the agents to risk.</p>
]]></description><pubDate>Sun, 12 Apr 2026 14:46:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47740346</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47740346</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47740346</guid></item><item><title><![CDATA[New comment by jerf in "Small models also found the vulnerabilities that Mythos found"]]></title><description><![CDATA[
<p>I speculatively fired Claude Opus 4.6 at some code I knew very well yesterday as I was pondering the question. This code has been professionally reviewed about a year ago and came up fairly clean, with just a minor issue in it.<p>Opus "found" 8 issues. Two of them looked like they were probably realistic but not really that big a deal in the context it operates in. It labelled one of them as minor, but the other as major, and I'm pretty sure it's wrong about it being "major" even if is correct. Four of them I'm quite confident were just wrong. 2 of them would require substantial further investigation to verify whether or not they were right or wrong. I think they're wrong, but I admit I couldn't prove it on the spot.<p>It tried to provide exploit code for some of them, none of the exploits would have worked without some substantial additional work, even if what they were exploits for was correct.<p>In practice, this isn't a huge change from the status quo. There's all kinds of ways to get lots of "things that may be vulnerabilities". The assessment is a bigger bottleneck than the suspicions. AI providing "things that may be an issue" is not useless by any means but it doesn't necessarily create a phase change in the situation.<p>An AI that could automatically do all that, write the exploits, and then successfully <i>test</i> the exploits, refine them, and turn the whole process into basically "push button, get exploit" is a total phase change in the industry. If it in fact can do that. However based on the current state-of-the-art in the AI world I don't find it very hard to believe.<p>It is a frequent talking point that "security by obscurity" isn't really security, but in reality, yeah, it really is. An unknown but presumably staggering number of security bugs of every shape and size are out there in the world, protected solely by the fact that no human attacker has time to look at the code. And this has <i>worked</i> up until this point, because the attackers have been bottlenecked on their own attention time. It's kind of just been "something everyone knows" that any nation-state level actor could get into pretty much anything they wanted if they just tried hard enough, but "nation-state level" actor attention, despite how much is spent on it, has been quite limited relative to the torrent of software coming out in the world.<p>Unblocking the attackers by letting them simply purchase "nation-state level actor"-levels of attention in bulk is <i>huge</i>. For what such money gets them, it's cheap already today and if tokens were to, say, get an order of magnitude cheaper, it would be effectively negligible for a lot of organizations.<p>In the long run this will probably lead to much more secure software. The transition period from this world to that is going to be <i>total chaos</i>.<p>... again, assuming their assessment of its capabilities is accurate. I haven't used it. I can't attest to that. But if it's even half as good as what they say, yes, it's a <i>huge huge huge</i> deal and anyone who is even remotely worried about security needs to pay attention.</p>
]]></description><pubDate>Sat, 11 Apr 2026 20:09:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47733562</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47733562</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47733562</guid></item><item><title><![CDATA[New comment by jerf in "The future of everything is lies, I guess – Part 5: Annoyances"]]></title><description><![CDATA[
<p>I don't need to conduct 1000 transactions per day. I don't forsee a world in which it will be some sort of fatal inconvenience to need to approve all purchases. I certainly don't plan on ever just handing over my credit card to an LLM, due to its fundamental architectural issues with injection, and I still don't anticipate handing it over to any future AI architecture anytime soon because I struggle to imagine what benefits could possibly be worth the risk of taking down such a basic, cheap barrier.<p>All that stuff about support, though, inevitable.</p>
]]></description><pubDate>Sat, 11 Apr 2026 16:14:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47731775</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47731775</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47731775</guid></item><item><title><![CDATA[New comment by jerf in "Meta removes ads for social media addiction litigation"]]></title><description><![CDATA[
<p>At the risk of going against the gestalt, Facebook openly and publicly rejecting the ads is actually one of the better outcomes. They could have just put their thumbs on the scale, deprioritizing them, serving them to people they think are least likely to bite, etc. Lying about the number of times it was served because, after all, who can check? Many of us suspect the ad platforms already do this pretty routinely through one mechanism or another anyhow, after all.<p>It isn't reasonable to ask a platform to host content that is literally about suing them, not because of "freedom" concerns or whether or not Facebook is being hypocritical, but more because in the end there isn't a "fair" way for them to host that. The constraints people want to put on how Facebook would handle that ends up solving down to the null set by the time we account for them all. Open, public rejection is actually a fairly reasonable response and means the lawyers at least know what is up and can respond to a clear stimulus.</p>
]]></description><pubDate>Thu, 09 Apr 2026 19:12:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47708331</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47708331</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47708331</guid></item><item><title><![CDATA[New comment by jerf in "Claude mixes up who said what"]]></title><description><![CDATA[
<p>You can really see this in the recent video generation where they try to incorporate text-to-speech into the video. All the tokens flying around, all the video data, all the context of all human knowledge ever put into bytes ingested into it, and the systems still completely routinely (from what I can tell) fails to put the speech in the right mouth even with explicit instruction and all the "common sense" making it obvious who is saying what.<p>There was some chatter yesterday on HN about the very strange capability frontier these models have and this is one of the biggest ones I can think of... a model that <i>de novo</i>, from scratch is generating megabyte upon megabyte of really quite good video information that at the same time is often unclear on the idea that a knock-knock joke does not start with the exact same person saying "Knock knock? Who's there?" in one utterance.</p>
]]></description><pubDate>Thu, 09 Apr 2026 14:16:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47704083</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47704083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47704083</guid></item><item><title><![CDATA[New comment by jerf in "Claude mixes up who said what"]]></title><description><![CDATA[
<p>By the nature of the LLM architecture I think if you "colored" the input via tokens the model would about 85% "unlearn" the coloring anyhow. Which is to say, it's going to figure out that "test" in the two different colors is the same thing. It kind of has to, after all, you don't want to be talking about a "test" in your prompt and it be completely unable to connect that to the concept of "test" in its own replies. The coloring would end up as just another language in an already multi-language model. It might slightly help but I doubt it would be a solution to the problem. And possibly at an unacceptable loss of capability as it would burn some of its capacity on that "unlearning".</p>
]]></description><pubDate>Thu, 09 Apr 2026 14:11:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47704031</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47704031</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47704031</guid></item><item><title><![CDATA[New comment by jerf in "The Future of Everything Is Lies, I Guess"]]></title><description><![CDATA[
<p>One of the reasons I'm comfortable using them as coding agents is that I can and do review every line of code they generate, and those lines of code form a gate. No LLM-bullshit can get through that gate, except in the form of lines of code, that I can examine, and even if I do let some bullshit through accidentally, the bullshit is stateless and can be extracted later if necessary just like any other line of code. Or, to put it another way, the context window doesn't come with the code, forming this huge blob of context to be carried along... the code is just the code.<p>That exposes me to when the models are objectively wrong and helps keep me grounded with their utility in spaces I can check them less well. One of the most important things you can put in your prompt is a request for sources, followed by you actually checking them out.<p>And one of the things the coding agents teach me is that you need to keep the AIs on a tight leash. What is their equivalent in other domains of them "fixing" the test to pass instead of fixing the code to pass the test? In the programming space I can run "git diff *_test.go" to ensure they didn't hack the tests when I didn't expect it. It keeps me wondering what the equivalent of that is in my non-programming questions. I have unit testing suites to verify my LLM output against. What's the equivalent in other domains? Probably some other isolated domains here and there do have some equivalents. But in general there isn't one. Things like "completely forged graphs" are completely expected but it's hard to catch this when you lack the tools or the understanding to chase down "where did this graph actually come from?".<p>The success with programming can't be translated naively into domains that lack the tooling programmers built up over the years, and based on how many times the AIs bang into the guardrails the tools provide I would definitely suggest large amounts of skepticism in those domains that lack those guardrails.</p>
]]></description><pubDate>Wed, 08 Apr 2026 16:10:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47692187</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47692187</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47692187</guid></item><item><title><![CDATA[New comment by jerf in "Microsoft terminated the account VeraCrypt used to sign Windows drivers"]]></title><description><![CDATA[
<p>I know it's not what people want to hear but my response to a lot of the comments here is just a general, I agree, it's time to stop using Windows.<p>They won't let you secure your drive the way you want. They won't let you secure your network the way you want (per the top-level comment about Wireguard). In so doing they are demonstrating not just that they can stop you from running these particular programs but that they are very likely going to exert this control on the entire product category going forward, and I see little reason to believe they will stop there. These are not minor issues; these are fundamental to the safety, security, and functionality of your machine. This indicates that Microsoft will continue to compromise the safety, security, and functionality of your machine going forward to their benefit as they see fit. This is intolerable for many, many use cases.<p>I think it is becoming clear that Microsoft no longer considers Windows users to be their customers any more. Despite the fact that people do in fact pay for Windows, Microsoft has shifted from largely supporting their customers to out-and-out exploiting their customers. (Granted a certain amount of exploitation has been around for a long time, but things like the best backwards compatibility in the industry showed their support, as well.)<p>I suspect this is the result of a lot of internal changes (not one big one) but I also see no particular reason at the moment to expect this to change. To my eyes both the first and second derivative is heading in the direction of more exploitation. More treating users like a cattle field and less like customers. When new features or work is being proposed at Microsoft, it is clear that it is being analyzed entirely in terms of how it can benefit Microsoft and users are not at the table.<p>No amount of wishing this wasn't so is going to change anything. No amount of complaining about how <i>hard</i> it is to get off of Windows is going to change anything; indeed at this point you're just signalling to Microsoft that they are correct and they can treat you this way and there's nothing you will do about it for a long time.</p>
]]></description><pubDate>Wed, 08 Apr 2026 13:20:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47689860</link><dc:creator>jerf</dc:creator><comments>https://news.ycombinator.com/item?id=47689860</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47689860</guid></item></channel></rss>