<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Chris_Newton</title><link>https://news.ycombinator.com/user?id=Chris_Newton</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 02:16:54 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Chris_Newton" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Chris_Newton in "Some things just take time"]]></title><description><![CDATA[
<p><i>Speed actually just wins, because we are usually constrained by time.</i><p>Sorry, but I don’t understand what you mean here. What do we win by being faster at producing the wrong things?</p>
]]></description><pubDate>Sat, 21 Mar 2026 23:12:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47472503</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47472503</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47472503</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Some things just take time"]]></title><description><![CDATA[
<p>With all the emphasis on the speed of modern AI tools, we often seem to forget that velocity is a vector quantity. Increased speed only gets us where we want to be sooner if we are also heading in the right direction. If we’re far enough off course, increasing speed becomes counterproductive and it ends up taking longer to get where we want to be.<p>I’ve been noticing that this simple reality explains almost all of both the good and the bad that I hear about LLM-based coding tools. Using AI for research or to spin up a quick demo or prototype is using it to help plot a course. A lot of the multi-stage agentic workflows also come down to creating guard rails before doing the main implementation so the AI can’t get too far off track. Most of the success stories I hear seem to be in these areas so far. Meanwhile, probably the most common criticism I see is that an AI that is simply given a prompt to implement some new feature or bug fix for an existing system often misunderstands or makes bad assumptions and ends up repeatedly running into dead ends. It moves fast but without knowing which direction to move in.</p>
]]></description><pubDate>Sat, 21 Mar 2026 17:44:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47469310</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47469310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47469310</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Turn Dependabot off"]]></title><description><![CDATA[
<p>Interesting, thanks. In the UUID example you mentioned, it seems the CodeQL model is missing some information about how FastAPI’s runtime validation works and so not drawing correct inferences about the types. It doesn’t seem to have a general problem with tracking request parameters coming into Python web frameworks — in fact, the first thing that really impressed me about CodeQL was how accurate its reports were with some quite old Django code — but there is a lot more emphasis on type annotations and validating input against those types at runtime in FastAPI.<p>I completely agree about the problem of someone deciding to turn these kinds of scanning tools on and then expecting they’ll Just Work. I do think the better tools can provide a lot of value, but they still involve trade-offs and no tool will get everything 100% right, so there will always be a need to review their output and make intelligent decisions about how to use it. Scanning tools that don’t provide a way to persistently mark a certain result as incorrect or to collect multiple instances of the same issue together tend to be particularly painful to work with.</p>
]]></description><pubDate>Sat, 21 Feb 2026 16:29:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47102231</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47102231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47102231</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Turn Dependabot off"]]></title><description><![CDATA[
<p>This is true and customers do a lot of unfortunate things in the name of security theatre. Sometimes you have to play the cards you’ve been dealt and roll with it. However, educating them about why they’re wasting significant amounts of money paying you to deal with non-problems does sometimes work as a mutually beneficial alternative.</p>
]]></description><pubDate>Sat, 21 Feb 2026 15:38:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47101747</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47101747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47101747</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Turn Dependabot off"]]></title><description><![CDATA[
<p>OK, but all I said before was that CodeQL’s approach where it supplies a specific example to support a specific problem report is inherently resistant to false positives.<p>Clearly it is still <i>possible</i> to generate a false positive if, for example, CodeQL’s algorithm thinks it has found a path through the code where unsanitised user data can be used dangerously, but in fact there was a sanitisation step along the way that it didn’t recognise. This is the kind of situation where the theoretical result about not being able to determine whether a semantic property holds in all cases is felt in practical terms.<p>It still seems much less likely that an algorithm that needs to produce a specific demonstration of the problem it claims to have found will result in a false positive than the kind of naïve algorithms we were discussing before that are based on a generic look-up table of software+version=vulnerability without any attempt to determine whether there is actually a path to exploit that vulnerability in the real code.</p>
]]></description><pubDate>Sat, 21 Feb 2026 15:32:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47101696</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47101696</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47101696</guid></item><item><title><![CDATA[New comment by Chris_Newton in "I verified my LinkedIn identity. Here's what I handed over"]]></title><description><![CDATA[
<p>I too found that my LinkedIn account had suddenly become “temporarily” disabled a little while ago, for reasons unspecified. I too was invited to share my government ID with some verification system to get back in again.<p>I too declined on privacy grounds.</p>
]]></description><pubDate>Sat, 21 Feb 2026 12:27:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47100143</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47100143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47100143</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Turn Dependabot off"]]></title><description><![CDATA[
<p>If you replace a dependency that has a known vulnerability with a different dependency that does not, surely that is objectively an improvement in at least that specific respect? Of course we can’t guarantee that it didn’t introduce some other problem as well, but not fixing known problems because of hypothetical unknown problems that might or might not exist doesn’t seem like a great strategy.</p>
]]></description><pubDate>Sat, 21 Feb 2026 11:33:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47099807</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47099807</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47099807</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Turn Dependabot off"]]></title><description><![CDATA[
<p><i>CodeQL seems to raise too many false-positives in my experience.</i><p>I’d be interested in what kinds of false positives you’ve seen it produce. The functionality in CodeQL that I have found useful tends to accompany each reported vulnerability with a specific code path that demonstrates how the vulnerability arises. While we might still decide there is no risk in practice for other reasons, I don’t recall ever seeing it make a claim like this that was incorrect from a technical perspective. Maybe some of the other types of checks it performs are more susceptible to false positives and I just happen not to have run into those so much in the projects I’ve worked on.</p>
]]></description><pubDate>Sat, 21 Feb 2026 11:29:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47099788</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47099788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47099788</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Turn Dependabot off"]]></title><description><![CDATA[
<p>Sorry, I don’t understand the point you’re making. If CodeQL reports that you have an XSS vulnerability in your code, and its report includes the complete and specific code path that creates that vulnerability, how is Rice’s theorem applicable here? We’re not talking about decidability of some semantic property in the general case; we’re talking about a specific claim about specific code that is demonstrably true.</p>
]]></description><pubDate>Sat, 21 Feb 2026 11:17:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47099713</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47099713</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47099713</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Turn Dependabot off"]]></title><description><![CDATA[
<p>Dependabot has some value IME, but all naïve tools that only check software and version numbers against a vulnerability database tend to be noisy if they don’t then do something else to determine whether your code is actually exposed to a matching vulnerability.<p>One security checking tool that has genuinely impressed me recently is CodeQL. If you’re using GitHub, you can run this as part of GitHub Advanced Security.<p>Unlike those naïve tools, CodeQL seems to perform a real tracing analysis through the code, so its report doesn’t just say you have user-provided data being used dangerously, it shows you a complete, step-by-step path through the code that connects the input to the dangerous usage. This provides useful, actionable information to assess and fix real vulnerabilities, and it is inherently resistant to false positives.<p>Presumably there is still a possibility of false negatives with this approach, particularly with more dynamic languages like Python where you could surely write code that is obfuscated enough to avoid detection by the tracing analysis. However, most of us don’t intentionally do that, and it’s still useful to find the rest of the issues even if the results aren’t perfect and 100% complete.</p>
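To make the kind of report described above concrete, here’s a hypothetical sketch of a source-to-sink path of the sort a tracing analysis flags. This is illustrative TypeScript I’ve made up, not CodeQL output, and none of the function names come from any real framework:

```typescript
// Step 1: source — a value arrives from the user (simulated via a URL here).
function getQueryParam(url: string, name: string): string {
  return new URL(url).searchParams.get(name) ?? "";
}

// Step 2: the value passes through an intermediate function unchanged.
function buildGreeting(name: string): string {
  return `<h1>Hello, ${name}!</h1>`; // raw interpolation, no escaping
}

// Step 3: sink — the string is emitted as HTML without sanitisation.
function renderPage(url: string): string {
  const name = getQueryParam(url, "name"); // tainted value enters here
  return buildGreeting(name);              // and reaches the HTML sink here
}

// A tracing analysis would report the chain:
//   searchParams.get → buildGreeting → template interpolation into HTML
const page = renderPage("https://example.test/?name=<i>Eve</i>");
console.log(page); // the markup in the query parameter survives to the output
```

The point is that the report is a concrete, checkable chain of steps through your own code, rather than a bare assertion that some dependency version is vulnerable.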
]]></description><pubDate>Sat, 21 Feb 2026 07:46:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47098473</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47098473</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47098473</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Modern CSS Code Snippets: Stop writing CSS like it's 2015"]]></title><description><![CDATA[
<p><i>MVC is a structural separation of responsibilities between model, view, and control logic.</i><p>Yes, but the “MVC” pattern used by various back-end web frameworks that borrowed the term a while back actually has very little to do with the original MVC of the Reenskaug era.<p>The original concept of MVC is based on a triangle of three modules with quite specific responsibilities and relationships. The closest equivalent on the back-end of a web application might be having a data model persisted via a database or similar, and then a web server providing a set of HTTP GET endpoints allowing queries of that model state (perhaps including some sort of WebSocket or Server-Sent Event provision to observe any changes) and a separate set of HTTP POST/PUT/PATCH endpoints allowing updates of the model state. Then on the back end, your “view” code handles any query requests, including monitoring the model state for changes and notifying any observers via WS/SSE, while your “controller” code handles any mutation requests. And then on the front end, you render your page content based on the back-end view endpoints, subscribe for notifications of changes that cause you to update your rendering, and any user interactions get sent to the back-end controller endpoints.<p>In practice, I don’t recall ever seeing an “MVC” back-end framework used anything like that. Instead, they typically have a “controller” in front of the “model” and have it manage all incoming HTTP requests, with “view” referring to the front-end code. This is fundamentally a tiered, linear relationship and it allocates responsibilities quite differently to the original, triangular MVC.</p>
]]></description><pubDate>Mon, 16 Feb 2026 00:42:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47029450</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47029450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47029450</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Modern CSS Code Snippets: Stop writing CSS like it's 2015"]]></title><description><![CDATA[
<p>In the original MVC architecture, the fundamental idea was that the model was responsible for storing the application state, a view was responsible for rendering output to the user, and a controller was responsible for responding to user interactions.<p>The model can be completely unaware of any specific views or controllers. It only needs to provide an interface allowing views to observe the current state and controllers to update that state.<p>In practice, views and controllers usually aren’t independent and instead come as a pair. This is because most modern UIs use some kind of event-driven architecture where user interactions are indicated by events from some component rendered by the view that the controller then handles.<p>My go-to example to understand why this architecture is helpful is a UI that features a table showing some names and a count for each, alongside a chart visualising that data graphically. Here you would have a model that stores the names and counts as pure data, and you would have two view+controller pairs, one managing the table and one the chart. Each view observes the model and renders an updated table or chart when the model state changes. Each controller responds to user interactions that perhaps edit a name or change its count — whether by typing a new value as text in an editable table cell or by dragging somewhere relevant in the chart — by telling the model to update its state to match (which in turn causes all views observing the model to refresh, without any further action from whichever controller happened to be handling that user interaction).<p>In practical terms for a React application, we might implement this with a simple object/Map somewhere that holds the names and values (our “model”) and two top-level React components that each get rendered once into some appropriate container within the page. 
Each component would have props to pass in (a) the current state and (b) any functions to be called when the user makes a change. Then you just write some simple glue logic in plain old JavaScript/TypeScript that handles keeping track of observers of the model, registering an observer for each top-level component that causes it to rerender when the state changes, and providing a handler for each type of change the user is allowed to make that updates the state and then notifies the observers.<p>There are lots of variations on this theme, for example once you start needing more complicated business logic to interpret a user interaction and decide what state change is required or you need to synchronise your front-end model state with some remote service. However, you can scale a very long way with the basic principle that you hold your application state as pure data in a model that doesn’t know anything about any specific user interface or remote service and instead provides an interface for any other modules in the system to observe and/or update that state.</p>
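The glue logic described above can be sketched in a few lines of framework-free TypeScript. All the names here are illustrative, not from any library; the two string variables stand in for the table and chart renderings:

```typescript
type Listener = () => void;

// The "model": pure data plus an observer registry. It knows nothing
// about tables, charts, or any other specific view.
class CountsModel {
  private counts = new Map<string, number>();
  private listeners = new Set<Listener>();

  subscribe(listener: Listener): () => void {
    this.listeners.add(listener);
    return () => { this.listeners.delete(listener); }; // unsubscribe handle
  }

  entries(): [string, number][] {
    return [...this.counts];
  }

  // Entry point used by controllers: update state, then notify observers.
  setCount(name: string, count: number): void {
    this.counts.set(name, count);
    this.listeners.forEach(fn => fn());
  }
}

// Two independent "views" observing the same model.
const model = new CountsModel();
let tableRender = "";
let chartRender = "";

model.subscribe(() => {
  tableRender = model.entries().map(([n, c]) => `${n}: ${c}`).join(", ");
});
model.subscribe(() => {
  chartRender = model.entries().map(([n, c]) => `${n} ${"#".repeat(c)}`).join(" | ");
});

// One user interaction, handled by one controller, refreshes both views.
model.setCount("apples", 3);
console.log(tableRender); // "apples: 3"
console.log(chartRender); // "apples ###"
```

Notice that the controller handling the interaction never touches either view directly; both refresh because they observe the model.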
]]></description><pubDate>Mon, 16 Feb 2026 00:12:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47029245</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47029245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47029245</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Babylon 5 is now free to watch on YouTube"]]></title><description><![CDATA[
<p>B5 is still one of my favourite TV shows of all time.<p>The common criticisms are largely true: it does start slow with some weak episodes in season 1, some of the acting is a bit wooden, the CGI hasn’t aged well, season 5 is slightly anti-climactic because they largely wrapped up the main plot arc in season 4 in case the final season didn’t happen.<p>On the other hand, it had an epic storyline that spanned not just episodes but multiple seasons in a way that no-one had really tried in sci-fi before. That storyline made sense and weaves in and out of the individual episodes because it was planned out in advance. The world-building and development of different cultures and how they relate is generally strong.<p>Against that over-arching backdrop, it also had a lot of good individual episodes. They had genuine character development. They explored social and moral issues as well as any show of that period. They varied from diplomatic and political settings to the adventure of deep space exploration to almost pure action episodes. They varied in scale too, from relatable stories about a single individual, to stories about a whole planet or culture, right up to the fate of the known universe.<p>Much of the acting criticism is directed at the main leader characters, but I’ve always thought this is slightly unfair, because the script often relies on those characters to carry the plot and provide much of the exposition and those tend to be the more formulaic parts. The same show also features some of the best acting and main character arcs in TV sci-fi, with the relationship between Londo Mollari (played by Peter Jurasik) and G'Kar (Andreas Katsulas) being one of the great double acts. There were many good moments from the rest of the ensemble cast too, from the doctor wrestling with his conscience to a certain wave. 
And then there were some great supporting/recurring roles, from the light relief of Zathras (and Zathras, Zathras, Zathras, Zathras and Zathras, of course) to the much more serious Bester (arguably Walter Koenig’s finest work).<p>If you haven’t watched B5 and you’re a fan of epic space sci-fi, I highly recommend it even with its flaws. The first season is a slow burner (although it also has a lot of subtle set-up that you won’t appreciate until much later) but it picks up. If you’re the type of viewer who can’t stand filler episodes, there used to be some relatively spoiler-free guides to which early episodes you really need to watch and which you can skip, so you could look for one of those. Don’t watch <i>In The Beginning</i> first, though; it’s a prequel TV movie that has lots of spoilers about the main story that you’re not supposed to know yet when you watch the early series.</p>
]]></description><pubDate>Sat, 14 Feb 2026 18:26:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47016949</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=47016949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47016949</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Second Win11 emergency out of band update to address disastrous Patch Tuesday"]]></title><description><![CDATA[
<p>I expect you’re right about the sales funnel angle, though neither Windows nor Office seems to be the same kind of product that those brands have traditionally described any more, presumably for that same reason.<p>Windows appears to be positioned more as a platform to reach all the online services now, rather than its traditional role as a desktop OS. Can you even activate it without being online and having a Microsoft account any more? I’m out of the loop, so genuinely don’t know the answer to this one.<p>Office — or whatever it’s being called after the recent changes — also appears to have morphed into something quite different. I tried searching just now to see if you could still buy a permanent licence and install the classic applications like Word and Excel locally, and some sources implied you could, but I didn’t actually find any way to buy it in five minutes of looking around office.microsoft.com. As far as I saw, that site is now 100% about the online SaaS version and trying to get users to save their documents in the cloud. For businesses, the strategy seems to include promoting other online services like SharePoint and Teams as well.<p>So I think I stand by my original argument, though I don’t think it necessarily disagrees with yours. Windows and The Software Product/Service Formerly Known As Office might still be a significant part of Microsoft’s sales funnel, but they aren’t the products that Windows and Office used to be any more. The products they used to be have been repurposed to support an online-first corporate strategy, along with almost everything else in the Nadella era. Would Microsoft care if 100% of their customers stopped using Windows tomorrow and jumped to Apple or Linux systems, as long as they still used the other services that generate most of Microsoft’s revenues these days? I’m not entirely sure they would.</p>
]]></description><pubDate>Sun, 25 Jan 2026 15:39:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46755000</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=46755000</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46755000</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Second Win11 emergency out of band update to address disastrous Patch Tuesday"]]></title><description><![CDATA[
<p><i>The cornerstone of Microsoft still is Windows and Office.</i><p>Again, is it really, though? I have no special insider knowledge so perhaps this is just a misunderstanding of the public information, but just going by the organisation structure, leadership comments and recent financials, it looks like Windows makes up a relatively small part of Microsoft’s revenues these days, while the traditional desktop Office applications seem to be almost lost in the noise. The emphasis seems to be firmly on cloud services, though admittedly with all the rebranding from Microsoft lately, I find it hard to understand even what basic products and services they offer any more.</p>
]]></description><pubDate>Sun, 25 Jan 2026 12:51:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46753688</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=46753688</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46753688</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Second Win11 emergency out of band update to address disastrous Patch Tuesday"]]></title><description><![CDATA[
<p><i>Microsoft has seemingly been in a slow but steady decline for 10 years now.</i><p>Has it really, though? Or has it just shifted its corporate priorities away from its traditional stalwarts of Windows and Office, but in doing so caused disruption to users that had bet on the eternal stability of Microsoft’s product line? I don’t like the current direction of Windows any more than the next guy, and personally I’ve made other choices in recent years, but as a general principle, I’m not sure how reasonable it is to expect a business to continue offering the same product or service indefinitely if market forces are pushing it elsewhere.<p>IMHO, a deeper problem here is that we collectively allowed a near-monopoly culture to develop around desktop operating systems and basic business software. Instead of having a healthy degree of competition between providers and using standardisation to ensure interoperability and portability of our data, we’ve ended up in a “too big to fail” situation where many users have all their eggs in one basket and that basket has a rapidly growing hole in the bottom and looks like it’s going to fail anyway.<p>There are also reasonable arguments to be made about length of support for products already sold, forced obsolescence and ratcheting “upgrades”, where possibly the actions of some providers in the market are exploitative in ways we should not allow, and therefore regulating to prevent the undesirable behaviours might be in the public interest.<p>Ultimately, I think a combination of restricting customer-hostile practices while also encouraging a healthy degree of competition and interoperability in important markets would be best for the users and fair to the developers. 
Sadly, right now, we have neither of those things, and that’s how we get Windows 11, the mobile device duopoly, numerous examples of products or services being locked down against their users’ interests, online services that people increasingly rely on for fundamental aspects of their normal lives and yet that have little real obligation to those people in return, and assorted other ills of the 21st century tech landscape.</p>
]]></description><pubDate>Sun, 25 Jan 2026 11:47:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46753244</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=46753244</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46753244</guid></item><item><title><![CDATA[New comment by Chris_Newton in "NixOS 25.11 released"]]></title><description><![CDATA[
<p>I can only speak anecdotally, so it’s entirely possible that I’ve just been unlucky with this particular box, but I’ve seen a few quite serious issues going back over the past few years since I switched to NixOS as my primary OS.<p>Not so long ago there was some sort of problem with Hydra builds for a recent version of Node. That seemed to result in trying to build the whole thing locally on every update, taking a huge amount of time and then typically failing there as well.<p>I’ve seen compatibility problems between Nvidia drivers and Linux kernel versions as well. We did have a specific reason for choosing Nvidia for that particular workstation, but otherwise, I’d agree with popular advice to get AMD if you’re building a Linux box, just based on the frequency and severity of Nvidia driver issues we’ve seen here.<p>I’ve seen a few issues with Ubuntu upgrades over the years as well, and wouldn’t necessarily rate that much higher for stability. That’s always surprised me because IME Debian Stable is the gold standard — something I’ve trusted with our production servers for well over a decade now, from unattended upgrades to several major new releases, and barely seen a flicker of a hint of anything breaking in all that time. To be fair, I haven’t used Debian much on workstations, so I don’t know whether the kinds of issues I’ve experienced with NixOS and Ubuntu would have been more common if I had.</p>
]]></description><pubDate>Tue, 02 Dec 2025 00:20:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46115553</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=46115553</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46115553</guid></item><item><title><![CDATA[New comment by Chris_Newton in "NixOS 25.11 released"]]></title><description><![CDATA[
<p>Instability is one of the biggest but perhaps also the least understood downsides of NixOS, IMHO.<p>Contrary to the name, even the stable branch of NixOS can have problems while installing routine updates with `nixos-rebuild switch --upgrade`. In fairness, at least with NixOS you can normally roll back to a previous working configuration where you can try to fix or work around the problem if that does happen. It’s still painful if you have to do that, though.<p>Even if your routine updates all go smoothly, as you mentioned, each stable release is only supported for a <i>very</i> limited time window after the next one is out. NixOS doesn’t have any long-term support branch in the sense that some distros do. Again, you can overcome this to a degree by customising your configuration if you need specific versions of certain packages, but in doing so you’re moving back towards manually setting things up and resolving your own compatibility issues rather than having a distro with compatible packages you can install in whatever combination you want, which reduces the value of using a distro with a package repository in the first place.<p>To be clear, I’m a big fan of NixOS. I run it as my daily driver on a workstation where I do a lot of work on different projects for different clients. Its ability to have a clean, declarative description of what’s currently installed globally or for any given user or even when working in any given project directory for any given user is extremely valuable to me.<p>But it’s also fair to say that NixOS is not for everyone. It has been <i>by far</i> the least stable Linux distro I have ever used, in the sense of “If I turn my computer on and install the latest updates from the stable branch, will my computer still work afterwards?”. If you’re looking for a distro you can deploy and then maintain with little more than semi-automatic routine updates for a period of years then, at least for now, it is not the distro for you.</p>
]]></description><pubDate>Mon, 01 Dec 2025 02:18:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46102747</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=46102747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46102747</guid></item><item><title><![CDATA[New comment by Chris_Newton in "The current state of the theory that GPL propagates to AI models"]]></title><description><![CDATA[
<p>I once had a well-known LLM reproduce pretty much an entire file from a well-known React library verbatim.<p>I was writing code in an unrelated programming language at the time, and the bizarre inclusion of that particular file in the output was presumably because the name of the library was very similar to a keyword I was using in my existing code, but this experience did not fill me with confidence about the abilities of contemporary AI. ;-)<p>However, it did clearly demonstrate that LLMs with billions or even trillions of parameters certainly can embed enough information to reproduce some of the material they were trained on verbatim or very close to it.</p>
]]></description><pubDate>Thu, 27 Nov 2025 19:02:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46072248</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=46072248</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46072248</guid></item><item><title><![CDATA[New comment by Chris_Newton in "Shai-Hulud Returns: Over 300 NPM Packages Infected"]]></title><description><![CDATA[
<p><i>I think I prefer languages that realize things can improve and are willing to say if you want to run 10 year old code, use a 10 year old compiler/runtime.</i><p>IMHO, the trouble with that stance is that it leaves no path to <i>incrementally</i> update a long-lived system to benefit from any of those improvements.<p>Suppose we have an application that runs on 2025’s most popular platform and in ten years we’re porting it to whatever new platform is popular in 2035. Personally, I’d like to know that all the business logic and database queries and UI structure and whatever else we wrote that was working before will still be working on the new platform, to whatever extent that makes sense. I’d like to make only some reasonably necessary set of changes for things that are actually different between the two platforms.<p>If we can’t do that, our only other option is a big rewrite. That is how you get a Python 2 to Python 3 situation. And that, in turn, is how you get a lot of systems stuck on the older version for years, despite all the advantages any later versions might offer.</p>
]]></description><pubDate>Mon, 24 Nov 2025 20:39:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46039009</link><dc:creator>Chris_Newton</dc:creator><comments>https://news.ycombinator.com/item?id=46039009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46039009</guid></item></channel></rss>