<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: swisniewski</title><link>https://news.ycombinator.com/user?id=swisniewski</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 16:15:33 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=swisniewski" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by swisniewski in "OpenAI's o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors"]]></title><description><![CDATA[
<p>Let’s assume the AI does outperform the doctor.<p>I still want humans in the loop, interpreting the LLM’s findings and providing a sanity check.<p>You can’t hold an LLM accountable.<p>That’s the minimum responsible bar even for LLM-authored code, where the stakes are usually low. For something as important as ER diagnostics, having a human in the loop is crucial.<p>The narrative that these tools are replacing human intelligence rather than augmenting it is, quite frankly, stupid.<p>We should embrace these tools.<p>But “eliminating doctors”… hardly.</p>
]]></description><pubDate>Sun, 03 May 2026 21:06:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=48001495</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=48001495</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48001495</guid></item><item><title><![CDATA[New comment by swisniewski in "WireGuard makes new Windows release following Microsoft signing resolution"]]></title><description><![CDATA[
<p>How big is the WireGuard user base on Windows?<p>How often do they ship new versions?<p>My understanding is that:<p>1. Windows drivers are attested by Microsoft<p>2. Windows collects driver telemetry<p>Which means a really good question to ask is:<p>Why are they canceling driver-signing accounts without looking at those metrics?</p>
]]></description><pubDate>Sat, 11 Apr 2026 05:19:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47727646</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=47727646</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47727646</guid></item><item><title><![CDATA[New comment by swisniewski in "Is BGP safe yet?"]]></title><description><![CDATA[
<p>You can use BGP hijacks to spoof another website.<p>You just need to get a publicly trusted CA to mint a certificate for your new site.<p>This can be done, for example, with Let’s Encrypt, using several of the domain-verification challenges they support.<p>There are some protections against this, such as CAA records in DNS, which restrict which CAs can issue certs for a domain and, depending on the CA, which verification methods are allowed. But that may not provide adequate protection.<p>For example, if you are using LE with verification mechanisms other than DNS, an attacker who hijacks your routes could trick LE into issuing them a cert.<p>That also depends on the security of DNS, which can be tricky.<p>So, yes, BGP hijacks can be used to impersonate other sites, even though they are using HTTPS.<p>When you configure your domains, make sure you set up CAA, locked down to your specific CA, and have DNSSEC set up, as a minimum bar. Also avoid DV mechanisms that rely only on control over an IP address, as those can be subverted via BGP.</p>
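<p>As a sketch of that minimum bar, the zone entries might look like the following (the domain and contact address are placeholders; the validationmethods parameter is the RFC 8657 extension, which Let's Encrypt supports):

```
; Placeholder zone entries -- adapt the domain and CA to your setup.
; Only Let's Encrypt may issue certs, and only via the dns-01 challenge.
example.com.  IN  CAA  0 issue "letsencrypt.org; validationmethods=dns-01"
; Refuse wildcard issuance outright.
example.com.  IN  CAA  0 issuewild ";"
; Optionally, ask CAs to report violation attempts.
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

Pinning dns-01 is what closes the IP-only verification hole: even a successful BGP hijack of your web servers doesn't let the attacker complete the challenge.</p>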
]]></description><pubDate>Wed, 01 Apr 2026 14:36:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47601558</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=47601558</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47601558</guid></item><item><title><![CDATA[New comment by swisniewski in "Spring Boot Done Right: Lessons from a 400-Module Codebase"]]></title><description><![CDATA[
<p>There is no right way to do Spring Boot. The entire idea is broken.<p>Dependency injection is good. It makes it possible to test stuff.<p>Automagic wiring of dependencies based on annotations is bad and horrible.<p>If you want to do dependency injection, do it the way Go programs do it: create the types you need in your main method and pass them into the constructors that need them.<p>When you write tests and want to inject something else, create something else and pass that in.<p>But the idea that you create magic containers, decorate packages or classes or methods or fields somewhere, and then stuff suddenly gets wired into something else via reflection magic is a maintenance nightmare. This is particularly true when some bean is missing, the one guy who knows which random package out of hundreds contains that bean is on vacation, and the poor schmucks on his team have no clue why their stuff doesn't work.<p>"I added Spring Boot to our messy Java project."<p>"Now you have 3 problems."</p>
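<p>The "wire it in main" idea translates to any language; here is a minimal sketch in Python rather than Go (all class names are illustrative, not from any real codebase):

```python
class Database:
    """Illustrative stand-in for a real data store."""
    def find_user(self, user_id):
        return {"id": user_id, "name": "alice"}

class UserService:
    def __init__(self, db):
        # The dependency arrives through the constructor --
        # no container, no annotations, no reflection magic.
        self.db = db

    def greet(self, user_id):
        return "hello, " + self.db.find_user(user_id)["name"]

class FakeDatabase:
    """In tests, build a fake and pass it in the same way."""
    def find_user(self, user_id):
        return {"id": user_id, "name": "test-user"}

def main():
    # Production wiring happens in one obvious place.
    service = UserService(Database())
    print(service.greet(1))  # hello, alice

if __name__ == "__main__":
    main()
```

When a dependency is missing, the failure is a plain "wrong number of arguments" at the call site in main, not a container error pointing at no particular file.</p>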
]]></description><pubDate>Mon, 30 Mar 2026 17:48:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47577449</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=47577449</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47577449</guid></item><item><title><![CDATA[New comment by swisniewski in "GitHub appears to be struggling with measly three nines availability"]]></title><description><![CDATA[
<p>To be honest, I’m not surprised that GitHub has been having issues.<p>If you have ever operated GitHub Enterprise Server, it’s a nightmare.<p>It doesn’t support active-active; it only supports passive standbys. Minor version upgrades can’t be done without downtime, and don’t support rollbacks. If you deploy an update and it has a bug, the only thing you can do is restore from backup, leading to data loss.<p>This is the software they sell to their highest-margin customers, and it fails even basic sniff tests of availability.<p>Data loss for source code is a really big deal.<p>Downtime for source control is a really big deal.<p>Anyone who would release such a product with a straight face clearly doesn’t care deeply about availability.<p>So the fact that their managed product is also having constant outages isn’t surprising.<p>I think the problem is that they just don’t care.</p>
]]></description><pubDate>Mon, 23 Mar 2026 16:51:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47492009</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=47492009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47492009</guid></item><item><title><![CDATA[New comment by swisniewski in "Bringing Chrome to ARM64 Linux Devices"]]></title><description><![CDATA[
<p>I use a DGX Spark, with Cosmic as my DE, and it's super awesome.<p>This is a bit of a franken-distro, as it's Ubuntu + Nvidia packages + System76 packages, but it works pretty well.<p>I've been using Flatpak Chromium, which is OK for most things. It performs a bit better than Firefox does. Having access to official Chrome will be nice though, as it should come with Widevine support. Chromium doesn't support DRM, so some things like Netflix don't work.</p>
]]></description><pubDate>Thu, 12 Mar 2026 23:24:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47358664</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=47358664</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47358664</guid></item><item><title><![CDATA[New comment by swisniewski in "GPT-5 outperforms federal judges in legal reasoning experiment"]]></title><description><![CDATA[
<p>The premise seems flawed.<p>From the paper:<p>“we find that the LLM adheres to the legally correct outcome significantly more often than human judges”<p>That presupposes that a “legally correct” outcome exists.<p>The Common Law, which is the foundation of federal law and the law of 49 of the 50 states, is a “bottom up” legal system.<p>Legal principles flow from the specific to the general. That is, judges decide specific cases based on the merits of each individual case, and general principles are derived from lots of specific examples.<p>This is different from the Civil Law used in most of Europe, which is top-down: rulings in specific cases are derived from statutory principles.<p>In the US system, there isn’t really a “correct legal outcome”.<p>Common Law relies heavily on jurisprudence. That is, we have a system that defers to the opinions of “important people”.<p>So, there isn’t a “correct” legal outcome.</p>
]]></description><pubDate>Thu, 12 Feb 2026 00:13:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46983118</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=46983118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46983118</guid></item><item><title><![CDATA[New comment by swisniewski in "Software factories and the agentic moment"]]></title><description><![CDATA[
<p>Some of this is people trying to predict the future.<p>And it’s not unreasonable to assume it’s going there.<p>That being said, the models are not there yet. If you care about quality, you still need humans in the loop.<p>Even when given high-quality specs, existing code to use as an example, and lots of parallelism and orchestration, the models still make a lot of mistakes.<p>There’s lots of room for Software Factories, and Orchestrators, and multi-agent swarms.<p>But today you still need humans reviewing code before you merge to main.<p>Models are getting better, quickly, but I think it’s going to be a while before “don’t have humans look at the code” is true.</p>
]]></description><pubDate>Sat, 07 Feb 2026 22:25:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46928819</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=46928819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46928819</guid></item><item><title><![CDATA[New comment by swisniewski in "Reducing Dependabot Noise"]]></title><description><![CDATA[
<p>This is not satire.<p>If you have a large dependency graph, you are going to have a lot of vulnerable stuff.<p>Letting one computer send you patches and another computer merge them for you when all your tests pass is a good thing.</p>
]]></description><pubDate>Sun, 18 Jan 2026 04:19:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46664740</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=46664740</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46664740</guid></item><item><title><![CDATA[New comment by swisniewski in "Reducing Dependabot Noise"]]></title><description><![CDATA[
<p>Take a look at pr-bot:<p><a href="https://github.com/marqeta/pr-bot" rel="nofollow">https://github.com/marqeta/pr-bot</a><p>The answer to Dependabot or Snyk PRs is to automatically merge them once all the status checks pass.<p>This frees your devs from having to worry about patching.<p>pr-bot lets you define policy on when it’s OK to auto-merge PRs.</p>
]]></description><pubDate>Sun, 18 Jan 2026 01:57:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46664090</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=46664090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46664090</guid></item><item><title><![CDATA[New comment by swisniewski in "AI misses nearly one-third of breast cancers, study finds"]]></title><description><![CDATA[
<p>The <i>article</i> has the headline "AI Misses Nearly One-Third of Breast Cancers, Study Finds".<p>It also has the following quotes:<p>1. "The results were striking: 127 cancers, 30.7% of all cases, were missed by the AI system"<p>2. "However, the researchers also tested a potential solution. Two radiologists reviewed only the diffusion-weighted imaging"<p>3. "Their findings offered reassurance: DWI alone identified the majority of cancers the AI had overlooked, detecting 83.5% of missed lesions for one radiologist and 79.5% for the other. The readers showed substantial agreement in their interpretations, suggesting the method is both reliable and reproducible."<p>So, if you are saying that the <i>article</i> is "not about AI performance vs human performance", that's not correct.<p>The <i>article</i> very clearly makes claims about the performance of AI vs the performance of doctors.<p>The <i>study</i> doesn't have the ability to state anything about the performance of doctors vs the performance of AI, because of the issues I mentioned. That was my point.<p>But the <i>study</i> can't state anything about the sensitivity of AI either, because it doesn't compare the sensitivity of AI-based mammography (X-ray) analysis with that of human-reviewed mammography. Instead it compares AI-based mammography vs human-read DWI where the humans knew the results were all true positives. It's both a different task ("diagnose" vs "find a pattern to verify an existing diagnosis") and different data (X-ray vs MRI).<p>So, I don't think the claims from the <i>article</i> are valid in any way. And the <i>study</i> seems very flawed.<p>Also, attempting to measure sensitivity without also measuring specificity seems doubly flawed, because there are very big tradeoffs between the two.<p>Increasing sensitivity while decreasing specificity can lead to unnecessary amputations. That's a very high cost.
Also, studies have apparently shown that high false-positive rates in breast cancer screening can increase cancer risk, because they deter future screening.<p>Given that I don't have access to the actual study, I have to assume I am missing something. But I don't think it's what you think I'm missing.</p>
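<p>To make that tradeoff concrete, here is a toy calculation (all numbers invented for illustration, none from the study) showing how buying extra sensitivity by loosening a decision threshold can flood a screening program with false positives:

```python
def screening_outcomes(n_patients, prevalence, sensitivity, specificity):
    """Split a screened population into confusion-matrix cells."""
    sick = n_patients * prevalence
    healthy = n_patients - sick
    true_pos = sick * sensitivity          # cancers caught
    false_neg = sick - true_pos            # cancers missed
    false_pos = healthy * (1 - specificity)  # healthy people flagged
    return true_pos, false_neg, false_pos

# Conservative threshold: misses more cancers, few false alarms.
tp, fn, fp = screening_outcomes(100_000, 0.005, 0.70, 0.97)
print(f"conservative: {fn:.0f} missed cancers, {fp:.0f} false positives")

# Aggressive threshold: misses fewer cancers, far more false alarms.
tp, fn, fp = screening_outcomes(100_000, 0.005, 0.90, 0.85)
print(f"aggressive:   {fn:.0f} missed cancers, {fp:.0f} false positives")
```

With these made-up numbers, catching 100 more cancers per 100,000 screens costs roughly 12,000 additional false positives, which is why reporting sensitivity without specificity tells you very little.</p>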
]]></description><pubDate>Thu, 08 Jan 2026 18:09:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46544340</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=46544340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46544340</guid></item><item><title><![CDATA[New comment by swisniewski in "AI misses nearly one-third of breast cancers, study finds"]]></title><description><![CDATA[
<p>Huh? I was commenting that there were no controls and the doctors were given skewed data, so any conclusions about AI ability vs doctor ability seem misplaced. Which seems to be what you just said… so I am confused about what I said that was inaccurate.<p>Can you clarify?<p>I also noted that I only had access to the posted summary and the original linked article, not the study itself. So if there is data I am missing… please enlighten me.</p>
]]></description><pubDate>Thu, 08 Jan 2026 08:39:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46538764</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=46538764</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46538764</guid></item><item><title><![CDATA[New comment by swisniewski in "AI misses nearly one-third of breast cancers, study finds"]]></title><description><![CDATA[
<p>The descriptions in the summaries sound very flawed.<p>1. They only tested 2 radiologists, and compared them to one model. Thus the results don’t say anything about how radiologists in general perform against AI in general. The most generous thing the study can say is that 2 radiologists outperformed a particular model.<p>2. The radiologists were only given one type of image, and only for those patients that were missed by the AI. The summaries don’t say if the test was blind. The study has 3 authors, all of whom appear to be radiologists, and it mentions 2 radiologists looked at the AI-missed scans. This raises questions about whether the test was blind or not.<p>Giving humans data they know are true positives and saying “find the evidence the AI missed” is very different from giving an AI model, one also trained to reduce false positives, a classification task.<p>Humans are very good at finding patterns (even if they don’t exist) when they want to find a pattern.<p>Even if the study was blind initially, trained human doctors would likely quickly notice that the data they are analyzing is skewed.<p>Even if they didn’t notice, humans are highly susceptible to anchoring bias.<p>Anchoring bias is a cognitive bias where individuals rely too heavily on the first piece of information they receive (the "anchor") when making subsequent judgments or decisions.<p>The skewed nature of the data has a high potential to amplify any anchoring bias.<p>If the experiment had controls, any measurement error resulting from human estimation biases could potentially cancel out (a large random sample of either images or doctors should be expected to have the same estimation errors in each group). But there were no controls at all in the experiment, and the sample size was very small. So the influence of estimation biases on the result could be huge.<p>From what I can read in the summary, these results don’t seem reliable.<p>Am I missing something?</p>
]]></description><pubDate>Thu, 08 Jan 2026 08:26:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46538687</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=46538687</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46538687</guid></item><item><title><![CDATA[New comment by swisniewski in "Netflix to Acquire Warner Bros"]]></title><description><![CDATA[
<p>Look up the parent tree... There was this statement:<p>> From a Hacker News perspective, I wonder what this means for engineers working on HBO Max. Netflix says they’re keeping the company separate but surely you’d be looking to move them to Netflix backend infrastructure at the very least.<p>The HBO Max service has something like 128M subscribers. This is less than half of the 301M subscribers Netflix has, but is still a large number.<p>Certainly there's going to be some duplication, but it would be unwise to suddenly disrupt a delivery vehicle that 128M paying customers are using in favor of a different one.<p>So, you should expect all the various HBO Max clients in existence to continue working for at least 5 years after the acquisition closes, if not longer.<p>Suddenly turning that off and saying "go use the Netflix app" wouldn't be good.<p>In any case, moving all the WB content onto the Netflix CDN and making it available on all the Netflix clients is "product integration", not "infrastructure integration". You are likely to see that very quickly: weeks to months after the acquisition closes.<p>But getting rid of all the HBO Max client software that talks to the HBO Max servers running in whatever data center or cloud WB is using, and downloads video from whatever CDN WB has, plus all the associated infra stuff, that's infra integration and it won't happen for a while. I think that will take 5-10 years.</p>
]]></description><pubDate>Fri, 19 Dec 2025 20:05:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46330246</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=46330246</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46330246</guid></item><item><title><![CDATA[New comment by swisniewski in "Pop_OS 24.04 LTS with COSMIC desktop environment"]]></title><description><![CDATA[
<p>I use Cosmic on a DGX Spark as my daily driver, and it works pretty well.<p>They don’t have a Pop!_OS ISO for arm64, but they do have an arm64 Debian repo. So I just took DGX OS (what Nvidia ships on the device), added the Pop!_OS “releases” repo, and installed cosmic-session.<p>It works like a charm and provides a super useful tiling experience out of the box.<p>This is replacing my M3 Pro as my daily driver and I’ve been pretty happy with it.<p>I recently upgraded to an ultrawide monitor and find the Cosmic UX hands down better than what I get on the Mac with it.<p>If you want a Linux desktop with the productivity boost of a tiling window manager and a low learning curve, it’s pretty good.</p>
]]></description><pubDate>Thu, 11 Dec 2025 21:30:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46237422</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=46237422</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46237422</guid></item><item><title><![CDATA[New comment by swisniewski in "Netflix to Acquire Warner Bros"]]></title><description><![CDATA[
<p>Generally with large acquisitions, product integration tends to precede infrastructure integration by years to decades.<p>Look at GitHub as an example: they were acquired in 2018, and are just migrating to Azure now, after 7 years.<p>Microsoft was shipping integrations with GitHub back in 2018.<p>This was definitely the case with several Salesforce acquisitions (early product integration; little, no, or much later infrastructure integration).<p>So… I predict some level of content integration within a few months.<p>But infra integration is likely years away.</p>
]]></description><pubDate>Sat, 06 Dec 2025 02:07:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46169882</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=46169882</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46169882</guid></item><item><title><![CDATA[New comment by swisniewski in "Fizz Buzz without conditionals or booleans"]]></title><description><![CDATA[
<p>Not sure why this got downvoted.<p>The technique could be implemented without conditionals, but not in Python, and not using iterators.<p>You could do it in C, using & and ~ to make the cyclic counters work.<p>But, like I mentioned, the code in the article is very far from being free of conditionals.</p>
]]></description><pubDate>Wed, 19 Nov 2025 04:06:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=45975743</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=45975743</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45975743</guid></item><item><title><![CDATA[New comment by swisniewski in "Fizz Buzz without conditionals or booleans"]]></title><description><![CDATA[
<p>Sigh…<p>Saying the code doesn’t have conditionals or booleans is only true if you completely ignore how the functions being called are implemented.<p>Cycle involves conditionals, zip involves conditionals, range involves conditionals, array access involves conditionals, the string concatenation involves conditionals, the iterator expansion in the for loop involves conditionals.<p>This has orders of magnitude more conditionals than normal fizz buzz would.<p>Even the function calls involve conditionals (Python uses dynamic dispatch). Even if call-site caching is used to avoid repeated name lookups, that involves conditionals.<p>There is not a line of code in that file (even the import statement) that does not use at least one conditional.<p>So… interesting implementation, but it’s not “fizzbuzz without booleans or conditionals”.</p>
]]></description><pubDate>Wed, 19 Nov 2025 03:36:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45975588</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=45975588</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45975588</guid></item><item><title><![CDATA[New comment by swisniewski in "Disassembling terabytes of random data with Zig and Capstone to prove a point"]]></title><description><![CDATA[
<p>Another interesting thing… random data has a high likelihood of disassembling into valid (if random) instructions, but there’s a low probability that such instructions (particularly sequences of them) are valid semantically.<p>For example, there’s a very high chance a single random instruction would page fault.<p>If you want to generate random instructions and have them execute, you have to write a tiny debugger, intercept the page faults, fix up the program’s virtual memory map, then re-run the instruction to make it work.<p>This means that even though high-entropy data has a good chance of producing valid instructions, it doesn’t have a high chance of producing valid instruction sequences.<p>Code that actually does something will have much, much lower entropy.<p>That is interesting… even though random data is syntactically valid as instructions, it’s almost certainly invalid semantically.</p>
]]></description><pubDate>Thu, 13 Nov 2025 02:54:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45909955</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=45909955</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45909955</guid></item><item><title><![CDATA[New comment by swisniewski in "Too Much Go Misdirection"]]></title><description><![CDATA[
<p>There's a much simpler way to do this:<p>If you want your library to operate on bytes, then rather than taking in an io.Reader and trying to figure out the most efficient way to get bytes out of it, why not just have the library take in []byte rather than io.Reader?<p>If someone has a complex reader and needs to extract to a temporary buffer, they can do that. But if, like in the author's case, you already have []byte, then just pass that in rather than trying to wrap it.<p>I think the issue here is that the author is adding more complexity to the interface than needed.<p>If you need a []byte, take in a []byte. Your callers should be able to figure out how to get you one when they need to.<p>With Go, the answer is usually "just do the simple thing and you will have a good time".</p>
]]></description><pubDate>Mon, 19 May 2025 16:47:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=44031738</link><dc:creator>swisniewski</dc:creator><comments>https://news.ycombinator.com/item?id=44031738</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44031738</guid></item></channel></rss>