<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: black3r</title><link>https://news.ycombinator.com/user?id=black3r</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 05 Apr 2026 20:36:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=black3r" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by black3r in "LinkedIn is searching your browser extensions"]]></title><description><![CDATA[
<p>That's basically how it already works.<p>Extensions choose which sites they're active on and whether they expose any publicly available assets (e.g. some extensions modify a website's CSS by injecting their own stylesheet, so that asset is public, and then any website where the extension is active can call fetch("chrome-extension://<extension_id>/whatever/file/needed.css") if it knows the extension ID (fixed for each extension) and the file path to such an asset). If the fetch result is a 404, the site can assume the extension is not installed; if the result is a 200, it can assume the extension is installed.<p>This is what LinkedIn is doing: they have their own database of extension IDs with a known working file path for each, and they just issue these fetches. They have been doing it for years - I noticed it a few years back when I was developing a Chrome extension which also worked with LinkedIn, but back then fewer than 100 extensions were scanned, so I just assumed they wanted to detect specific extensions which break their site or their terms of use. Now it's apparently 6,000+ extensions...</p>
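<p>A minimal sketch of the probing technique described above. The extension ID and asset path are made up for illustration; real sites pair known extension IDs with known web-accessible resource paths from a database:</p>

```typescript
// Probe whether a given extension is installed by fetching one of its
// web-accessible resources. Outside a browser (or for an unknown ID),
// the fetch rejects and we treat that as "not installed".
async function isExtensionInstalled(extensionId: string, assetPath: string): Promise<boolean> {
  try {
    const res = await fetch(`chrome-extension://${extensionId}/${assetPath}`);
    return res.ok; // asset served (200) => extension installed and active here
  } catch {
    return false; // fetch rejected or blocked => assume not installed
  }
}
```

<p>A site scanning thousands of extensions just runs this probe once per (ID, asset path) pair from its database.</p>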
]]></description><pubDate>Thu, 02 Apr 2026 18:04:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47617972</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=47617972</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47617972</guid></item><item><title><![CDATA[New comment by black3r in "Modern CSS Code Snippets: Stop writing CSS like it's 2015"]]></title><description><![CDATA[
<p>You can have Controllers decoupled from Views using React. That's the basis of the "original" Flux/Redux architecture used by React developers 10+ years ago, when React was just beginning to get traction.<p>A Flux/Redux "Store" acts as the Model: it contains all the global state and determines exactly what gets rendered. A Flux/Redux "Dispatcher" acts as the Controller. And React "Components" (the Views) get their props from the Store and send events to the Dispatcher, which in turn modifies the Store and forces a redraw.<p>Of course they aren't entirely decoupled, because the view still has to call the controller functions, but the same controller action can be called from multiple views, and you can still design the architecture starting from the Model, through the Controller (which properties can change under what conditions), and only then design the Views (where the interactions can happen).</p>
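<p>A framework-free sketch of the Store/Dispatcher split described above. The names and action shapes are illustrative, not React's or Redux's actual API:</p>

```typescript
type Listener = () => void;

// The Store plays the Model role: it owns all state and notifies views.
class Store {
  private state = { count: 0 };
  private listeners: Listener[] = [];
  getState() { return this.state; }
  subscribe(fn: Listener) { this.listeners.push(fn); }
  // only the dispatcher should call this
  _apply(next: { count: number }) {
    this.state = next;
    this.listeners.forEach(fn => fn()); // "force a redraw"
  }
}

// The Dispatcher plays the Controller role: it decides how an event
// mutates the model, independently of any particular view.
class Dispatcher {
  constructor(private store: Store) {}
  dispatch(action: { type: "increment" | "decrement" }) {
    const { count } = this.store.getState();
    this.store._apply({ count: action.type === "increment" ? count + 1 : count - 1 });
  }
}
```

<p>Any number of views can subscribe to the same Store and dispatch the same actions, which is the partial decoupling the comment describes.</p>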
]]></description><pubDate>Mon, 16 Feb 2026 11:45:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47033881</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=47033881</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47033881</guid></item><item><title><![CDATA[New comment by black3r in "MinIO repository is no longer maintained"]]></title><description><![CDATA[
<p>In theory, "maintenance mode" should mean that they still deal with security issues, and "no longer maintained" that they don't even do that anymore...<p>Until a security issue is actually reported, though, the two feel very much the same...</p>
]]></description><pubDate>Fri, 13 Feb 2026 10:00:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47000960</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=47000960</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47000960</guid></item><item><title><![CDATA[New comment by black3r in "I miss thinking hard"]]></title><description><![CDATA[
<p>My experience is similar, but I feel I'm actually thinking way harder than I ever was before LLMs.<p>Before LLMs, once I was done with the design choices as you mention them - risks, constraints, technical debt, alternatives, possibilities - I cooked up a plan, and with that plan I could write the code without having to think hard. Actually writing code was relaxing for me, and I feel like I need some relaxation between hard-thinking sessions.<p>Nowadays we leave the code writing to LLMs because they do it way faster than a human could, but then we have to think hard to check whether the code the LLM wrote satisfies the requirements.<p>Reviewing junior developers' PRs also became harder now that they use LLMs. Juniors powered by AI are more ambitious and more careless. AI often suggests complicated code the juniors themselves don't understand; they just see that it works and commit it. Sometimes it suggests new library dependencies juniors wouldn't think of themselves, and of course it's the senior's role to decide whether the dependency is warranted and worth including. Average PR length has also increased. And juniors work much faster with AI, so we spend more time doing PR reviews.<p>I feel like my whole work has somehow collapsed into reviewing code from both sides: on one side the code my AI writes, on the other the code the juniors' AI wrote, the amount of which has increased. And even though I like reviewing code, it feels like the hardest part of my profession, and I liked it more when it was balanced with tasks which required less thinking...</p>
]]></description><pubDate>Wed, 04 Feb 2026 22:25:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46892754</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46892754</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46892754</guid></item><item><title><![CDATA[New comment by black3r in "What came first: the CNAME or the A record?"]]></title><description><![CDATA[
<p>> Literally every datacenter in the world was going to fail on this change<p>I would expect most datacenters to use their own local recursive caching DNS servers instead of relying on 1.1.1.1 to minimize latency.</p>
]]></description><pubDate>Mon, 19 Jan 2026 23:41:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46686148</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46686148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46686148</guid></item><item><title><![CDATA[New comment by black3r in "What came first: the CNAME or the A record?"]]></title><description><![CDATA[
<p>> 4. Ends up in test environment for, what, a month.. nothing using getaddrinfo from glibc is being used to test this environment or anyone noticed that it was broken<p>"Testing environment" sounds to me like a real network used with real user devices (like the network inside CloudFlare offices). That's what I would do if I were developing a DNS server anyway, in addition to unit tests (which obviously wouldn't catch this unless they were explicitly written for this case) and maybe integration/end-to-end tests, which might be running in Alpine Linux containers and as such using musl. If that's indeed the case, I can easily imagine how no one noticed anything was broken. First, look at this line:<p>> Most DNS clients don’t have this issue. For example, systemd-resolved first parses the records into an ordered set:<p>Now think about what real end-user devices are running: Windows/macOS/iOS obviously aren't using glibc, and Android also has its own C library even though it's Linux-based, so they all probably fall under "Most DNS clients don't have this issue".<p>That leaves GNU/Linux, where we could reasonably expect most software to use glibc for resolving queries, so presumably anyone using Linux on their laptop would catch this, right? Except most distributions have started using systemd-resolved (the most notable exception is Debian, but not many people use that on desktops/laptops), which is a locally caching recursive DNS server and as such acts as a middleman between glibc software and the network-configured DNS server, so it would resolve 1.1.1.1 queries correctly and then return the results from its cache ordered by its own ordering algorithm.</p>
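<p>As a simplified illustration of the quoted "ordered set" distinction (the record shape and sort key here are assumptions for the sketch, not systemd-resolved's actual implementation): a client that normalizes records into an ordered set is insensitive to the order the server sends them in, while a naive client inherits the wire order:</p>

```typescript
type ARecord = { name: string; address: string };

// Naive client: answer order is whatever the server sent.
function naiveParse(records: ARecord[]): string[] {
  return records.map(r => r.address);
}

// Ordered-set client: deduplicate, then impose its own stable ordering,
// so the result no longer depends on the server's wire order.
function orderedSetParse(records: ARecord[]): string[] {
  const unique = [...new Set(records.map(r => r.address))];
  return unique.sort();
}
```

<p>That server-order independence is why a middleman like systemd-resolved can mask a server-side ordering change from glibc applications behind it.</p>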
]]></description><pubDate>Mon, 19 Jan 2026 23:36:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46686096</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46686096</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46686096</guid></item><item><title><![CDATA[New comment by black3r in "Can Bundler be as fast as uv?"]]></title><description><![CDATA[
<p>I had been using pyenv for a decade before uv, and it wasn't a "major pain" either. But compared to uv it was infinitely more complex, because uv manages Python versions seamlessly.<p>If the Python version changes in a uv-managed project, you don't have to do any extra step; just run "uv sync" as you normally do when you want to install updated dependencies. uv automatically detects that it needs a new Python, downloads it, re-creates the virtual environment with it, and installs the deps, all in one command.<p>And since that's the command everyone runs anytime a dependency update is required, no dev is going to panic over why the app is not working after we merge in some new code requiring a newer Python because they missed the Python update memo.</p>
]]></description><pubDate>Fri, 02 Jan 2026 09:58:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46463245</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46463245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46463245</guid></item><item><title><![CDATA[New comment by black3r in "PNG in Chrome shows a different image than in Safari or any desktop app"]]></title><description><![CDATA[
<p>This picture does display differently in Chrome and Safari, but if I analyze it using the methods you did, I arrive at a different result: I don't see an iHDR chunk there; instead I see a gAMA chunk, and if I remove it with pngcrush the image shows normally in Chrome.<p>Maybe you linked a different picture?</p>
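<p>Checking which chunks a PNG contains is easy to do yourself; a PNG is an 8-byte signature followed by chunks of [4-byte big-endian length, 4-byte type, data, 4-byte CRC]. A minimal sketch (a chunk walker, not a validator):</p>

```typescript
// List the chunk type names in a PNG byte buffer, e.g. to spot a gAMA
// chunk. CRCs are not verified; this only walks the chunk layout.
function listPngChunks(buf: Uint8Array): string[] {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  const chunks: string[] = [];
  let offset = 8; // skip the 8-byte PNG signature
  while (offset + 8 <= buf.length) {
    const length = view.getUint32(offset); // 4-byte big-endian data length
    const type = String.fromCharCode(buf[offset + 4], buf[offset + 5], buf[offset + 6], buf[offset + 7]);
    chunks.push(type);
    offset += 12 + length; // length field + type + data + CRC
    if (type === "IEND") break; // IEND terminates the file
  }
  return chunks;
}
```

<p>Running this over the file in question would settle whether it carries an iHDR chunk or a gAMA chunk.</p>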
]]></description><pubDate>Sat, 27 Dec 2025 18:39:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46404052</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46404052</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46404052</guid></item><item><title><![CDATA[New comment by black3r in "2026 Apple introducing more ads to increase opportunity in search results"]]></title><description><![CDATA[
<p>There have been ads in the App Store for a long time. The upcoming change is that they will also appear further down in search results; right now they only show at the top...</p>
]]></description><pubDate>Fri, 19 Dec 2025 11:44:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46324747</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46324747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46324747</guid></item><item><title><![CDATA[New comment by black3r in "Getting bitten by Intel's poor naming schemes"]]></title><description><![CDATA[
<p>There are GPUs from three different generations in that list: the Quadro 6000 is an old Fermi card from 2010, the Quadro RTX 6000 is Turing from 2018, and the RTX 6000 Ada is Ada from 2022...<p>Oh, and there's also the RTX PRO 6000 Blackwell, which is Blackwell from 2025...</p>
]]></description><pubDate>Fri, 19 Dec 2025 11:40:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46324715</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46324715</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46324715</guid></item><item><title><![CDATA[New comment by black3r in "Getting bitten by Intel's poor naming schemes"]]></title><description><![CDATA[
<p>Ubuntu has alphabetical order too, but that's only useful if you want to know whether "noble" is newer than "jammy", and useless if you know you have 24.04 but have no idea what its codename is.<p>Android also sucks for developers because they have the public-facing version numbers and then API levels, which are different and don't always scale linearly (sometimes there is something like "Android 8.1" or "Android 12L" with a newer API). As a developer you always deal with the API levels (you specify a minimum API version in your code, not a minimum "OS version"), and you have to map that back to the version numbers users and managers know when explaining to them why you're raising the minimum requirements...</p>
]]></description><pubDate>Fri, 19 Dec 2025 11:29:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46324630</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46324630</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46324630</guid></item><item><title><![CDATA[New comment by black3r in "SoundCloud has banned VPN access"]]></title><description><![CDATA[
<p>Private Relay also works in macOS Safari.</p>
]]></description><pubDate>Mon, 15 Dec 2025 16:13:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46276404</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46276404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46276404</guid></item><item><title><![CDATA[New comment by black3r in "30 Year Anniversary of WarCraft II: Tides of Darkness"]]></title><description><![CDATA[
<p>> Development stared in the first months of 1995, and the game was released in North America and Australia on December 9, 1995.<p>This feels absolutely insane by today's standards. And not just in the gaming world. Somehow, with all the advancement of libraries, frameworks, coding tools, and even AI these days, development seems so much slower, and too much time is spent on eye candy, monetization, and dark patterns, and too little on the things people actually like to see - the things that made us buy games and software in the old days.<p>(This holds in the gaming world too, especially in the past few years when almost no game studio develops its own engine: assets don't look more detailed than what was used 3 years ago, stories seem hastily written, and it feels like 80% of developers' time is spent making cosmetic items for purchase which often cost more than the base game.)<p>Also, somehow we spend lots of time researching UX and developing tutorials (remember when software had the "?" button next to the close button and no software "tutorials" were needed?), and yet all the games and software are harder to learn than what we had in the 90s and 00s.</p>
]]></description><pubDate>Tue, 09 Dec 2025 22:02:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46211359</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46211359</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46211359</guid></item><item><title><![CDATA[New comment by black3r in "Warner Bros Begins Exclusive Deal Talks With Netflix"]]></title><description><![CDATA[
<p>> HBO Max is coming to EU in Jan and UK sometime next year finally<p>This is a very misleading sentence. HBO Max has been available in 14 of the 27 EU countries since 2022, and by now it's available in 22 of 27. Four of the remaining countries are covered by Sky, with which they signed an exclusive distribution agreement (valid until 2025) back in 2019 - even before HBO Max launched in the USA.</p>
]]></description><pubDate>Fri, 05 Dec 2025 12:23:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46160332</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46160332</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46160332</guid></item><item><title><![CDATA[New comment by black3r in "Search tool that only returns content created before ChatGPT's public release"]]></title><description><![CDATA[
<p>Well, there's also the fact that the GPT-3 API was released in June 2020, and its writing capabilities were essentially on par with ChatGPT's initial release. It was just a bit harder to use because it wasn't yet trained to follow instructions; it only worked as a very good "autocomplete" model, so prompting was a bit different, and you couldn't do things like "rewrite this existing article in your own words" at all. But if you just wanted to write some bullshit SEO spam from scratch, it was already as good as ChatGPT would be 2 years later.</p>
]]></description><pubDate>Mon, 01 Dec 2025 10:58:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46105943</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46105943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46105943</guid></item><item><title><![CDATA[New comment by black3r in "Post-mortem of Shai-Hulud attack on November 24th, 2025"]]></title><description><![CDATA[
<p>While I agree with the general sentiment that lots of things about GH Actions don't make sense, when you actually look at what the vulnerability was, you'll find that for lots of your questions it wasn't GitHub Actions' fault.<p>This is the vulnerable workflow in question: <a href="https://github.com/PostHog/posthog/blob/c60544bc1c07deecf3368b781e76b988866e8dc6/.github/workflows/auto-assign-reviewers.yml" rel="nofollow">https://github.com/PostHog/posthog/blob/c60544bc1c07deecf336...</a><p>> Why are actions configured per branch?<p>This workflow uses the `pull_request_target` trigger, where the actions are configured by the branch you're merging the PR into, which should be safe - the attacker can't modify the YAML the actions are running.<p>> Why do workflows have such strong permissions?<p>What permissions the workflow runs with is irrelevant here, because the workflow runs the JS script with a custom access token instead of the permissions associated with the GH Actions runner by default.<p>> Why is security almost impossible to achieve instead of being the default?<p>The default for `pull_request_target` is to check out the branch you're trying to merge into (which again should be safe, as it doesn't contain the attacker's files), but this workflow explicitly checks out the attacker's branch on line 22.</p>
]]></description><pubDate>Sun, 30 Nov 2025 00:40:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46092324</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=46092324</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46092324</guid></item><item><title><![CDATA[New comment by black3r in "What happened to Apple's legendary attention to detail?"]]></title><description><![CDATA[
<p>Some of these issues are excusable by saying they "added too many features too fast" (especially the inconsistencies the article begins with), but many of them were simply caused by Liquid Glass becoming a thing while some "less important" apps didn't get a proper UX test after switching to the Liquid Glass design (the whole latter half of the article)...<p>And that's not excusable: every feature should have a maintainer who knows that a large framework update like Liquid Glass can break basically anything, re-tests the app under every scenario they can think of (and as "the maintainer" they should know all the scenarios), and pushes to fix any bugs found...<p>Also, a company as big as Apple should eat its own dogfood and have its employees use the beta versions to find as many bugs as possible. If every Apple employee had used the beta on their own personal computer before release, I can't realistically imagine how the "Electron app slowing down Tahoe" issue wouldn't have been discovered before the global release...</p>
]]></description><pubDate>Thu, 23 Oct 2025 22:42:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=45688328</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=45688328</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45688328</guid></item><item><title><![CDATA[New comment by black3r in ""Vibe code hell" has replaced "tutorial hell" in coding education"]]></title><description><![CDATA[
<p>While I agree with the premise that "vibe code hell" has replaced "tutorial hell", they are very much not the same. To expand on that, let's start with the fact that a good coder needs both "skill" and "knowledge".<p>Tutorials (at least the good ones) give you some knowledge - a good tutorial explains why they do what they do and how they do it - but they don't give you any skill; you just follow what other people do, and you don't learn how to build stuff on your own.<p>Vibe coding, on the other hand, gives you some skill - how to build stuff with AI - but doesn't give you the necessary coding knowledge: the AI makes all the decisions for you and doesn't explain why or how it did what it did; it just does it for you.<p>"I can't do anything without Cursor's help" is not really the problem. The problem is that vibe coders create stuff they don't understand. And I believe this is a much bigger problem than knowing how stuff works but not knowing how to use it.<p>Learning doesn't need to be "uncomfortable". Learning needs to be "challenging". There is a difference. The suggested approach here vaguely reminds me of the "you must first learn how to code in Notepad before using an IDE" approach.<p>The real takeaway should be: "you must first learn how to learn, before properly learning something". To learn something properly you need two things: to know what to learn, and to know when you've learned it. To know what to learn you need a curriculum - for coders this obviously depends on your specialization, and it can be more crude or more detailed, but you still need something to follow so that you can track your progress. "When you've learned it", for coders, is when you can explain what some code does to a colleague and answer questions about it. It doesn't matter whether you wrote it, someone else wrote it, or an AI wrote it. Understanding code you didn't write is even more important than understanding your own code.</p>
]]></description><pubDate>Fri, 10 Oct 2025 21:11:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45543823</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=45543823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45543823</guid></item><item><title><![CDATA[New comment by black3r in "Who owns Express VPN, Nord, Surfshark? VPN relationships explained (2024)"]]></title><description><![CDATA[
<p>On the other hand, if as an aspiring software engineer I was forced to do military service and had the option to do it as part of a military cybersecurity unit, I'd pick that over running around with weapons without blinking an eye.</p>
]]></description><pubDate>Tue, 07 Oct 2025 07:48:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45500467</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=45500467</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45500467</guid></item><item><title><![CDATA[New comment by black3r in "Benefits of choosing email over messaging"]]></title><description><![CDATA[
<p>What emails suck at is communication between multiple people in a work setting. That's why Slack, Teams, and the others emerged and got popular.<p>For example:<p>- When multiple people respond to the same email, the email "thread" branches out into a tree. If the tree branches multiple times, keeping track of all the replies gets messy.<p>- While most clients can show you the thread/tree structure of an email chain, that only works if you've been on every email in the chain. If you get CC'd later, you'll just see a single email, and navigating that is messy.<p>- Also, if you get CC'd later, you can't access any attachments from earlier in the chain.<p>- You can link to a Slack/Teams conversation, and as long as it's in a public channel, anyone with the link can get in on it (for example, a conversation about a proposed feature turns into a task: you describe the task briefly and link "more info in this Slack convo"). You can't do that with emails (well, I guess you could export a .eml file, but that has the same issue as getting CC'd later).<p>- When a thread no longer interests you, you can mute it in Slack/Teams. You can't realistically do that with emails, as most people will just hit "reply all".<p>- But also, sometimes people will hit "reply" instead of "reply all" by mistake, and a message doesn't get delivered to everyone in the thread.</p>
]]></description><pubDate>Sun, 05 Oct 2025 09:26:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45480190</link><dc:creator>black3r</dc:creator><comments>https://news.ycombinator.com/item?id=45480190</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45480190</guid></item></channel></rss>