<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: xurukefi</title><link>https://news.ycombinator.com/user?id=xurukefi</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 03:25:24 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=xurukefi" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by xurukefi in "Wikipedia deprecates Archive.today, starts removing archive links"]]></title><description><![CDATA[
<p>Sure, but maybe there are other ways to control Googlebot in a similar fashion. Maybe even with a pristine-looking User-Agent header.</p>
]]></description><pubDate>Fri, 20 Feb 2026 22:21:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094822</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=47094822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094822</guid></item><item><title><![CDATA[New comment by xurukefi in "Wikipedia deprecates Archive.today, starts removing archive links"]]></title><description><![CDATA[
<p>That's actually a really neat idea.</p>
]]></description><pubDate>Fri, 20 Feb 2026 22:06:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094671</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=47094671</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094671</guid></item><item><title><![CDATA[New comment by xurukefi in "Wikipedia deprecates Archive.today, starts removing archive links"]]></title><description><![CDATA[
<p>There are ways to work around this. I've just tested this: I used the URL inspection tool of Google Search Console to fetch a URL from my website, which I've configured to redirect to a paywalled news article. Turns out the crawler follows that redirect and gives me the full source code of the redirected website, without any paywall.<p>That's maybe a bit insane to automate at the scale of archive.today, but I figure they do something along these lines. It's a perfect imitation of Googlebot because it is literally Googlebot.</p>
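A minimal sketch of the redirect experiment described above, under stated assumptions: the `/bait` path and the paywalled target URL are made up for illustration, and the local check below only verifies that the bait URL answers with the 302 that Googlebot would then follow.

```python
# Sketch: a URL on your own site that 302-redirects to a (hypothetical)
# paywalled article. Pointing Search Console's URL inspection tool at /bait
# would make Googlebot follow this redirect and render the target page.
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAYWALLED_URL = "https://news.example.com/premium-article"  # hypothetical target

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/bait":
            self.send_response(302)
            self.send_header("Location", PAYWALLED_URL)
        else:
            self.send_response(404)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None  # surface the 302 instead of following it

server = HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Check locally that /bait answers with the redirect Googlebot would follow.
opener = urllib.request.build_opener(NoRedirect)
code, location = None, None
try:
    opener.open(f"http://127.0.0.1:{server.server_port}/bait")
except urllib.error.HTTPError as e:
    code, location = e.code, e.headers["Location"]
server.shutdown()
print(code, location)
```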
]]></description><pubDate>Fri, 20 Feb 2026 22:04:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094646</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=47094646</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094646</guid></item><item><title><![CDATA[New comment by xurukefi in "Wikipedia deprecates Archive.today, starts removing archive links"]]></title><description><![CDATA[
<p>But it is reliable in the sense that once it works for a site, it almost never fails.</p>
]]></description><pubDate>Fri, 20 Feb 2026 21:50:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094493</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=47094493</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094493</guid></item><item><title><![CDATA[New comment by xurukefi in "Wikipedia deprecates Archive.today, starts removing archive links"]]></title><description><![CDATA[
<p>Exactly. If I were an admin of a popular news website I would try to archive some articles and look at the access logs in the backend. This cannot be too hard to figure out.</p>
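The experiment could look something like this: archive one of your own articles, then count which clients fetched it right afterwards. The log lines, article path, and user agents below are invented sample data, and the regex assumes the common "combined" access-log format.

```python
# Count which user agents fetched a just-archived article, from sample
# access-log lines in Apache/nginx "combined" format (fields: ip, identity,
# user, [time], "request", status, size, "referer", "user-agent").
import re
from collections import Counter

LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<req>[^"]*)" '
    r'\S+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

sample_log = [  # invented log lines for illustration
    '203.0.113.7 - - [20/Feb/2026:21:15:48 +0000] '
    '"GET /articles/premium-piece HTTP/1.1" 200 51234 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '198.51.100.2 - - [20/Feb/2026:21:15:49 +0000] '
    '"GET /articles/premium-piece HTTP/1.1" 200 51234 "-" '
    '"Mozilla/5.0 (X11; Linux x86_64)"',
]

hits = Counter()
for line in sample_log:
    m = LOG_RE.match(line)
    if m and "/articles/premium-piece" in m.group("req"):
        hits[m.group("ua")] += 1

for ua, n in hits.most_common():
    print(n, ua)
```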
]]></description><pubDate>Fri, 20 Feb 2026 21:39:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094352</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=47094352</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094352</guid></item><item><title><![CDATA[New comment by xurukefi in "Wikipedia deprecates Archive.today, starts removing archive links"]]></title><description><![CDATA[
<p>Because it works too reliably. Imagine what that would entail: managing thousands of accounts. You would need to strip the account details from archived pages <i>perfectly</i>. Every time the website changes its code even slightly you are at risk of losing one of your accounts. It would constantly break and would be an absolute nightmare to maintain. I've personally never encountered such a failure on a paywalled news article. archive.today has managed to give me a non-paywalled clean version every single time.<p>Maybe they use accounts for some special sites. But there is definitely some automated generic magic happening that manages to bypass the paywalls of news outlets. Probably something Googlebot related, because those websites usually give Google their news pages without a paywall, probably for SEO reasons.</p>
]]></description><pubDate>Fri, 20 Feb 2026 21:37:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094326</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=47094326</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094326</guid></item><item><title><![CDATA[New comment by xurukefi in "Wikipedia deprecates Archive.today, starts removing archive links"]]></title><description><![CDATA[
<p>Kinda off-topic, but has anyone figured out how archive.today manages to bypass paywalls so reliably? I've seen people claiming that they have a bunch of paid accounts that they use to fetch the pages, which is, of course, ridiculous. I figured that they have found an (automated) way to imitate Googlebot <i>really</i> well.</p>
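For context on why a mere User-Agent imitation is not enough: Google documents a verification procedure that sites use to tell a real Googlebot from an impostor — reverse-resolve the client IP, check the hostname is under googlebot.com or google.com, then forward-resolve that hostname and confirm it maps back to the same IP. A sketch of that check:

```python
# Googlebot verification as documented by Google: reverse DNS, suffix check,
# then forward-confirm. A spoofed User-Agent header alone passes none of this.
import socket

def is_real_googlebot(ip: str) -> bool:
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # reverse DNS lookup
    except OSError:
        return False  # no PTR record: cannot be a verified Googlebot
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        forward_ips = socket.gethostbyname_ex(host)[2]  # forward-confirm
    except OSError:
        return False
    return ip in forward_ips

# A TEST-NET address has no Google PTR record, so it fails the check.
print(is_real_googlebot("192.0.2.1"))
```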
]]></description><pubDate>Fri, 20 Feb 2026 21:15:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094078</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=47094078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094078</guid></item><item><title><![CDATA[New comment by xurukefi in "IPv6 is not insecure because it lacks a NAT"]]></title><description><![CDATA[
<p>I hate NAT with a passion. It's a terrible technology, whose disruptive nature has probably prevented any novelty on the transport layer. But this article is oversimplifying things.<p>It is well known that NAT is not meant for security and that NAT is not a firewall. But one cannot deny that it implicitly brings some "default" security to the table. With NAT it's basically impossible to get screwed over, because there is no meaningful practical way to allow inbound connections without the client explicitly defining them (port forwarding). With IPv6, you could have a lazy vendor that does no firewalling at all, ships a default-allow policy, or ships a buggy firewall. With NAT that is not possible. There is no lazy or buggy NAT implementation that allows inbound connections to your entire network, because it is technically not possible. When a NATting device receives a packet with a destination port that has not previously been opened by a client, it does not drop the packet because of a vendor's decision. It drops the packet because there is simply no other option, due to the nature of NAT. That is what people mean when they talk about the inherent "security" of NAT.<p>Again, NAT is terrible. We need to finally get rid of IPv4 globally, and all the NATting that comes with it. But let's keep to the facts.</p>
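The "no other option but to drop" point can be made concrete with a toy model (addresses and ports below are made up; real NAT also keys on the remote endpoint and protocol, which this sketch omits):

```python
# Toy model of a NAT's translation table. Only outbound traffic creates
# entries; an unsolicited inbound packet matches no entry, so the device
# has no private host to deliver it to. Dropping it is the only possible
# behavior, not a vendor's policy decision.
nat_table = {}  # public port -> (private ip, private port)
_next_public_port = 40000

def outbound(private_ip: str, private_port: int) -> int:
    """A client behind the NAT opens a connection: allocate a public port."""
    global _next_public_port
    public_port = _next_public_port
    _next_public_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return public_port

def inbound(public_port: int):
    """A packet arrives from the internet: deliverable only via a mapping."""
    return nat_table.get(public_port)  # None: nowhere to send it, so drop

p = outbound("192.168.1.10", 51515)
print(inbound(p))      # reply traffic finds its way back to the client
print(inbound(12345))  # unsolicited connection attempt: None, dropped
```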
]]></description><pubDate>Wed, 21 Jan 2026 08:12:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46702573</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=46702573</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46702573</guid></item><item><title><![CDATA[New comment by xurukefi in "Python developers are embracing type hints"]]></title><description><![CDATA[
<p>For me, type hints are mainly useful because they're the only reliable way to get decent IDE auto-completion. Beyond that, they feel like a bolted-on compromise that goes against the spirit of Python. If you really need strict typing, you're probably better off using a statically typed language.</p>
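A small illustration of the auto-completion point (all names here are invented for the example):

```python
# With the "-> User" annotation, an IDE knows load_user(...) yields a User
# and can offer .name / .email as completions; without it, the IDE would
# have to infer the return type or guess.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    email: str

def load_user(user_id: int) -> User:
    # A real version would hit a database; stubbed out for the example.
    return User(name="Ada", email="ada@example.com")

u = load_user(42)
print(u.name)  # completion on u.<Tab> works because of the return hint
```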
]]></description><pubDate>Sun, 28 Sep 2025 09:38:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=45403040</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=45403040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45403040</guid></item><item><title><![CDATA[New comment by xurukefi in "Typst: A Possible LaTeX Replacement"]]></title><description><![CDATA[
<p>The LaTeX community is astonishingly good at gatekeeping. I can't think of another field where the adoption of a clearly superior modern alternative has been so slow. For some reason, they seem to take pride in clinging to a 50-year-old typesetting system: with its bloated footprint, sluggish compilation, incomprehensible error messages, and a baroque syntax that nobody truly understands. People have simply learned just enough to make it work, and now they treat that fragile familiarity as a virtue.</p>
]]></description><pubDate>Sat, 27 Sep 2025 18:44:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45398388</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=45398388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45398388</guid></item><item><title><![CDATA[New comment by xurukefi in "A Linux kernel syscall implementation tracker"]]></title><description><![CDATA[
<p>removed</p>
]]></description><pubDate>Sun, 21 Jul 2024 07:40:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=41023248</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=41023248</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41023248</guid></item><item><title><![CDATA[New comment by xurukefi in "YouTube's next move might make it virtually impossible to block ads"]]></title><description><![CDATA[
<p>> The question then becomes, whether Adblockers could use this information to skip the ads. It's a cat and mouse game.<p>I wouldn't call it a cat and mouse game, because nothing from a technical point of view prevents ad blockers from using this information to skip ads. Unless YouTube completely gets rid of the concept of timestamps for their videos, they will always lose this battle.</p>
]]></description><pubDate>Thu, 13 Jun 2024 07:26:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=40666924</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=40666924</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40666924</guid></item><item><title><![CDATA[New comment by xurukefi in "YouTube's next move might make it virtually impossible to block ads"]]></title><description><![CDATA[
<p>> For one thing, this approach seems to inherently conflict with the fact that you can link directly to a particular timestamp in a YouTube video, either in an external link using the `&t=...` URL parameter, or by just including a timestamp in a YouTube comment.<p>The more I think about this, the more I believe that this is literally the <i>only</i> reason that ad blocking cannot be meaningfully defeated for video on demand. Because of the concept of referencing a fixed point in the video by a timestamp, there will always need to be a mechanism to offset the timestamp with respect to the injected ads, which, in turn, gives ad blockers the ability to find out exactly where the ads are.</p>
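The constraint can be sketched directly (the ad schedule below is hypothetical, and real players would receive this mapping in some serialized form): whatever data lets the player translate an original timestamp like `?t=90` into a position in the ad-spliced stream also tells a blocker exactly which spans to skip.

```python
# Ad breaks as (insertion point in the original video, ad length), seconds.
AD_BREAKS = [(30.0, 15.0), (120.0, 20.0)]  # hypothetical schedule

def original_to_stream(t: float) -> float:
    """What the player needs: map an original timestamp (?t=...) to the
    ad-spliced stream by adding the length of every ad inserted before it."""
    offset = sum(length for point, length in AD_BREAKS if point <= t)
    return t + offset

def ad_ranges_in_stream():
    """What a blocker recovers from the very same data: stream-time spans
    occupied by ads, ready to be skipped."""
    ranges, offset = [], 0.0
    for point, length in sorted(AD_BREAKS):
        start = point + offset
        ranges.append((start, start + length))
        offset += length
    return ranges

print(original_to_stream(90.0))   # 90s of content sits after one 15s ad
print(ad_ranges_in_stream())
```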
]]></description><pubDate>Thu, 13 Jun 2024 07:18:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=40666877</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=40666877</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40666877</guid></item><item><title><![CDATA[New comment by xurukefi in "YouTube's next move might make it virtually impossible to block ads"]]></title><description><![CDATA[
<p>> What does "server side injection" actually mean?<p>The way ads usually work is that they are separate video files that are fetched by the YouTube client (e.g., the browser) and then displayed to the user. Ad blockers modify the content of the website so that the URLs to those ads (usually embedded in some JSON object from an API endpoint or something like that) get removed and no ads are displayed.<p>Server-side injection in this context means that the server renders a video file on demand that contains the original clean video plus a few ads here and there. Blocking the ads is now much harder, because you cannot simply manipulate API responses containing references to those ads; there is only this one video file. Instead you would need to implement a mechanism that skips those ads in the player.<p>AFAIK server-side injection is already done on Twitch for live streams, where blocking ads is basically impossible because you cannot skip anything in a live stream. I think the best solution on Twitch to get rid of ads is to use a VPN/proxy in a country where no ads are delivered for contractual reasons.</p>
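A toy version of the client-side approach described above, the one that server-side injection defeats (the field names and URLs are invented; real API responses are far more convoluted):

```python
# The API response carries ad entries alongside the content URL; the blocker
# filters them out before the player ever sees them. With server-side
# injection there are no such entries to strip, only one combined stream.
import json

api_response = json.dumps({
    "video": {"url": "https://cdn.example.com/video/abc123.mp4"},
    "ads": [
        {"url": "https://ads.example.com/pre-roll.mp4", "slot": "pre"},
        {"url": "https://ads.example.com/mid-roll.mp4", "slot": "mid"},
    ],
})

def strip_ads(raw: str) -> str:
    """Remove every ad reference from the response before the player runs."""
    data = json.loads(raw)
    data["ads"] = []
    return json.dumps(data)

cleaned = json.loads(strip_ads(api_response))
print(cleaned["ads"])            # no ads left to play
print(cleaned["video"]["url"])   # the content itself is untouched
```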
]]></description><pubDate>Thu, 13 Jun 2024 07:08:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=40666829</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=40666829</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40666829</guid></item><item><title><![CDATA[New comment by xurukefi in "Show HN: Sum (algebraic) types for C in one 100 line header"]]></title><description><![CDATA[
<p>It's a nice idea, but I don't think it adds enough clarity to the code to justify the messy compiler warnings and errors that this kind of preprocessor abuse will eventually cause.</p>
]]></description><pubDate>Tue, 28 May 2024 11:55:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=40499791</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=40499791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40499791</guid></item><item><title><![CDATA[New comment by xurukefi in "Microsoft PlayReady – Complete Client Identity Compromise"]]></title><description><![CDATA[
<p>With PlayReady, as with any other DRM scheme really, there are different tiers. There is SL2000, which is done completely in software (whitebox crypto), and there is SL3000, which does require a TEE. Which tier is required for which type of content is driven by streaming-provider or studio requirements. I think it is pretty common to allow content up to 1080p to be used with whitebox crypto, whereas 4k+ content will require hardware DRM.</p>
]]></description><pubDate>Thu, 09 May 2024 21:01:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=40312927</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=40312927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40312927</guid></item><item><title><![CDATA[New comment by xurukefi in "Microsoft PlayReady – Complete Client Identity Compromise"]]></title><description><![CDATA[
<p>The "client" whose "identity" is abused here is not an end user. A "client" in this context is a program or library that talks to the license servers and receives the content decryption keys. On my Windows machine I see a "Windows.Media.Protection.PlayReady.dll", which I guess is the client that they cracked. Maybe there are also other clients that are widely accepted by license servers.<p>The attack essentially means that they could write a program themselves that acts as "Windows.Media.Protection.PlayReady.dll" to get decryption keys from a server. What will happen now is that Microsoft will deprecate the client and release a new one with new obfuscation and new keys. The license servers will start rejecting the old cracked client. And then people will crack the new client. And the cycle continues.</p>
]]></description><pubDate>Thu, 09 May 2024 14:35:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=40308655</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=40308655</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40308655</guid></item><item><title><![CDATA[New comment by xurukefi in "Tips on how to structure your home directory (2023)"]]></title><description><![CDATA[
<p>Reading the comments here makes me feel guilty. I'm sitting on probably a few hundred files and folders called something like tmp, tmp1, foo, foo23, foobar, testxyz, etc...
They probably all hold very irrelevant stuff and are safe to delete, and I have yet to resort to those files for rescue, but you never know! Every now and then I collect them and put them in an archive folder. I'm now at "archive10".</p>
]]></description><pubDate>Fri, 19 Apr 2024 15:04:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=40087727</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=40087727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40087727</guid></item><item><title><![CDATA[New comment by xurukefi in "The xz attack shell script"]]></title><description><![CDATA[
<p>Since I'm a bit late to the party and feeling somewhat overwhelmed by the multitude of articles floating around, I wonder: Has there been any detailed analysis of the actual injected object file? Thus far, I haven't come across any, which strikes me as rather peculiar given that it's been a few days.</p>
]]></description><pubDate>Wed, 03 Apr 2024 15:05:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=39918438</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=39918438</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39918438</guid></item><item><title><![CDATA[New comment by xurukefi in "Decompilation of Paper Mario for N64"]]></title><description><![CDATA[
<p>Thanks for the insight. The fact that Paper Mario uses optimization flags makes this project even more fascinating. Great work.</p>
]]></description><pubDate>Sun, 14 Jan 2024 10:15:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=38989129</link><dc:creator>xurukefi</dc:creator><comments>https://news.ycombinator.com/item?id=38989129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38989129</guid></item></channel></rss>