<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Nathan2055</title><link>https://news.ycombinator.com/user?id=Nathan2055</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 21:34:27 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Nathan2055" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Nathan2055 in "Dav2d"]]></title><description><![CDATA[
<p>The funniest part about WordPress is that you can usually achieve a 50% speed boost or more by adding a plugin that just minifies and caches the ridiculous number of dynamic CSS and JS files that most themes and plugins add to every page. Set those up with HTTP 103 Early Hints preload headers (so the browser can start sending subresource requests in the background before the HTML is even sent out, exactly the kind of thing HTTP/2 and /3 were designed to make possible) and then throw Cloudflare or another decent CDN on top, and you're suddenly getting TTFBs much closer to a more "modern" stack.<p>The bizarre thing is that pretty much no CMS, even the "new" ones, seems to automate all of that by default. None of those steps is that difficult to implement, and each provides a serious speed boost to everything from WordPress to MediaWiki in my experience, and yet the only service that seems to get close to offering it is Cloudflare.<p>Even then, Cloudflare's tooling only works at its best if you're already emitting minified and compressed files and custom-written preload headers on the origin side, since the hit of decompressing all the origin traffic to make those adjustments and analyses is way worse for performance than just forwarding your compressed responses directly, which is why they removed Auto Minify[1] and encourage sending pre-compressed Brotli level 11 responses from the origin[2] so people on recent browsers get pass-through compression without extra cycles being spent on Cloudflare's servers.<p>The solution seems pretty clear: aim to get as much stuff served statically, preferably pre-compressed, as you can (a rough sketch of that origin-side step is at the end of this comment). But it's weird that actually implementing it is still a manual process on most CMSes, when it shouldn't be that hard to make it a standard feature.<p>And as for Git web interfaces, the correct solution is to require logins to view complete history. Nobody likes saying it, nobody likes hearing it. But Git is not efficient enough on its own to handle the constant bombardment of random history paginations and diffs that AI crawlers seem to love. It wasn't an issue before, because old crawlers for things like search engines were smart enough to ignore those types of pages, or at least to accept it when the sysadmin said they should be ignored. 
AI crawlers have no limits, ignore signals from site operators, make no attempts to skip redundant content, and in general are very dumb about how they send requests (this is a large part of why Anubis works so well; it's not a particularly complex or hard-to-bypass proof-of-work system[3], but AI bots genuinely don't care about anything but consuming as many HTTP 200s as a server can return, and they give up at the slightest hint of pushback (though they do at least try randomizing IPs and User-Agents, since those are effectively zero-cost to attempt)).<p>[1]: <a href="https://community.cloudflare.com/t/deprecating-auto-minify/655677" rel="nofollow">https://community.cloudflare.com/t/deprecating-auto-minify/6...</a><p>[2]: <a href="https://blog.cloudflare.com/this-is-brotli-from-origin/" rel="nofollow">https://blog.cloudflare.com/this-is-brotli-from-origin/</a><p>[3]: <a href="https://lock.cmpxchg8b.com/anubis.html" rel="nofollow">https://lock.cmpxchg8b.com/anubis.html</a> but see also <a href="https://news.ycombinator.com/item?id=45787775">https://news.ycombinator.com/item?id=45787775</a> and then <a href="https://news.ycombinator.com/item?id=43668433">https://news.ycombinator.com/item?id=43668433</a> and <a href="https://news.ycombinator.com/item?id=43864108">https://news.ycombinator.com/item?id=43864108</a> for how it's working in the real world. Clearly Anubis actually <i>does</i> work, given testimonials from admins and wide deployment numbers, but that can only mean that AI scrapers aren't actually implementing effective bypass measures. Which does seem pretty in line with what I've heard about AI scrapers, summarized well in <a href="https://news.ycombinator.com/item?id=43397361">https://news.ycombinator.com/item?id=43397361</a>, in that they are basically making no attempt to actually optimize how they're crawling. The general consensus seems to be that if they were going to crawl optimally, they'd just pull down a copy of Common Crawl like every other major data analysis project has done for the last two decades, but all the AI companies are so desperate to get just <i>slightly</i> more training data than their competitors that they're repeatedly crawling near-identical Git diffs just on the off-chance they reveal some slightly different permutation of text to use. This is also why open source models have been able to almost keep pace with the state-of-the-art models coming out of the big firms: they're just designing way more efficient training processes, while the big guys are throwing hardware and crawlers at the problem in the desperate hope that they can will it into an Amazon model instead of a Ben and Jerry’s model[4].<p>[4]: <a href="https://www.joelonsoftware.com/2000/05/12/strategy-letter-i-ben-and-jerrys-vs-amazon/" rel="nofollow">https://www.joelonsoftware.com/2000/05/12/strategy-letter-i-...</a> - still probably the single greatest blog post ever written, 26 years later.</p>
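<p>Since I mentioned pre-compression above, here's roughly what that origin-side step looks like. This is my own minimal sketch, not anything WordPress or Cloudflare ships; it assumes the third-party "brotli" package, and the "public" directory and file extensions are just placeholders for whatever your minify step outputs:<p><pre><code>import brotli  # third-party "brotli" package (pip install brotli)
import gzip
from pathlib import Path

ASSET_DIR = Path("public")  # hypothetical output directory of the minify step

for asset in ASSET_DIR.rglob("*"):
    if not asset.is_file() or asset.suffix not in {".css", ".js", ".svg", ".html"}:
        continue
    data = asset.read_bytes()
    # Brotli level 11 is slow, but that's fine at build time; the web server
    # then serves the .br file whenever the client sends Accept-Encoding: br.
    asset.with_name(asset.name + ".br").write_bytes(brotli.compress(data, quality=11))
    # Keep a gzip fallback for older clients.
    asset.with_name(asset.name + ".gz").write_bytes(gzip.compress(data, compresslevel=9))
</code></pre>
<p>Point your web server at the pre-compressed variants (most common servers have an option for serving pre-compressed files) and the origin never spends a cycle compressing on the fly, which is exactly what Cloudflare's pass-through behavior wants.</p>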
]]></description><pubDate>Sun, 03 May 2026 09:53:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47995299</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=47995299</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47995299</guid></item><item><title><![CDATA[New comment by Nathan2055 in "“This is not the computer for you”"]]></title><description><![CDATA[
<p>> And because that "someone" isn't a bigcorp (i.e. Microsoft) wanting to do a co-marketing push, but just FOSS people gradually building something but never quite "launching" a 1.0 of it — Apple just "acknowledged" it quietly, at developer conferences, exposing it only via developer-centric CLI tooling, rather than with the sort of polished UI experience they would need if Microsoft was trying to convince Joe Excel User to dual-boot Windows on their Apple Silicon MBP.<p>It's also important to remember that Microsoft was in the middle of their Qualcomm exclusivity deal at the time of the M1's release, and thus Windows on ARM wasn't available on anything other than a few select devices, or through unofficial use of Insider builds.<p>That deal didn't actually expire until 2024[1], at which point Windows on ARM finally started to be sold in an official capacity, with stable builds widely available.<p>It's entirely possible, though unconfirmed, that Apple was intentionally leaving the door open for "Boot Camp 2", and Microsoft simply never took them up on the offer, either because they were stuck in a deal made prior to the M1's release that prevented it, or because they no longer saw a financial benefit to being able to sell Windows to Mac users (possibly since Windows license sales are effectively a rounding error to Microsoft at this point; they make way more off of subscription services and/or Office, all of which are already available on macOS without having to dual-boot Windows).<p>[1]: <a href="https://www.tomshardware.com/pc-components/cpus/windows-on-arm-may-be-a-thing-of-the-past-soon-arm-ceo-confirms-qualcomms-exclusivity-agreement-with-microsoft-expires-this-year" rel="nofollow">https://www.tomshardware.com/pc-components/cpus/windows-on-a...</a></p>
]]></description><pubDate>Fri, 13 Mar 2026 08:23:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47361861</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=47361861</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47361861</guid></item><item><title><![CDATA[New comment by Nathan2055 in "News publishers limit Internet Archive access due to AI scraping concerns"]]></title><description><![CDATA[
<p>That already exists: it's called Common Crawl[1], and it's a huge reason why none of this happened prior to LLMs coming on the scene, back when people were crawling data for specialized search engines or academic research purposes.<p>The problem is that AI companies have decided that they want instant access to all data on Earth the moment that it becomes available somewhere, and have the infrastructure behind them to actually try to make that happen. So they're ignoring signals like robots.txt, not even checking whether the data is actually useful to them (they're not getting anything helpful out of recrawling the same search results pagination in every possible permutation, but that won't stop them from trying, and knocking everyone's web servers offline in the process) the way even the most aggressive search engine crawlers did, and are just bombarding every single publicly reachable server with requests on the off chance that some new data fragment becomes available and they can ingest it first.<p>This is also, coincidentally, why Anubis is working so well. Anubis kind of sucks, and in a sane world where these companies had real engineers working on the problem, they could bypass it on every website in just a few hours by precomputing tokens.[2] But...they're not. Anubis is actually working quite well at protecting the sites it's deployed on despite its relative simplicity (a simplified sketch of the kind of proof of work involved is at the end of this comment).<p>It really does seem to indicate that LLM companies want to just throw endless hardware at literally any problem they encounter and brute force their way past it. They really aren't dedicating real engineering resources towards any of this stuff, because if they were, they'd be coming up with way better solutions. (Another classic example is Claude Code apparently using <i>React</i> to render a terminal interface. That's like using the space shuttle for a grocery run: utterly unnecessary, and completely solvable.) That's why DeepSeek was treated like an existential threat when it first dropped: they actually got some engineers working on these problems, and made serious headway with very little capital expenditure compared to the big firms. Of course the incumbents started freaking out: their whole business model is based on the idea that burning comical amounts of money on hardware is the only way we can actually make this stuff work!<p>The whole business model backing LLMs right now seems to be "if we burn insane amounts of money now, we can replace <i>all labor everywhere</i> with robots in like a decade", but if it turns out that either of those things isn't true (either the tech can be improved without burning hundreds of billions of dollars, or the tech ends up being unable to replace the vast majority of workers), all of this is going to fall apart.<p>Their approach to crawling is just a microcosm of the whole industry right now.<p>[1]: <a href="https://en.wikipedia.org/wiki/Common_Crawl" rel="nofollow">https://en.wikipedia.org/wiki/Common_Crawl</a><p>[2]: <a href="https://fxgn.dev/blog/anubis/" rel="nofollow">https://fxgn.dev/blog/anubis/</a> and related HN discussion <a href="https://news.ycombinator.com/item?id=45787775">https://news.ycombinator.com/item?id=45787775</a></p>
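<p>For anyone who hasn't seen what Anubis actually asks a browser to do, here's a simplified sketch of that style of proof of work, based on the general description in the write-up in [2] rather than Anubis's actual code (the challenge string and difficulty here are made up):<p><pre><code>import hashlib
from itertools import count

def solve(challenge: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 of challenge+nonce starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in count():
        if hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest().startswith(target):
            return nonce

# Trivial for one real visitor, annoying for a crawler hitting millions of URLs;
# and, per the discussion above, the crawlers mostly just give up rather than
# spend a few hours scripting or precomputing their way around it.
print(solve("made-up-challenge-string"))
</code></pre>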
]]></description><pubDate>Sat, 14 Feb 2026 21:36:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47018637</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=47018637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47018637</guid></item><item><title><![CDATA[New comment by Nathan2055 in "TIL: Apple Broke Time Machine Again on Tahoe"]]></title><description><![CDATA[
<p>In the past, I've heard recommendations not to use remote Time Machine over SMB directly, but rather to create an APFS disk image on a remote server and then back up to that as if it's an external hard drive (a rough sketch of the setup is at the end of this comment).<p>Supposedly, doing that eliminates a lot of the flakiness specific to SMB Time Machine, and while I haven't tested it personally, I have used disk images over SMB on macOS Tahoe recently, and they actually work great (other than the normal underlying annoyances of SMB that everyone with a NAS is mostly used to at this point).<p>The new ASIF format for disk images added in Tahoe actually works very well for this sort of thing, and gives you the benefits of sparse bundle disk images without requiring specific support for them on the underlying file system.[1][2] As long as you're on a file system that supports sparse files (I think pretty much every currently used file system except FAT32, exFAT, and very old implementations of HFS+), you get almost native performance out of the disk image now. (Although, again, that only fixes the disk image overhead; you still have to work around the usual SMB weirdness unless you can get another remote file system protocol working.)<p>[1]: <a href="https://eclecticlight.co/2025/06/12/macos-tahoe-brings-a-new-disk-image-format/" rel="nofollow">https://eclecticlight.co/2025/06/12/macos-tahoe-brings-a-new...</a><p>[2]: <a href="https://eclecticlight.co/2025/09/17/should-you-use-tahoes-new-asif-disk-images/" rel="nofollow">https://eclecticlight.co/2025/09/17/should-you-use-tahoes-ne...</a></p>
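<p>For reference, here's a rough sketch of the sparse bundle version of that setup; this is my own illustration, not Apple documentation, the share path, image size, and volume name are all placeholders, and on Tahoe you could presumably create an ASIF image instead of a sparse bundle:<p><pre><code>import subprocess

NAS_SHARE = "/Volumes/NAS"  # placeholder: the SMB share, already mounted
IMAGE = f"{NAS_SHARE}/tm-backup.sparsebundle"

# Create a sparse bundle formatted as APFS; it only consumes space as it fills.
subprocess.run(["hdiutil", "create", "-size", "1t", "-type", "SPARSEBUNDLE",
                "-fs", "APFS", "-volname", "TM Backup", IMAGE], check=True)

# Mount the image; it now shows up like a local external drive.
subprocess.run(["hdiutil", "attach", IMAGE], check=True)

# Point Time Machine at the mounted volume (needs root in practice).
subprocess.run(["sudo", "tmutil", "setdestination", "/Volumes/TM Backup"], check=True)
</code></pre>
<p>Time Machine then writes into the image, and the SMB layer only ever sees ordinary file I/O against the bundle's band files, which is where most of the flakiness goes away.</p>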
]]></description><pubDate>Sun, 01 Feb 2026 20:40:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46849151</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=46849151</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46849151</guid></item><item><title><![CDATA[New comment by Nathan2055 in "Signal Secure Backups"]]></title><description><![CDATA[
<p>That's really surprising to me.<p>iOS has had pretty decent audio format support for a few years now: even though you can't directly import FLAC files into iTunes/Music, they've been supported in the OS itself since 2017 and play fine both in Files and in Safari. The other big mainstream formats (WAV, AIFF, MP3, AAC, and ALAC) have been supported for years, and even Opus finally got picked up in 2021.<p>About the only non-niche audio format that isn't supported natively on Apple platforms at this point is Vorbis, which was fully superseded by Opus well over a decade ago. Even then, I believe it's possible to get Vorbis support in iOS apps using various media libraries, although I'm sure Apple frowns upon it.<p>I'd really love to know what's causing that incompatibility.</p>
]]></description><pubDate>Mon, 08 Sep 2025 19:25:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=45172739</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=45172739</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45172739</guid></item><item><title><![CDATA[New comment by Nathan2055 in "Signal Secure Backups"]]></title><description><![CDATA[
<p>This has been the advantage, and the drawback, of Signal's security model from the start.<p>Everything on Signal (at least in the "original" design from a few years ago; this has started to be adjusted with the introduction of usernames, and now backups and eventually syncing) is end-to-end encrypted between users, with your original phone acting as the primary communication node doing the encryption. Any other devices like desktops and tablets that get added are replicating from the original node rather than receiving new messages straight from the network.<p>This offers substantial privacy and security guarantees, at the cost of convenience and portability. It can be contrasted with something like iMessage, before Messages in iCloud was implemented, where every registered device is a full node that receives every new message directly, as long as they're connected at the time that it's sent.<p>Today's addition brings Signal to where iMessage was originally: each device backs up its own messages, but those backups don't sync with one another. Based on the blog post, the goal is to eventually get Signal to where iMessage is today now that Messages in iCloud is available: all of the devices sync their own message databases with a version in the cloud, which is also end-to-end encrypted with the same guarantees as the messages themselves, but which ensures that every device ends up with the same message history regardless of whether they're connected to receive all of the messages as they come in. Then, eventually, they seem to intend to take it one step further and allow arbitrary sync locations for that "primary replica" outside of their own cloud storage, which goes even further than Apple's implementation does.<p>If done well, I actually quite like the vision they're going for here. I'm still frustrated that they wouldn't just port the simple file backup feature from Android to the other platforms, even as just a stopgap until this is finished, but I think that the eventual completion of this feature as described will solve all of my major concerns with Signal's current storage implementation.</p>
]]></description><pubDate>Mon, 08 Sep 2025 19:15:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=45172591</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=45172591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45172591</guid></item><item><title><![CDATA[New comment by Nathan2055 in "The MacBook has a sensor that knows the exact angle of the screen hinge"]]></title><description><![CDATA[
<p>Okay, so here's the argument I've heard: if arbitrary replacements of the lid sensor were possible, it would be feasible to create a tampered sensor that failed to detect the MacBook closing, thus preventing it from entering sleep mode.<p>This could then be combined with some software on the machine to turn a MacBook into a difficult-to-detect recording device, bypassing protections such as the microphone and camera privacy alerts, since the MacBook would be closed but not sleeping.<p>Additionally, since the auto-locking is also tied to triggering sleep mode, it would be possible to gain access to a powered-off device, swap in a tampered sensor, return it, wait for the user to close the lid expecting the machine to sleep and lock, and then steal it back, completely unlocked with full access to the drive.<p>Are these absolutely ridiculous, James Bond-tier threat assessments? Yes, absolutely. But they're both totally feasible (and not too far off from exploits I've heard about in real life), and both are completely negated by simply serializing the lid sensor.<p>Should Apple include an option, buried in recoveryOS behind authentication and disk unlock steps like the options to allow downgrades and to allow kernel extensions, that enables arbitrary and "unauthorized" hardware replacements like this? Yes, they really should. If implemented correctly, it would not harm the security profile of the system while still preventing the aforementioned exploits.<p>There are good security reasons for a lot of what Apple does. They just tend to push a <i>little</i> too far beyond mitigating those security issues into doing things which start to qualify as vendor lock-in.<p>I really wish people would start to recognize where the line <i>should</i> be drawn, rather than organizing into "security of the walled garden" versus "freedom of choice" groups whenever these things get brought up. You can have both! The dichotomy itself is a fiction perpetuated to defend the status quo.</p>
]]></description><pubDate>Mon, 08 Sep 2025 00:52:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45163725</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=45163725</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45163725</guid></item><item><title><![CDATA[New comment by Nathan2055 in "The first Media over QUIC CDN: Cloudflare"]]></title><description><![CDATA[
<p>Hmm…do you have the in-browser DNS over HTTPS resolver enabled? I personally can't reproduce this, but I'm using DoH with 1.1.1.1.<p>I've noticed that both Chrome and Firefox tend to have less consistent HTTP/3 usage when using system DNS instead of the DoH resolver because a lot of times the browser is unable to fetch HTTPS DNS records consistently (or at all) via the system resolver.<p>Since HTTP/3 support on the server has to be advertised by either an HTTPS DNS record or a cached Alt-Svc header from a previous successful HTTP/2 or HTTP/1.1 connection, and the browsers tend to prefer recycling already open connections rather than opening new ones (even if they would be "upgraded" in that case), it's often much trickier to get HTTP/3 to be used in that case. (Alt-Svc headers also sometimes don't cache consistently, especially in Firefox in my experience.)<p>Also, to make matters even worse, the browsers, especially Chrome, seem to automatically disable HTTP/3 support if connections fail often enough. This happened to me when I was using my university's Wi-Fi a lot, which seems to block a large (but inconsistent) amount of UDP traffic. If Chrome enters this state, it stops using HTTP/3 entirely, and provides no reasoning in the developer tools as to why (normally, if you enable the "Protocol" column in the developer tools Network tab, you can hover over the listed protocol to get a tooltip explaining how Chrome determined the selected protocol was the best option available; this tooltip doesn't appear in this "force disabled" state). Annoyingly, Chrome also doesn't (or can't) isolate this state to just one network, and instead I suddenly stopped being able to use HTTP/3 at home, either. The only actual solution/override to this is to go into about:flags (yes, I know it's chrome://flags now, I don't care) and make sure that the option for QUIC support is <i>manually</i> enabled. Even if it's already indicated as "enabled by default", this doesn't actually reflect the browser's true state. Firefox also similarly gives up on HTTP/3, but its mechanism seems to be much less "sticky" than Chrome's, and I haven't had any consistent issues with it.<p>To debug further: I'd first try checking to see if EncryptedClientHello is working for you or not; you can check <a href="https://tls-ech.dev" rel="nofollow">https://tls-ech.dev</a> to test that. ECH requires HTTPS DNS record support, so if that shows as working, you can ensure that your configuration is able to parse HTTPS records (that site also only uses the HTTPS record for the ECH key and uses HTTP/1.1 for the actual site, so it's fairly isolated from other problems). Next, you can try Fastly's HTTP/3 checker at <a href="https://http3.is" rel="nofollow">https://http3.is</a> which has the benefit of only using Alt-Svc headers to negotiate; this means that the first load will always use HTTP/2, but you should be able to refresh the page and get a successful HTTP/3 connection. 
Cloudflare's test page at <a href="https://cloudflare-quic.com" rel="nofollow">https://cloudflare-quic.com</a> uses both HTTPS DNS records and an Alt-Svc header, so if you are able to get an HTTP/3 connection to it first try, then you know that you're parsing HTTPS records properly.<p>Let me know how those tests perform for you; it's possible there is an issue in Firefox but it isn't occurring consistently for everyone due to one of the many issues I just listed.<p>(If anyone from Cloudflare happens to be reading this, you should know that you have some kind of misconfiguration blocking <a href="https://cloudflare-quic.com/favicon.ico" rel="nofollow">https://cloudflare-quic.com/favicon.ico</a> and there's also a slight page load delay on that page because you're pulling one of the images out of the Wayback Machine via 
<a href="https://web.archive.org/web/20230424015350im_/https://www.cloudflare.com/img//nav/globe-lang-select-dark.svg" rel="nofollow">https://web.archive.org/web/20230424015350im_/https://www.cl...</a> when you should use an "id_" link for images instead so the Internet Archive servers don't have to try to rewrite anything, which is the cause of most of the delays you typically see from the Wayback Machine. (I actually used that feature along with Cloudflare Workers to temporarily resurrect an entire site during a failed server move a couple of years back; it worked splendidly as soon as I learned about the id_ trick.) Alternatively, you could also just switch that asset back to <a href="https://www.cloudflare.com/img/nav/globe-lang-select-dark.svg" rel="nofollow">https://www.cloudflare.com/img/nav/globe-lang-select-dark.sv...</a> since it's still live on your main site anyway, so there's no need to pull it from the Wayback Machine.)<p>I've spent <i>a lot</i> of time experimenting with HTTP/3 and its weird quirks over the past couple of years. It's a great protocol; it just has a lot of bizarre and weirdly specific implementation and deployment issues.</p>
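<p>If you want to check both advertisement mechanisms outside the browser yourself, here's a quick sketch (my own snippet, assuming the third-party dnspython package for the HTTPS-record half; cloudflare-quic.com is just the test host mentioned above):<p><pre><code>import urllib.request
import dns.resolver  # pip install dnspython (>= 2.1 for the HTTPS record type)

host = "cloudflare-quic.com"

# 1. HTTPS DNS record: what lets a browser pick HTTP/3 on the very first request.
for answer in dns.resolver.resolve(host, "HTTPS"):
    print("HTTPS record:", answer.to_text())  # look for alpn="h3"

# 2. Alt-Svc header: only discovered after a successful HTTP/1.1 or HTTP/2
# request, which is why Alt-Svc-only sites can never be HTTP/3 on first load.
with urllib.request.urlopen(f"https://{host}/") as resp:
    print("Alt-Svc:", resp.headers.get("alt-svc"))  # e.g. h3=":443"; may be None
</code></pre>
<p>If the first half fails but the second works, your resolver is the thing dropping HTTPS records, which matches the in-browser behavior I described.</p>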
]]></description><pubDate>Fri, 22 Aug 2025 23:34:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44991259</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=44991259</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44991259</guid></item><item><title><![CDATA[New comment by Nathan2055 in "Reddit will block the Internet Archive"]]></title><description><![CDATA[
<p>> What they're really afraid of is that people will read content using LLM inference and make all the ads and nags and "download the app for a crap experience" go away -- and never click on ads accidentally for an occasional ka-ching.<p>See, I don't think this is right either. Back during the original API protests, several people (including me!) pointed out that if the concern was really that third-party apps weren't contributing back to Reddit (which was a fair point: Apollo never showed ads of any kind, neither Reddit's nor their own) then a good solution would be to make using third-party apps require paying for Reddit Premium. Then they wouldn't have to audit all of the apps to ensure they were displaying ads correctly and would be able to collect revenue outside of the inherent limitations of advertising.<p>Theoretically, this should have been a straight win for Reddit, especially given the incredibly low income that they've apparently been getting from ads anyway (I can't find the report now so the numbers might not be exact, but I remember it being reported that Reddit was pulling in something like ~$0.60 per user per month versus Twitter's considerably better ~$8 per user per month and Meta's frankly mindblowing ~$50 per user per month), but it was immediately dismissed out of hand in favor of their way more complicated proposal that app developers audit their own usage and then pay Reddit back.<p>My initial thoughts were either that the Reddit API was so broken that they couldn't figure out how to properly implement the rate limits or payment gating needed for the other strategy (even now the API <i>still</i> doesn't have proper rate limits; they just commence legal action against anyone they find abusing it rather than figure out how to lock them out; the best they can really do is the sort of basic IP bans they're using here), or the Reddit higher-ups were so frustrated that Apollo had worked out a profitable business model before them that they just wanted to deploy a strategy targeted specifically at punishing them.<p>But it quickly became clear later that Reddit genuinely wasn't even thinking about third-party apps. They saw dollar signs from the AI boom, and realized that Reddit was one of the largest and most accessible corpora of generally high-quality text on a wide variety of topics, and AI companies were going to need that. Google showing an intense dependency on Reddit during the blackout didn't hurt either (yes, at this point I genuinely believe the blackout actually hurt more than it helped by giving Reddit further leverage to use on Google, which is why they were one of the first to sign a crawler deal afterwards).<p>So they decided to use any method they could think of to lock down access to the platform while keeping enough people around that the Reddit platform was still mostly decent enough to be usable for AI training, and pivoted much of their business to selling data. All of this while claiming, as they're still doing today with the Internet Archive move, that this is somehow a "privacy measure" meant to ensure deleted comments aren't being archived anywhere.<p>The same thing basically happened with Stack Exchange, except they had much less leverage over their community because the entire site was previously CC licensed and they didn't have any real authority to override that beyond making data access really annoying.<p>The good news is that it really does seem like "ingest everything" big-model AI is the least likely to survive at this point. 
Between ChatGPT scaling things down massively to save on costs with the GPT-5 update and the Chinese models somehow making do with less data and slower chips by just using better engineering techniques, I highly doubt these economics around AI are going to last. The bad news is that, between stuff like this and the GitHub restructuring today, I don't think Big Tech has any plan for how they're going to continue functioning in an economy that isn't entirely based on AI hype. And that's really concerning.</p>
]]></description><pubDate>Mon, 11 Aug 2025 20:44:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44869267</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=44869267</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44869267</guid></item><item><title><![CDATA[New comment by Nathan2055 in "TrackWeight: Turn your MacBook's trackpad into a digital weighing scale"]]></title><description><![CDATA[
<p>The infamous Dropbox comment[0] actually didn't even cite rsync; it recommended getting a remote FTP account, using curlftpfs to mount it locally, and then using SVN or CVS to get versioning support.<p>The double irony of that comment is that pretty much all of the technologies listed are obsolete now while Dropbox is still going strong: FTP has been mostly replaced with SFTP and rsync due to its lack of encryption and difficult-to-manage network architecture, direct mounting of remote hosts still happens, but it's more typical in my experience to have local copies of everything that are then synced up with the remote host to provide redundancy, and CVS and SVN have been pretty much completely replaced with Git outside of some specialist and legacy use cases.<p>The "evaluating new products" xkcd[1] is extremely relevant, as is the continued ultra-success of Apple: developing new technologies, and then turning around and marketing those technologies to people who aren't already working in the field, are effectively two completely different business models.<p>[0]: <a href="https://news.ycombinator.com/item?id=9224">https://news.ycombinator.com/item?id=9224</a>
[1]: <a href="https://xkcd.com/1497/" rel="nofollow">https://xkcd.com/1497/</a></p>
]]></description><pubDate>Mon, 21 Jul 2025 20:51:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44640274</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=44640274</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44640274</guid></item><item><title><![CDATA[New comment by Nathan2055 in "Someone at YouTube needs glasses"]]></title><description><![CDATA[
<p>The desktop issue was an intentional change that happened sometime in 2017 or so.<p>The original functionality of the quality selector was to throw out whatever video had been buffered and start redownloading the video in the newly selected quality. All well and good, but that causes a spinning circle until enough of the new video arrives.<p>The "new" functionality is to instead keep the existing quality video in the buffer and have all the <i>new</i> video coming in be set to the new quality. The idea is that you have the video playing, change the quality, and it keeps playing until, a few seconds later, the new buffer hits and you jump up to the new quality level. Combine that with the fact that YouTube only buffers a few seconds of video (a change made a few years prior to this; back in the Flash era YouTube would just keep buffering until you had the entire video loaded, but that was seen as a waste of both YouTube's bandwidth and the user's, since there was always the possibility of the user clicking off the video; the adoption of better connection speeds, more efficient video codecs, and widespread and expensive mobile data caps led to limited buffering being seen as the better behavior), and for most people, changing quality becomes a "transparent" operation that doesn't "interrupt" the video.<p>In general, it's a behavior that seems to come from the fairly widespread mid-2010s UX theory that it's better to degrade service or even freeze entirely than to show a loading screen of some kind. It can also be seen in Chrome sometimes on high-latency connections: in some cases, Chrome will just stop for a few moments while performing DNS resolution or opening the initial connections rather than displaying the usual "slow light gray" loading circle used on that step, seemingly because some mechanism within Chrome has decided that the requests will <i>probably</i> return quickly enough for it to not be an issue. YouTube Shorts on mobile also has similar behavior on slow connections: the whole video player will just freeze entirely until it can start playing the video with no loading indicator whatsoever. Another example is Gmail's old basic HTML interface versus the modern AJAX one: an article which I remember reading, but can't find now, found that for pretty much every use case the basic HTML interface was statistically faster to load, but users subjectively felt that the AJAX interface was faster, seemingly just because it didn't trigger a full page load when something was clicked on.<p>And, I mean, they're kind of right. It's nerds like us who get annoyed when the video quality isn't updated immediately; the average consumer would much rather have the video "instantly load" than have a guarantee that the video feed is the quality they actually selected. It's the same kind of thought process that led to the YouTube mobile app getting an unskippable splash screen animation last year; to the average person, it feels like the app loads much faster now. 
It doesn't, of course; it's just firing off the home page requests in the background while the locally available animation plays. But the user <i>sees</i> a thing rather than a blank screen while it loads, which tricks the brain into <i>thinking</i> it's loading faster.<p>This is also why Google's Lighthouse page loading speed algorithm prioritizes "Largest Contentful Paint" (how long it takes to get the biggest element on the page rendered), "Cumulative Layout Shift" (how much things move around on the page while loading), and "Time to Interactive" (how long until the user can start clicking buttons) rather than more accurate but "nerdy" indicators like Time to First Byte (how long until the server starts sending data) or Last Request Complete (how long until all of the HTTP requests on a page are finished; for most modern sites, this value is infinity thanks to tracking scripts).<p>People simply prefer for things to <i>feel</i> faster, rather than for things to actually <i>be</i> faster. And, luckily for Internet companies, the former is usually much easier to achieve than the latter.</p>
]]></description><pubDate>Wed, 30 Apr 2025 18:27:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=43849035</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=43849035</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43849035</guid></item><item><title><![CDATA[New comment by Nathan2055 in "Gemini Robotics"]]></title><description><![CDATA[
<p>> The actual scary stuff is the dilution of expertise, we contributed for a long time to share our knowledge for internet points (stack overflow, open source projects, etc), and it has been harvested by the AIs already, anyone that pays access to these services for tens of dollars a month can bootstrap really quickly and do what it might had needed years of expertise before.<p>What scares me more is the opposite of that: information scarcity leading to less accessible intelligence on newer topics.<p>I’ve completely stopped posting on Reddit since the API changes, and I was extremely prolific before[1] because I genuinely love writing about random things that interest me. I know I’m not the only one: anecdotally, the overall quality of content on Reddit has nosedived since the change, and while there doesn’t seem to be a drop in traffic or activity, data seems to indicate that the vast majority of activity these days is disposable meme content[2]. This seems to be because they’re desperately trying to stick recommendation algorithms everywhere they can, and those are heavily weighted toward disposable content since people view more of it. So even if there were just as many long discussion posts like before, they’re not getting surfaced nearly as often. And discussion quality when it does happen has noticeably dipped as well: the Severance subreddit has regularly gotten posts and comments where people question things that have already been fully explained in the series itself (not like subtext kind of things, like “a character looked at the camera and blatantly said that in the episode you’re talking about having just watched” things). Those would have been heavily downvoted years ago; now they’re the norm.<p>But if LLMs learn from the in-depth posting that used to be prominent across the Internet, and that kind of in-depth posting is no longer present, a new problem presents itself. If, let’s say, a new framework releases tomorrow and becomes the next big thing, where is ChatGPT going to learn how that framework works? Most new products and platforms seem to centralize their discussion on Discord, and that’s not being fed into any LLMs that I’m aware of. Reddit post quality has nosedived. Stack Overflow keeps trying to replace different parts of its Q&A system with weird variants of AI because “it’s what visitors expect to see these days.” So we’re left with whatever documentation is available on the open Internet, and a few mediocre-quality forum posts and Reddit threads.<p>An LLM might be able to pull together some meaning out of that data combined with the existing data it has. But what about the framework after that? And the language after that? There’s less and less information available each time.<p>“Model collapse” doesn’t seem to have panned out: as long as you have external human raters, you can use AI-generated information in training. (IIRC the original model collapse discussions were the result of AI attempting to rate AI-generated content and then feeding it right back in; that obviously didn’t work, since the rater models aren’t typically any better than the generator models.) But what if the “data wells” dry up eventually? 
They can kick the can down the road for a while with existing data (for example LLMs can relate the quirks of new languages to the quirks of existing languages, or text to image models can learn about characters from newer media by using what it already knows about how similar characters look as a baseline), but eventually quality will start to deteriorate without new high-quality data inputs.<p>What are they gonna do then when all the discussion boards where that data would originate are either gone or optimized into algorithmic metric farms like all the other social media sites?<p>[1]: <a href="https://old.reddit.com/user/Nathan2055" rel="nofollow">https://old.reddit.com/user/Nathan2055</a><p>[2]: I can’t find it now, but there was an analysis about six months ago that showed that since the change a significant majority of the most popular posts in a given month seem to originate from /r/MadeMeSmile. Prior to the API change, this was spread over an enormous number of subreddits (albeit with a significant presence by the “defaults” just due to comparative subscriber counts). While I think the subreddit distribution has gotten better since then, it’s still mostly passive meme posts that hit the site-wide top pages since the switchover, which is indicative of broader trends.</p>
]]></description><pubDate>Wed, 12 Mar 2025 21:41:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=43347940</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=43347940</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43347940</guid></item><item><title><![CDATA[New comment by Nathan2055 in "RIP botsin.space"]]></title><description><![CDATA[
<p>This is why I believe that Bluesky and the AT protocol are a significantly more attractive system than Mastodon and ActivityPub. Frankly, we’ve tried the kind of system ActivityPub offers before: a decentralized server network ultimately forming one big system, and the same problems have inevitably popped up every time.<p>XMPP tried to do it for chat. All the big players adopted it and then either realized that the protocol wasn’t complex enough for the features they wanted to offer or that it was much better financially to invest in a closed system. Sometimes both. The big providers split off into their own systems (remember, Google Talk/Hangouts/Chat and Apple iChat/FaceTime both started out as XMPP front-ends) and the dream of interconnected IMing mostly died.<p>RSS tried to do it for blogs. Everyone adopted it at first, but eventually content creators came to the realization that you can’t really monetize sending out full-text posts directly in any useful way without a click back to the originating site (mostly defeating the purpose), content aggregators realized that offering people the option to use any front-end they wanted meant that they couldn’t force profitable algorithmic sorts and platform lock-in, and users overwhelmingly wanted social features integrated into their link aggregators (which Google Reader was famously on the cusp of implementing before corporate opted to kill it in favor of pushing people to Google+; that could have potentially led to a very different Internet today if it had been allowed to release). The only big non-enthusiast use of RSS that survives is podcasts, and even those are slowly moving toward proprietary front-ends like Spotify.<p>Even all the way back to pre-Web protocols: IRC was originally a big network of networks where every server could talk to every other server. As the system grew, spam and other problems began to proliferate, and eventually almost all the big servers made the decision to close off into their own internal networks. Now the multi-server architecture of IRC is pretty much only used for load balancing.<p>But there are two decentralized systems that have survived unscathed: the World Wide Web over HTTP and email over SMTP. Why those two? I believe that it’s because those systems are based on federated <i>identities</i> rather than federated <i>networks</i>.<p>If you have a domain name, you can move the website attached to it to any publicly routable server and it still works. Nobody visiting the website even sees a difference, and nobody linking to your website has to update anything to stay “connected” to your new server. The DNS and URL systems just <i>work</i> and everyone just locates you automatically. The same goes for email: if you switch providers on a domain you control, all the mail still keeps being routed to you. You don’t have to notify anyone that anything has changed on your end, and you still have the same well-known name after the transition.<p>Bluesky’s killer feature is the idea of portable identities for social media. The whole thing just ties back to a domain name: either one that you own or a subdomain you get assigned from a provider (a rough sketch of how that handle resolution works is at the end of this comment). That means that picking a server isn’t something the average person needs to worry about: you can just use the default, easily change later if you want to, and your entire identity just moves with you.<p>If the server you’re on evaporates, the worst thing that you lose is your activity, and that’s only if you don’t maintain any backups somewhere else. 
For most people, you can just point your identity at a different server, upload a backup of your old data, and your followers don’t even know anything has changed. A sufficiently advanced client could probably even automate all of the above steps and move your whole identity elsewhere in one click.<p>Since the base-level object is now a user identity rather than a server, almost all of the problems with ActivityPub’s federation model go away. You don’t deal with blocking bad servers; you just block bad people (optionally using the same sorts of “giant list” mechanisms already available for places like Twitter). You don’t have to deal with your server operator getting themself blacklisted from the rest of the network. You don’t have to deal with your server operator declaring war on some other server operator and suddenly cutting you off from a third of your followers.<p>People just publish their posts to a server of their choice, others can fetch those posts from their server, the server in question can be moved wherever without affecting anything for those other users, and all of the front-end elements like feed algorithms, post display, following lists and block lists, and user interface options could either be handled on the client side or by your choice of (transferable) server operator. Storage and bandwidth costs for text and (reasonable) images are mostly negligible at scale, and advertising in clients, subscription fees, and/or offering ancillary services like domain registration could easily pay for everything.<p>ActivityPub sounds great to nerds who understand all of this stuff. But it’s too complicated for the average social media user to use, and too volatile for large-scale adoption to take off.<p>AT protocol is just as straightforward to understand as email (“link a website domain if you already have one or just register for a free one on the homepage, and you can easily change in the future”), doesn’t require any special knowledge to utilize, and actually separates someone’s identity and content from the person running the server. Mastodon is 100 tiny Twitters that are somewhat connected together; AT actually lets everyone have their own personal Twitter and connect them all together in a way that most people won’t even notice.</p>
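<p>To make the “it’s just a domain name” point concrete, here’s a sketch of how a handle resolves to wherever the account currently lives, based on my reading of the AT Protocol handle-resolution docs (treat the exact endpoints and field names as assumptions; it also assumes the third-party dnspython and requests packages):<p><pre><code>import dns.resolver  # pip install dnspython
import requests

handle = "example.com"  # a handle is just a domain you own or are assigned

# Step 1: handle -> DID, via a _atproto TXT record or a well-known URL.
try:
    record = dns.resolver.resolve(f"_atproto.{handle}", "TXT")[0].to_text().strip('"')
    did = record.removeprefix("did=")
except Exception:
    did = requests.get(f"https://{handle}/.well-known/atproto-did", timeout=10).text.strip()

# Step 2: DID -> DID document, which names the server (PDS) currently hosting the account.
doc = requests.get(f"https://plc.directory/{did}", timeout=10).json()
print(did, "is hosted at", doc["service"][0]["serviceEndpoint"])

# Moving to a new server only changes step 2's answer; the handle, the DID, and
# everyone's references to you stay exactly the same.
</code></pre>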
]]></description><pubDate>Wed, 30 Oct 2024 00:04:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=41990739</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=41990739</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41990739</guid></item><item><title><![CDATA[New comment by Nathan2055 in "Every phone should be able to run personal websites"]]></title><description><![CDATA[
<p>> Having to visit fifty personal sites would be a pain.<p>Which is why RSS (and Atom, but I’m just saying RSS because it’s less to type) was such a brilliant invention, and also why it was “killed.”<p>Everyone is talking about things like “ActivityPub” and “interoperability” and “personalized algorithms” nowadays, but RSS supported many of those features twenty years ago.<p>Yeah, it didn’t solve the account portability problem (you’d still need a separate account for each forum and blog you wanted to comment on; OpenID almost solved this issue but was a nightmare to work with, and Mozilla Persona (which is not the same thing as Mozilla Personas; wow, that company is bad at naming things) would have definitely solved this issue if that company had spent more than twenty minutes promoting it), but it did solve the actual fundamental issue that most people seem to be getting at with these modern systems: it offered a way to collate and display updates from a wide variety of mutually incompatible Internet sources all together in one place.<p>It’s an incredibly simple pitch, even to non-technical people: display your YouTube subscriptions, Twitter follows, blogs you’re interested in, and news sites that you read all in one place, in software that you control.<p>The problem is that operating a “platform” rather than a website got to be too profitable, and suddenly the goal shifted from serving useful content that makes you want to come back to a site, to serving enough content that you never want to leave to begin with. Many people believe that if Google had made Reader the center of their social strategy rather than killing it to pursue a short-sighted attempt to compete directly with Facebook, we could be looking at a much healthier Internet today (and Google probably could be earning a lot more money than they currently are, considering the abysmal adoption rate of modern Google services is often argued to be directly linked to fear of shutdown).[1]<p>Personal websites died for the mainstream because Facebook, Twitter, and Instagram offered a better interface for the average consumer. But they could be brought back by a system that made the good parts of those sites interoperable. Frankly, this is the kind of thing that I want to see Mozilla pursuing again, not...whatever the heck they’re doing now. (You go to their website and they’re selling Pocket, which is basically a bad centralized version of what I’m talking about; a rebadged VPN service; an email alias service; and Firefox. What happened to the people who tried to do things like Persona?)<p>[1]: <a href="https://www.theverge.com/23778253/google-reader-death-2013-rss-social" rel="nofollow noreferrer">https://www.theverge.com/23778253/google-reader-death-2013-r...</a></p>
]]></description><pubDate>Sat, 12 Aug 2023 20:37:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=37104031</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=37104031</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37104031</guid></item><item><title><![CDATA[New comment by Nathan2055 in "AWS to begin charging for public IPv4 addresses"]]></title><description><![CDATA[
<p>No, it’s so much worse than that. Look closely at <a href="https://aws.amazon.com/vpc/pricing/" rel="nofollow noreferrer">https://aws.amazon.com/vpc/pricing/</a> and note this line:<p>> You also incur standard AWS data transfer charges for all data transferred via the NAT gateway.<p>Yes, the $0.045/GB “data processing” charge is <i>in addition to</i> the usual $0.09/GB egress charge. You are paying an effective $0.135/GB for all of your egress, in addition to the $0.045/hr just to keep the NAT gateway running.<p>And yes, your ingress and even internal-to-AWS traffic is also billed at the $0.045/GB rate. (An example given on the aforementioned page is traffic from an EC2 instance to a same-region S3 bucket, which they note doesn’t generate an egress charge but <i>does</i> generate a NAT processing charge.) As far as I can tell, the only traffic which isn’t billed is traffic routed with internal VPC private IP addresses, which don’t hit the NAT gateway and thus aren’t counted.<p>There are highly paid AWS consultants who shave literal millions of dollars off of many companies’ AWS bills by just setting up a cheap EC2 box to handle their NAT instead of using the built-in solution. Doing that instantly wipes out the ingress charges and effectively halves the egress charges, and it’s probably a lower hourly cost than they’re already paying: an a1.large is $0.051/hr on-demand, but that immediately drops to just $0.032/hr with a 1-year no-upfront reserved plan. If you’re willing to pay upfront and/or sign a longer contract, you can get it as low as $0.019/hr.</p>
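<p>To put rough numbers on that, here’s the back-of-the-envelope version (rates as quoted above, a made-up 50 TB/month of egress, and the usual 730-hour month; check current AWS pricing before trusting any of it):<p><pre><code>HOURS = 730            # approximate hours in a month
egress_gb = 50 * 1000  # hypothetical monthly egress, in GB

# Managed NAT gateway: hourly fee, plus egress billed at $0.09 + $0.045 per GB.
nat_gateway = 0.045 * HOURS + (0.09 + 0.045) * egress_gb

# Self-managed NAT instance: reserved a1.large, plus egress at the plain $0.09 rate.
nat_instance = 0.032 * HOURS + 0.09 * egress_gb

print(f"Managed NAT gateway:  ${nat_gateway:,.2f}/month")   # ~$6,782.85
print(f"Self-managed NAT box: ${nat_instance:,.2f}/month")  # ~$4,523.36

# And this ignores ingress and intra-AWS traffic, which the managed gateway also
# bills at $0.045/GB but your own instance doesn't add a processing fee to.
</code></pre>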
]]></description><pubDate>Fri, 04 Aug 2023 06:21:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=36996110</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=36996110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36996110</guid></item><item><title><![CDATA[New comment by Nathan2055 in "Webrecorder: Capture interactive websites and replay them at a later time"]]></title><description><![CDATA[
<p>This is actually an issue with their docs that I encountered a few weeks ago when I was first experimenting with this tool. They apparently added a Spanish-language version of the docs, including an associated extra directory tree in the URL, but they failed to set up redirects or even update the existing links in the documentation.<p>So those two pages are actually located at <a href="https://archiveweb.page/en/troubleshooting/errors/" rel="nofollow noreferrer">https://archiveweb.page/en/troubleshooting/errors/</a> and <a href="https://archiveweb.page/en/contact/" rel="nofollow noreferrer">https://archiveweb.page/en/contact/</a> respectively.<p>It looks like their docs site is open source at <a href="https://github.com/webrecorder/archiveweb.page-site">https://github.com/webrecorder/archiveweb.page-site</a>, so I may try and send a pull request later today to go ahead and correct those links.</p>
]]></description><pubDate>Tue, 01 Aug 2023 21:48:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=36963594</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=36963594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36963594</guid></item><item><title><![CDATA[New comment by Nathan2055 in "Twitter has officially changed its logo to ‘X’"]]></title><description><![CDATA[
<p>There's a version of that with video that Neil reuploaded a while back as well: <a href="https://www.youtube.com/watch?v=fu3ETgAvQrw">https://www.youtube.com/watch?v=fu3ETgAvQrw</a><p>The corporate PowerPoint visuals just add so much.</p>
]]></description><pubDate>Mon, 24 Jul 2023 17:41:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=36851624</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=36851624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36851624</guid></item><item><title><![CDATA[New comment by Nathan2055 in "Microsoft faces antitrust scrutiny from the EU over Teams, Office 365"]]></title><description><![CDATA[
<p>The problem is that there simply wasn't a better option at the time.<p>Ogg Vorbis was a novelty at best, and it was the only reasonably widely adopted open source competitor to any of the items listed that was available at the time.<p>HTML5 was only just published when Chrome launched. So Flash was at that point the only option available to show a video in the browser (sure, downloading a RealPlayer file was always an option, but it was clunky, creators didn't like people being able to save stuff locally, and it wasn't open source either). Chrome in fact arguably <i>accelerated</i> the process of getting web video open sourced: Google bought On2 in 2010 to get the rights to VP8 (the only decent H.264 competitor available at that point) so they could immediately open source it. The plan was to remove H.264 from Chrome entirely once VP8/VP9 adoption ramped up[1], but that didn’t end up happening.<p>Flash was integrated into Chrome because people were going to use it anyway, and having Google distribute it at least let them both sandbox it and roll out automatic updates (a massive vector for malware at the time was ads pretending to be Flash updates, which worked because people were just that used to constant Flash security patches, most of which required a full reboot to apply; Chrome fixed both of those issues). Apple was the one that ultimately dealt the death blow to Flash, and it was really just because Adobe could not optimize it for phone CPUs no matter what they tried (even the few Android releases of Flash that we got were practically unusable). That also further accelerated the adoption of open source HTML5 technologies.<p>PDF has been an open standard since 2008. While I don't know if pressure from Google is what did it, that wouldn’t surprise me. Regardless, the Chrome PDF reader, PDFium, is open source[2], and Mozilla's equivalent project from 2011, PDF.js, is also open source.[3] Both of these projects replaced the distinctly closed-source Adobe Reader plugin that was formerly mandatory for viewing PDFs in the browser.<p>Chrome is directly responsible for eliminating a lot of proprietary software from mainstream use and replacing it with high-quality open source tools. While they've caused problems in other areas of browser development that are worthy of criticism, Chrome's track record when it comes to open sourcing their tech has been very good.<p>[1]: <a href="https://blog.chromium.org/2011/01/html-video-codec-support-in-chrome.html" rel="nofollow noreferrer">https://blog.chromium.org/2011/01/html-video-codec-support-i...</a><p>[2]: <a href="https://github.com/chromium/pdfium">https://github.com/chromium/pdfium</a><p>[3]: <a href="https://github.com/mozilla/pdf.js">https://github.com/mozilla/pdf.js</a></p>
]]></description><pubDate>Wed, 19 Jul 2023 11:46:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=36784831</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=36784831</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36784831</guid></item><item><title><![CDATA[New comment by Nathan2055 in "iAnnotate – Whatever happened to the web as an annotation system? (2013)"]]></title><description><![CDATA[
<p>> The best example I have is the massive infrastructure many people insist is required for a website.<p>This is my biggest pet peeve, and I think a lot of it (among supposedly tech-savvy people at least, less technical people are a different story) is caused by people looking at the cost of a random selection of AWS products, often quoting on-demand prices rather than the 40% discount you can get by buying a year of reserved capacity at once, multiplying by 12, and then freaking out.<p>Many cloud products are not good deals, and almost seem designed to make people think running a web service is inaccessible to them. Those products are usually given a healthy markup because you’re paying to avoid certain setup steps or for the ability to scale infinitely large in two clicks.<p>You can still just rent a few cheap servers (or even just one) and, if you set them up properly, you can run a decently sized website off of them no problem.</p>
]]></description><pubDate>Mon, 03 Jul 2023 01:57:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=36567801</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=36567801</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36567801</guid></item><item><title><![CDATA[New comment by Nathan2055 in "WebAuthn Is Great and It Sucks (2020)"]]></title><description><![CDATA[
<p>I stopped trusting Google Authenticator several years ago when I realized it had no syncing, backup, or even device transfer functionality whatsoever. A quick test made me realize that if anything happened to my device, I would just lose all of my 2FA keys with no way to recover them. I also then realized that if anything happened to the app (which apparently has happened a couple of times throughout its existence), I’d have the same problem.<p>I migrated to Authy because it at least has syncing and backup functionality. Sure, it’s less secure, and I should probably self-host somehow for the best security/stability assurances, but Authy seems to work pretty well for what I need it for.</p>
]]></description><pubDate>Mon, 03 Jul 2023 01:46:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=36567738</link><dc:creator>Nathan2055</dc:creator><comments>https://news.ycombinator.com/item?id=36567738</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36567738</guid></item></channel></rss>