<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: xoa</title><link>https://news.ycombinator.com/user?id=xoa</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 01:55:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=xoa" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by xoa in "A cryptography engineer's perspective on quantum computing timelines"]]></title><description><![CDATA[
<p>Yeah, sounds like it's time to take this very seriously. Sobering article to read, practical and to the point on risk posture. One brief paragraph though that I think deserves extra emphasis and that I don't see in the comments here yet:<p>><i>In symmetric encryption, we don’t need to do anything, thankfully</i><p>This is valuable because it offers a non-scalable but very important extra layer that a lot of us can implement in a few important places today, or could have for a while now. A lot of people and organizations here may have some critical systems where they can make a meatspace-manpower vs. security trade by using pre-shared keys and symmetric encryption instead of the more convenient and scalable normal PKI. For me personally the big one is WireGuard, where as of a few years ago I've been able to switch the vast majority of key site-to-site VPNs to using PSKs. This of course requires out-of-band key distribution, i.e., hoofing it over to every single site and manually sharing every single profile via direct link in person, vs. conveniently deployable profiles. But for certain administrative capabilities, where the magic circle in our case isn't very large, this has been doable, and it gives some leeway: any traffic being collected now or in the future will be worthless without actual direct hardware compromise.<p>That doesn't diminish the importance of PQE and industry action in the slightest, and it can't scale to everything, but you may have software you're using that's capable of adding a symmetric layer today without any other updates. Might be worth considering as part of the low-hanging immediate fruit for critical stuff. And depending on organization and threat posture, it may be worth imagining a worst-case world where symmetric crypto and OTPs are all we have that's reliable over long time periods, and how we'd deal with that. 
In principle, sneakernetting around gigabytes or even terabytes of entropy securely, with a hardware and software stack that automatically takes care of the rough edges, should be doable, but I don't know of any projects that have even started around that idea.<p>PQE is obviously the best outcome, we "just" switch, albeit with a lot of pain from increased compute and changed protocol assumptions, but we're necessarily going to be leaning on a lot of new math and systems that won't have had the tires kicked nearly as long as all the conventional ones have. I guess it's all feeling real now.</p>
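For anyone who hasn't used it: WireGuard already supports exactly this layer via the optional per-peer <i>PresharedKey</i>, which gets mixed into the handshake alongside the normal Curve25519 exchange. A minimal sketch of the peer config (all key values and filenames here are placeholders, not real material):

```ini
# On a trusted machine: wg genpsk > site-b.psk
# Then carry site-b.psk to both endpoints out of band.

[Peer]
# Placeholder; substitute your peer's real public key
PublicKey = <peer-public-key>
# Same 256-bit symmetric secret on both sides; as long as it never
# leaks, recorded handshakes can't be broken by attacking the
# Curve25519 exchange alone
PresharedKey = <contents of site-b.psk>
AllowedIPs = 10.0.1.0/24
```

Each peer pair should get its own PSK, which is what makes the distribution non-scalable but keeps compromise of one link from affecting the others.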
]]></description><pubDate>Mon, 06 Apr 2026 18:49:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47665157</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47665157</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47665157</guid></item><item><title><![CDATA[New comment by xoa in "Iran strikes leave Amazon availability zones "hard down" in Bahrain and Dubai"]]></title><description><![CDATA[
<p>><i>Instead of targeting data centers, it's far easier to target the electrical substation that powers the datacenter</i><p>That has a lot of collateral damage that may or may not be desirable though. Simultaneously, it might have quite a different long-term effect, right? If all the actual computers are unharmed they can be powered in other ways in an emergency, even if at much higher cost. Or powered back up later; the time lost might be militarily very significant, but they're not gone.<p>But how many people and companies actually have fully functional decentralized clones of all programs and data? How many people and companies have devices that are locked to remote hosts they expect to check in on at least once in a while, even if they're not "cloud dependent"? What if all that was literally <i>gone</i>: a few thousand missiles or drones, and data centers are all just completely erased, including tape backups, everything, and suddenly we're in a world where all that compute and data is poof. And without hurting anything else, no traditional war crimes either, no power or direct food or transport disruptions. Everyone is fine and healthy, except with this huge societal exocortex gone.</p>
]]></description><pubDate>Fri, 03 Apr 2026 22:41:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47633291</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47633291</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47633291</guid></item><item><title><![CDATA[New comment by xoa in "Iran strikes leave Amazon availability zones "hard down" in Bahrain and Dubai"]]></title><description><![CDATA[
<p>This may have been long discussed, but I feel like this war is the first time I've really thought hard about how big a target data centers would be in any sort of modern peer war, and how that's an entirely new thing since the last time it was really on the radar (the end of the Cold War), right? We've built trillions and trillions of dollars in infrastructure in the peacetime since, and it seems fairly concentrated. AWS is amongst the biggest there is, and according to mappers like [0] there are only around 240 operational total worldwide, with another 130-ish under construction. Like, in one respect that seems like a bunch, but vs. the kind of attacks we see done in a matter of days in modern wars it's a pretty small number for the whole planet, isn't it? In the first 24 hours of the war the US and Israel launched on Iran, they hit something like 1500-2000 targets. How hardened are the data centers? Are they in structures that handle some level of explosives? Do they have countermeasures like internal blast walls dividing things into cells, so a few hundred pounds of high explosive in one area doesn't do damage outside the cell? I mean, of course like all data centers they'll have considered extensive countermeasures to fire, environmental threats, grid issues and so on. But has "nation-state level attack via mass drones or bombardment" been part of the threat model over the last few decades? Hardening of telecoms was certainly considered for old Ma Bell and such back in the Cold War days, but that was a very different environment.<p>I guess it makes me think about what a soft underbelly this could be for a lot of modern society. There's always been consideration of threats to refineries and power stations and industrial production and all those big metal deals. But like, forget any sort of nuclear exchange or any sort of crazy super-Starfish-style big EMP: just purely a few thousand drones nailing data centers. Nobody even directly dies, just a lot of wrecked computers. 
What <i>would</i> be the cost of losing all the clouds and colo stuff? How long to replace, at what cost? How much depends on it?<p>----<p>0: <a href="https://www.datacentermap.com/c/amazon-aws/" rel="nofollow">https://www.datacentermap.com/c/amazon-aws/</a></p>
]]></description><pubDate>Fri, 03 Apr 2026 22:08:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47632971</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47632971</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47632971</guid></item><item><title><![CDATA[New comment by xoa in "Iran strikes leave Amazon availability zones "hard down" in Bahrain and Dubai"]]></title><description><![CDATA[
<p>><i>if you dont colo your own servers you don't own anything.</i><p>I'm confused, what does ownership have to do with this particular failure mode? The issue here is a (for many) unforeseen new tradeoff involved in centralization. Colocating at a central place has the exact same tradeoff in this case: bandwidth is vastly more available and cheaper towards the core, and there are significant amortization gains to be had with a lot of basic shared infra. But it's also one big structure holding a lot of computers and infra everyone is depending on; that's the whole point of it! We're all sharing network backbone and power filtering/redundancy and so on and so forth, vs. paying for that separately. That means a missile or drone or bomb hit to the building still hits all of us, whether we own the servers there or we're running workloads on someone else's servers.<p>The only responses are either central countermeasures or decentralization. Both have significant costs and complexity; that's why it wasn't just done proactively, right?</p>
]]></description><pubDate>Fri, 03 Apr 2026 21:55:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47632845</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47632845</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47632845</guid></item><item><title><![CDATA[New comment by xoa in "Apple releases iOS 18 security updates for iOS 26 holdouts"]]></title><description><![CDATA[
<p>This has been submitted already over the past few days, but it didn't get traction and I think it's important enough to be worth another spin. Apple has relented and made 18.7.7, the latest security patch in the iOS 18 series, available to all iOS 18 capable models, not merely the limited number that were dropped from iOS 26 support. So if you (like me) had just grimly determined to skip 26 and hope to hold out until 27 is hopefully better, there is now a better option. If you did one of the 26 nag-avoidance tricks and joined the iOS 18 beta channel, you will need to turn that off for the 18.7.7 update to show up, and you'll have to scroll past the big prominent iOS 26 update notice to find it below as a smaller "Also Available".<p>One somewhat interesting note: as far as I can find they didn't release a standalone signed IPSW file for most devices like they universally have in the past, only for a few old ones. Perhaps because if there is an actively signed IPSW they don't have infra in place to prevent people from downgrading back to iOS 18 from 26? So the update has to be done on the device itself, not via Finder or iMazing or the like.</p>
]]></description><pubDate>Fri, 03 Apr 2026 21:35:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47632616</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47632616</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47632616</guid></item><item><title><![CDATA[Apple releases iOS 18 security updates for iOS 26 holdouts]]></title><description><![CDATA[
<p>Article URL: <a href="https://sixcolors.com/post/2026/04/apple-releases-ios-18-security-updates-for-ios-26-holdouts/">https://sixcolors.com/post/2026/04/apple-releases-ios-18-security-updates-for-ios-26-holdouts/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47632615">https://news.ycombinator.com/item?id=47632615</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 03 Apr 2026 21:35:29 +0000</pubDate><link>https://sixcolors.com/post/2026/04/apple-releases-ios-18-security-updates-for-ios-26-holdouts/</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47632615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47632615</guid></item><item><title><![CDATA[New comment by xoa in "From Proxmox to FreeBSD and Sylve in our office lab"]]></title><description><![CDATA[
<p>Not sure you'll see this so late but just wanted to say I really appreciate the reply and learning about this project. I've been working to switch myself and various places away from perpetual ESXi licenses as it finally starts really getting old, and while I'm thankful Proxmox exists I've always loved FreeBSD (was kinda bummed when TrueNAS moved from it) and Proxmox can be irksome. Even at such an early stage Sylve already looks like it's clicking nicely. Excited to see next release and what comes in the future.</p>
]]></description><pubDate>Wed, 01 Apr 2026 14:00:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47600999</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47600999</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47600999</guid></item><item><title><![CDATA[New comment by xoa in "From Proxmox to FreeBSD and Sylve in our office lab"]]></title><description><![CDATA[
<p>Too late to edit, but just as a note for anyone else who gets confused by my post: I was not paying careful enough attention and missed/misread the "backups" bit in the parent post, completely my fault. As far as I can tell from reading through the (quite pleasant!) documentation [0], Sylve does not (at least for now) support any sort of network storage for direct use as the VM backing store, though as it is FreeBSD underneath it's presumably doable to get something going from the command line. I'd thought they'd somehow managed to set something up so you could directly use another ZFS system via SSH as the primary backing store with management, which would be pretty awesome. It still looks like a beautiful design, but since I'm pretty invested right now in separating out storage into its own hardware vs. where compute happens, it'd be hard to set up nodes as AIO here for the near future at least.<p>Still an awesome project to learn about and I hope it's successful.<p>----<p>0: <a href="https://sylve.io/docs/" rel="nofollow">https://sylve.io/docs/</a></p>
]]></description><pubDate>Tue, 31 Mar 2026 03:10:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47582323</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47582323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47582323</guid></item><item><title><![CDATA[New comment by xoa in "From Proxmox to FreeBSD and Sylve in our office lab"]]></title><description><![CDATA[
<p>Do you have any opinions on how this works vs. doing iSCSI to some other storage system using ZFS? That's how I've been handling Proxmox on the backend, and I have mixed feelings. The GUI leaves a very great deal to be desired in honestly curious ways; you have to touch the CLI a lot even for super basic networking or auth stuff, and of course neither side has the same insight into the data structures in question. Either you've got to do ZVOL instances, and thus manual effort or scripting, or you give Proxmox a single big blob and let it manage that with LVM, but that means the storage side can't give any granular help with snapshots and the like. It can still deal with data integrity and backups and storage redundancy and all that, but no further, and with some increased overhead. But on the other hand, I do feel like a really firm separation of concerns isn't without value. Having native support though is an interesting alternative I hadn't really considered.</p>
]]></description><pubDate>Mon, 30 Mar 2026 18:51:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47578200</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47578200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47578200</guid></item><item><title><![CDATA[New comment by xoa in "No one is happy with NASA's new idea for private space stations"]]></title><description><![CDATA[
<p>><i>I don't think Starship has gotten to orbit yet. It's gotten to altitude but not speed.</i><p>I'm honestly kinda curious how you came to this thinking after watching the launches, like the last Flight 11 [0]? They have the velocity listed at all times right there in the bottom corner. It peaks at over 7.4 km/s; it seems pretty clear they were stopping just barely short of orbital velocity and maintaining a ballistic path on purpose, exactly as they said they would in the flight plan they filed ahead of time with the FAA for deorbit safety purposes, not because they couldn't have technically squeezed out another few hundred m/s and a different trajectory if that was the goal. It's a hardware-rich program, and their testing sequence has been reasonably careful about controlling the space of out-of-bounds scenarios (on the scale of rocketry). What has led you to believe that they can do 7.4+ km/s with Raptor 2 and Block 2 but v3 won't be able to do ~7.8 (or that they couldn't have done it with v2 for that matter)?<p>----<p>0: <a href="https://www.youtube.com/watch?v=9tvK7flZ72c" rel="nofollow">https://www.youtube.com/watch?v=9tvK7flZ72c</a></p>
]]></description><pubDate>Sat, 28 Mar 2026 16:27:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47556055</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47556055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47556055</guid></item><item><title><![CDATA[New comment by xoa in "No one is happy with NASA's new idea for private space stations"]]></title><description><![CDATA[
<p>It's pretty wild to me that in both the article (written by Eric Berger, who really knows his stuff and did two fantastic books on the history of SpaceX and the rise of new space) and the first 31 comments made here on HN as I write this, a Find for one word has zero results: "starship". That's the overwhelming behemoth elephant in the room. For the purposes of launching/building a space station, it doesn't matter if Starship can't reenter, or refueling doesn't work, or any of the other hard problems. It just needs to get to orbit, which it has proven it can. And that means that any space station developed using anything before it will be rapidly rendered completely obsolete from a commercial perspective. Starship will just offer so much more volume and mass for the same cost or less. NASA may want very hard to hit their 2030 deadline, but the technology may simply not line up to do it on the budget they want and with the partners they'd prefer, the same as how the retirement of the Space Shuttle didn't line up with American private launch (though of course in the end private launch has made it and been a big win). No company that actually wants to make money is going to risk billions on something that somebody else can lap them on by an order of magnitude in a few years or less.<p>I suspect that of "continuous presence in low orbit", "longer term new capabilities", "in budget", and "commercially successful", NASA is going to be forced to pick one or two, and that's what they're resisting. Rushing things along almost always costs a lot of money and features. If you want to hit a budget and features then you have to be willing to wait for the various bits to line up, and preferably spend some time experimenting and exploring new capabilities and strategies before big hardware commitments. There are a lot of moving parts here to think through. 
This would all be true even if that were NASA's only concern, vs. going to the Moon and all the normal and important science and so on they're getting pushed on.</p>
]]></description><pubDate>Sat, 28 Mar 2026 15:00:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47555196</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47555196</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47555196</guid></item><item><title><![CDATA[New comment by xoa in "Supreme Court Sides with Cox in Copyright Fight over Pirated Music"]]></title><description><![CDATA[
<p>><i>I think I like the idea, but I can't help wondering if it would have unforeseen consequences.</i><p>As I said in a sibling comment, quickie comments on HN should be taken more as mental stimulation and kickoff points for further discussion, as opposed to a "final bill that has been revised in committee and is going to the floor for a full vote". The details of implementation are certainly critical, and not trivial either! I'm fully in support of thinking through various use cases. But part of why I'm interested in alternate approaches is that they might give us finer-grained tools.<p>><i>Could this approach undermine the protections afforded by open-source licenses? (IANAL.)</i><p>I have actually considered that as well but didn't add it into a quickie comment. If we take the second path of approaches I listed there, then thinking about it, all open source software would fall under a special, even more permissive class of tier 3, in that it already has "fair, reasonable and non-discriminatory" licensing for all, right? Except that it's also free. The motivation here is the "advancement of the useful arts & sciences" and the public good, so it could be made explicit that "if you're releasing under an open source license, and thus giving up your standard first, second, and part of your third period of IP rights and monopoly, you're excluded from needing to pay a license fee, because you've already enabled the public to make derivative works for free for decades when they otherwise couldn't have anyway."<p>All <i>that</i> said, I'll also ask, fwiw, whether it'd even be <i>that</i> big a deal given the pace of development? I do think it'd be both ideal and justified if OSS had a longer period for free; that's still a square deal to the public IMO. 
But like, even if an OSS work did go out of protection after 10 years (and keep in mind that a motivated community that could raise even a few thousand dollars would be able to just pay for an extra decade no problem; the cost doesn't really ramp up for a while [which might itself be considered a flaw?]), how much would it really matter that 2016-era OSS (with no changes since, remember, it's a constantly rolling window) could now be used in proprietary works, weighed against all 10-year-old proprietary software getting pushed into the public domain far faster? That's worth some contemplation. Maybe requiring that source/assets be provided to the Library of Congress or something, and released at the same time the work loses copyright, would be a good balance; having all that available down the road would be a huge win vs. what we've seen up until now.<p>Anyway, all food for thought is all.</p>
]]></description><pubDate>Wed, 25 Mar 2026 20:38:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47522887</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47522887</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47522887</guid></item><item><title><![CDATA[New comment by xoa in "Supreme Court Sides with Cox in Copyright Fight over Pirated Music"]]></title><description><![CDATA[
<p>I want to be super clear that I'm not proposing some finalized plan or numbers here, it'd need some real work spent hashing it all out. Mainly though I hope people will consider more the huge space of untapped approaches to balancing various benefits and costs towards a better societal outcome. And that maybe that helps a little in getting us out of some of the present seemingly intractable boxes we so often seem stuck in?<p>Your tax idea could certainly be another useful tool. My main immediate thought/caution would be:<p>><i>IE: if you make profit off of it, then it cranks up. There's plenty of music artists who's song blow up a decade or more later.</i><p>As we have endless examples of, "profit" and even "revenue" can be subject to a lot of manipulation/fudging given the right incentives. I also think that part of the cost I describe is objective: whether it takes off right away or takes off after a decade, as long as it's under full copyright it's imposing a cost on society the whole time. Also other stuff like risk of it getting lost/destroyed. So I do think there needs to be some counter to that in the system; sitting on something, even if it makes no money, shouldn't be free.<p>But the graduated approach might help with this too, and again they could be mixed and matched. It could be 100*1.3^n to keep full copyright, but only 50*1.2^n to maintain "licenseright", 25*1.15^n for "FRANDright", and free for the remaining period of "creditright". Or whatever, play around with numbers and consider different outcomes. But it feels like there's room for improvement over the present state of affairs.</p>
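The tiered schedule is easy to play with directly. A tiny sketch (the bases and growth rates are just this comment's illustrative numbers, not a proposal):

```python
def tier_fee(year: int, base: float, rate: float) -> float:
    """Yearly renewal fee `year` years after the free initial term."""
    return base * rate ** year

# Illustrative tiers: weaker bundles of rights get cheaper schedules
tiers = {
    "full copyright": (100, 1.30),
    "licenseright":   (50,  1.20),
    "FRANDright":     (25,  1.15),
}

for name, (base, rate) in tiers.items():
    print(f"{name:14s} year-25 renewal: ${tier_fee(25, base, rate):>10,.0f}")
```

Even at the same horizon the spread is large: after 25 years the full-copyright renewal is roughly 85x the FRAND-tier one, which is exactly the intended pressure toward shedding rights you aren't actively using.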
]]></description><pubDate>Wed, 25 Mar 2026 16:23:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47519509</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47519509</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47519509</guid></item><item><title><![CDATA[New comment by xoa in "Supreme Court Sides with Cox in Copyright Fight over Pirated Music"]]></title><description><![CDATA[
<p>I'm not sure I agree that any single fixed term makes sense. Rather, I think it'd be better if the exponential cost to society (in terms of works that don't happen, and works that don't happen based on those works that didn't happen, and so on, compounding) was just part of the yearly renewal price. So maybe everyone gets 7 years flat to start with, then it costs $100*1.3^(year). So after another 25 years it'd be around $70.5k renewal. At 50 years it'd be $50 million. At 75 years it'd be $35 billion. The fixed amount and exponential can of course be shifted around here, but the idea would be to encourage creators to use works hard, and if they couldn't make it work, not sit on them but release them. Once in a while something would be such a big hit it'd be worth keeping a long time, and that's ok, but society gets its due too. And most works would be allowed to lapse as they stopped being worth it.<p>Another alternative/additional approach would be to split up the nature of copyright, vs. an all-or-nothing total monopoly. Let there be 7-10 years of total copyright, then another 7-14 years where no exclusivity of where it's sold or DRM is allowed, then 7/14/21 years where royalties can still be had but licensing is mandatory at FRAND rates, then finally some period of "creditright" where the creator has no control or licensing, but if they wish can still require any derivative works to give them a spot in the credits.<p>I think there is a lot of unexplored territory for IP, and wish the conversations were less binary.</p>
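Those figures check out; a quick sanity check of the schedule as described (the constants are this comment's own, purely illustrative):

```python
def renewal_fee(years_after_free_term: int) -> float:
    # $100 base growing 30% per year, per the scheme sketched above
    return 100 * 1.3 ** years_after_free_term

for n in (25, 50, 75):
    print(f"year {n}: ${renewal_fee(n):,.0f}")
```

This lands at roughly $70.6k, $49.8M, and $35.1B, matching the ballpark numbers above.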
]]></description><pubDate>Wed, 25 Mar 2026 15:56:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47519121</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47519121</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47519121</guid></item><item><title><![CDATA[New comment by xoa in "Western carmakers' retreat from electric risks dooming them to irrelevance"]]></title><description><![CDATA[
<p>><i>Most Europeans don't live in single family homes for this to be a practical advantage.</i><p>Uh, where are you getting that from? From what I can tell from sources like [0], "most" Europeans overall (though with very significant country-to-country variance) do live in detached or semi-detached housing. Most also own it. Further, even for those in flats, the higher voltage the EU's grid runs at still means easier higher-kilowatt charging at parking-lot or garage chargers, so it's still an advantage anyway?<p>----<p>0: <a href="https://ec.europa.eu/eurostat/web/products-eurostat-news/-/ddn-20210521-1" rel="nofollow">https://ec.europa.eu/eurostat/web/products-eurostat-news/-/d...</a></p>
]]></description><pubDate>Sun, 22 Mar 2026 13:59:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47477634</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47477634</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47477634</guid></item><item><title><![CDATA[New comment by xoa in "Hundreds of Millions of iPhones Can Be Hacked With a New Tool Found in the Wild"]]></title><description><![CDATA[
<p>><i>But I’m arguing the other stuff requires QC attention Apple may not want to provide to a legacy line.</i><p>Oh come on. This is HN; we know how development works, how modular an OS is, how the patch process works, and what that entails for testing on an incredibly restricted and limited hardware base. We know they have no issue doing retroactive updates for quite a while on the same code base for Macs, which have enormously more hardware variance than iDevices. These are extremely high-profit-margin premium products. You really don't need to carry water for the multi-trillion dollar megacorp with absolute wide-eyed credulity.<p>And on other systems, even if it wasn't supported, it'd be perfectly possible for hardware owners to patch various components or implement workarounds. It's only on iOS that Apple is utilizing technical controls to stop that dead.<p>><i>That isn’t not allowing something that can be done.</i><p>Yes, it is. They are 100% using their technical controls, built into the underlying hardware and then on up, to disallow something that can be done. They could trivially allow hardware owners, even if only as a buy-time option, the ability to add their own certificates to the iOS root of trust, and in turn install and modify any software they wished on their own, to the extent of their abilities. Apple wouldn't have to do anything except not exert maximal artificial control.<p>They don't do that. They have the power. It's their responsibility in turn. It's pretty irritating that anyone who has been around the block as much as you have would try to whitewash that. FFS.</p>
]]></description><pubDate>Wed, 18 Mar 2026 23:16:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47432590</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47432590</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47432590</guid></item><item><title><![CDATA[New comment by xoa in "Hundreds of Millions of iPhones Can Be Hacked With a New Tool Found in the Wild"]]></title><description><![CDATA[
<p>><i>Is it “you are not allowed,” or Cupertino isn’t going to bother developing and testing?</i><p>It is very firmly "you are not allowed". In fact, you're not even allowed to switch back to iOS 18 at all. Only actively signed iOS IPSWs can be installed (barring historical cases where someone had saved signing tickets). You can see the current status at sites like <a href="https://ipsw.me" rel="nofollow">https://ipsw.me</a> and if you're on any iOS 26 supported iDevice, currently only 26.3.1 is signed. The last iOS 18 version was 18.6.2 from August of last year. If you go back to the iPhone XS/XR, you'll see they're still updating iOS 18, with 18.7.6 released two weeks ago (March 4), but for newer devices they've chosen to force anyone who wants security updates to move to iOS 26 instead.</p>
]]></description><pubDate>Wed, 18 Mar 2026 18:10:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47429206</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47429206</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47429206</guid></item><item><title><![CDATA[New comment by xoa in "Hundreds of Millions of iPhones Can Be Hacked With a New Tool Found in the Wild"]]></title><description><![CDATA[
<p>><i>If it's really as bad as all that, they'll patch existing older releases.</i><p>They have patched existing releases of iOS 18... but then they artificially restricted those patches to only a couple of phone models that don't support iOS 26. So if you're on a vaguely modern iDevice and are still on 18 because you don't want the new UI and other fuckups, you are not allowed to install the patched 18. It'd be one thing if you had a phone that simply never supported iOS 18 at all, or if Apple wasn't patching iOS 18 for anyone, but that they've gone to the effort to fix it and then also used it as another lever to force upgrades is really sucky.</p>
]]></description><pubDate>Wed, 18 Mar 2026 15:46:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47427251</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47427251</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47427251</guid></item><item><title><![CDATA[New comment by xoa in "Animation 10k Starlink Satellites"]]></title><description><![CDATA[
<p>><i>Beamforming is an old technology though. It's not hard to do</i><p>Well, so is satellite launch, right? Cost, efficiency, and scaling <i>are</i> hard to do. That's SpaceX's entire raison d'être. Building a general-public-usable, all-weather, maintenance-free, well-designed phased-array terminal they can sell for $250 and pump out by the millions is as worthy an achievement as nearly anything else in the Starlink project. And I'd love it if that were more available terrestrially too: for PtP/PtMP links, alignment is a certain amount of work at long distances even when everything is motionless. And long-range high-bandwidth gear isn't cheap. It'd be pretty cool if you could have units for $250 that you just needed to aim vaguely in the right direction and then it all just worked.</p>
]]></description><pubDate>Wed, 18 Mar 2026 15:10:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47426748</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47426748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47426748</guid></item><item><title><![CDATA[New comment by xoa in "Animation 10k Starlink Satellites"]]></title><description><![CDATA[
<p>The answer to a lot of the pollution problems is probably, and perhaps counterintuitively, "even more mass even cheaper, combined with regulations that are enabled by that". The key identified current concern is very specific to aluminum reentry, not just generic "whatever mass". Around 15,000 tons of space dust hits the Earth each year without problem, but its chemical composition is quite different from what present typical satellites produce on reentry.<p>But in turn the composition of present satellites and the nature of their use/lifespan/safety systems has itself been driven heavily by economics. We don't make satellites out of steel or other safer materials not because they don't work, but because of the cost the extra weight imposes. We haven't put satellites in VLEO not because being lower is bad for communications or imaging (it's the opposite: lower is better) but because it'd need more satellites, more fuel per sat, and a higher launch cadence, all increasing cost beyond the historic ROI. But Starship or other future fully reusable methalox designs will give us vastly more mass budget and cadence for the same cost. Some of that could result in more trouble with existing designs made for a low-cadence/high-$/kg environment, because some externalities that were previously acceptable due to lack of scale stop being so at scale. But the same increased budget also means increased budget to ameliorate that. We can trade some of the gains for materials that burn up harmlessly in the atmosphere, designs for lowering apparent magnitude as seen from the ground, better self-destruct and end-of-life systems, more fail-safety, more redundancy in general, etc. And if that requires more regular replacement, that too is made easier by launch costs an order of magnitude or more lower.<p>Some of this may happen naturally just due to self-interest, but other parts like pollution may require thoughtful regulation. But such regulation will be a much easier lift when compliance is affordable, so it's worth trying to maintain an appropriately thoughtful mindset on the benefits vs tradeoffs and how to keep the former while reducing the latter.</p>
]]></description><pubDate>Wed, 18 Mar 2026 14:28:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47426245</link><dc:creator>xoa</dc:creator><comments>https://news.ycombinator.com/item?id=47426245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47426245</guid></item></channel></rss>