<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: SirensOfTitan</title><link>https://news.ycombinator.com/user?id=SirensOfTitan</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 22 Apr 2026 22:06:23 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=SirensOfTitan" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by SirensOfTitan in "Drop, formerly Massdrop, ends most collaborations and rebrands under Corsair"]]></title><description><![CDATA[
<p>According to Crunchbase, Massdrop raised ~92 MM through its Series C, plus another ~40 MM in debt over the last couple of years.<p>There is no way Massdrop was ever going to justify that kind of capital investment.  VC is such an inefficient and frankly delusional form of capital deployment at this point -- they have no idea what they're doing.  It ironically looks a lot like central planning, with the VCs investing with the intention of picking the winners and losers themselves.<p>This company should've bootstrapped and remained small and manageable.  Not every business, not even most businesses, should raise money with the intention of becoming a "unicorn"; that model is nonsensical and has a lot of deleterious effects on our society, most obviously enshittification when the outcome doesn't justify the investment.</p>
]]></description><pubDate>Mon, 06 Apr 2026 13:24:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47660601</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47660601</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47660601</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "Ask HN: I burnt out from software development. What now?"]]></title><description><![CDATA[
<p>I really don't think AI is undermining your competitiveness if you have a unique perspective that other people value.  You don't have to build a product that drives a company to IPO; you can just build software that other people value and make yourself a sustainable income.<p>Great products are built on top of unique perspectives gained through a lifetime of individuation and a lot of time thinking, tinkering, and trying.  Unfortunately, even before LLMs, tech has been languishing in a lack of vision surrounding the "iterative mindset," where only velocity matters.  But as they say: the electric light did not come from the continuous improvement of candles.  I'd also note: there are many companies that failed because they went to market too early (I started one where this was the case!).<p>If all white collar work is dead, then we're all truly fucked, and if that's the case, why not invest your time in what you find beautiful?  If that is software, what kind of software would align with your values?<p>(FWIW, I ping-pong between this perspective and yours too.)<p>I can also tell you: I've seen a couple of vibe-coded codebases, and they are scary and unmaintainable.  Your decades of experience are still valuable; don't let the non-technical idea people talk you out of your value.</p>
]]></description><pubDate>Tue, 31 Mar 2026 20:21:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47592940</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47592940</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47592940</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "How the AI Bubble Bursts"]]></title><description><![CDATA[
<p>I think calling it "inflated" plays into a narrative that labor in tech was broadly overvalued.<p>Real salaries across industries in the US have remained flat since the 1970s.  Calling the one sector that can still provide access to a middle-class lifestyle "inflated" is to play into a narrative capital is eager to tell, even if OP didn't intend that.</p>
]]></description><pubDate>Mon, 30 Mar 2026 14:01:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47574472</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47574472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47574472</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "How the AI Bubble Bursts"]]></title><description><![CDATA[
<p>This is a classic case of HN mistaking the map for the territory.  R&D and capex absolutely figure into de facto profitability and sustainability for AI labs, despite their separate treatment in accounting.<p>> well most of us here on HN have benefited from decades of overinflated engineering salaries being paid by often companies that were not profitable and not only unprofitable<p>This is a really concerning perspective: people were paid what they were worth.  Software is, or was, one of the few remaining arenas in which a person can consistently find a middle or upper-middle-class lifestyle.<p>I will also note: a startup raising an 8 MM Series A and eventually fizzling out is not the same as the hundreds of billions invested in these AI companies without a path to profitability.  It is utterly absurd to pretend these are the same thing: any company ingesting that much cash needs to justify its capacity to survive.</p>
]]></description><pubDate>Mon, 30 Mar 2026 13:14:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47573888</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47573888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47573888</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "The Cognitive Dark Forest"]]></title><description><![CDATA[
<p>I don't quite remember the details, but there's a fascinating section in Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind" where he talks about how metaphors condense into more complex forms, and as they do, they unlock new realities previously impossible to fathom.  The classic example here is the simultaneous discovery of calculus by Newton and Leibniz: the larger context defines what is possible.<p>I was recently running myself through a thought experiment similar to the author's: if LLMs truly do make the generation of ideas cheap (I'm still a skeptic here, even within software), then as soon as products enter public awareness they become trivial to reproduce.  For example, in a prompt like "Uber but for babysitters," "Uber for" is doing a tremendous amount of work.  Before Uber, its model, UX, and modes of engagement would've taken pages and pages to describe; after, it becomes comparatively much cheaper.<p>... in this way, LLMs <i>could</i> cheapen ideas and creativity so much that they make other factors (which are already the weighting functions) more important, and I think the imbalance here is deeply troubling.  Those factors are namely network effects: existing customers, brand recognition, existing relationships, capital.  And when the balance shifts further toward network effects, the whole system becomes more brittle, because it becomes even harder to boot out incumbents.<p>There are a whole slew of issues with LLMs, particularly around their intended devaluation of labor, and we aren't talking enough about them.</p>
]]></description><pubDate>Sun, 29 Mar 2026 22:27:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47568067</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47568067</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47568067</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "Why are executives enamored with AI, but ICs aren't?"]]></title><description><![CDATA[
<p>I think executives are excited about AI because it confirms their worldview: that the work is a commodity and the real value lies in orchestration and strategy.<p>It doesn't help that the West has a clear bias wherein moving "up" means moving away from the work.  Many executives don't know what good looks like at the detail level, so they can't evaluate AI output quality.</p>
]]></description><pubDate>Fri, 27 Mar 2026 23:37:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47549804</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47549804</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47549804</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "Slovenia becomes first EU country to introduce fuel rationing"]]></title><description><![CDATA[
<p>The problem is that Iran can defend the strait against the world's most advanced military with drones built from commercial hardware for 30-50K per drone.  And that doesn't even take into account escalation: if the US escalates, Iran will likely start targeting critical infrastructure in the region, making the crisis worse.<p>The US and Israel are rapidly running out of munitions, while Iran is being resupplied by Russia (<a href="https://www.ft.com/content/d5d7291b-8a53-42cd-b10a-4e02fbcf9047" rel="nofollow">https://www.ft.com/content/d5d7291b-8a53-42cd-b10a-4e02fbcf9...</a>), which is much better tooled for munitions production than NATO.  The US also relies on rare earths and Chinese supply chains for a lot of its munitions (which it is running low on).<p>IMO the best option is for Trump to TACO, take the major L, and concede to Iran's demands, but this would partially mean an alignment shift away from Israel, which still feels unthinkable given US political realities.</p>
]]></description><pubDate>Fri, 27 Mar 2026 22:47:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47549398</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47549398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47549398</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "Slovenia becomes first EU country to introduce fuel rationing"]]></title><description><![CDATA[
<p>Oh, I'm sorry, that was actually my mistake; I should have been much more specific, and I will update the comment if I still can.  My intention was to emphasize that Taiwan may have to start limiting electricity to its industrial sector based on its current runway.  Per the article you linked:<p>> Yeh Tsung-kuang, a professor in the Department of Engineering and System Science at National Tsing Hua University, said Taiwan's maximum LNG inventory is only 11 days but that does not mean the island will run out of fuel or face outages within that time period<p>EDIT: updated the comment to be more specific.</p>
]]></description><pubDate>Fri, 27 Mar 2026 22:35:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47549285</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47549285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47549285</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "Slovenia becomes first EU country to introduce fuel rationing"]]></title><description><![CDATA[
<p>Scanning some of the early comments here: acting as if the oil and LNG disruptions are just a question of renewable investment is naive.<p>This is the worst energy crisis in modern history, and little of the western world has really started feeling the effects yet:<p><a href="https://thedispatch.com/newsletter/dispatch-energy/iran-war-energy-crisis-hormuz/" rel="nofollow">https://thedispatch.com/newsletter/dispatch-energy/iran-war-...</a><p>Petroleum is pretty much upstream of everything: plastics, fertilizers, pharmaceuticals, cooking oils, lubricants, cosmetics.  Dow Chemical just doubled the price of polyethylene as of April 1st.  Taiwan relies on LNG for 40% of its energy production and has 11 days of LNG storage -- meaning it may have to consider limiting industrial electricity use if things persist.  To clarify, based on a reply: this doesn't mean they'll run out in that time, but that they have a limited runway that will have increasingly deleterious effects as time goes on:<p>> Yeh Tsung-kuang, a professor in the Department of Engineering and System Science at National Tsing Hua University, said Taiwan's maximum LNG inventory is only 11 days but that does not mean the island will run out of fuel or face outages within that time period.<p>Even if the Strait saw normal traffic today (and Iran is incentivized and well-positioned to keep it closed for a while), it would take quite a while to recover the lost supply.  Iran continues to employ a tit-for-tat strategy, and Israel just targeted the country's steel industry -- and I'm not even taking into account more deliberate damage to energy infrastructure in the Middle East.<p>This is a scary crisis in which the most movable actor (the US) is not going to accept Iran's terms.  It could collapse the global economy, and that crucially includes the AI industry this forum loves to focus on almost exclusively.  The US and most of the west also have essentially no fiscal room compared to the comparatively milder 1970s crises.  This could easily spiral out of control and cause a level of suffering across the world (especially the global south) that most of us on this forum have never lived through.</p>
]]></description><pubDate>Fri, 27 Mar 2026 21:56:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47548854</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47548854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47548854</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "If you don't opt out by Apr 24 GitHub will train on your private repos"]]></title><description><![CDATA[
<p>If they were being honest, they would ask explicitly for permission instead of announcing an opt-out.  Now you might ask: who would explicitly give Microsoft permission to train on their private works?  No one -- and that's the point: this is a form of theft.</p>
]]></description><pubDate>Fri, 27 Mar 2026 21:40:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47548653</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47548653</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47548653</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "If you don't opt out by Apr 24 GitHub will train on your private repos"]]></title><description><![CDATA[
<p>Right, but it shouldn't be opt-out in the first place.  It's a dishonest pattern that relies on people not noticing.  Honest use of data is a "Caesar's wife must be above suspicion" moment for me -- if this is how you act when engaging with customers explicitly, I don't trust you to resist the temptation to tap into my data privately.  AI companies have already trained their models illegally on the intellectual property of all of humanity, with little consent along the way.<p>Honestly, if you work at GitHub, maybe you should focus on your uptime -- it's awful.</p>
]]></description><pubDate>Fri, 27 Mar 2026 21:38:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47548623</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47548623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47548623</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "Goodbye to Sora"]]></title><description><![CDATA[
<p>I honestly think it's a bad term.  I still chuckle at Tyler Cowen's post from last April calling o3 AGI:<p><a href="https://marginalrevolution.com/marginalrevolution/2025/04/o3-and-agi-is-april-16th-agi-day.html" rel="nofollow">https://marginalrevolution.com/marginalrevolution/2025/04/o3...</a><p>Commercial labs rely on weak terms like AGI or strong AI or whatever else because it allows them to weaken the definition as a means of achieving the goal.  Coming to clear, unambiguous terms is probably especially important when it comes to LLMs, as they're very susceptible to projection, allowing people like Cowen to be fooled by something that is more akin to looking back at ourselves in a mirror.<p>I'm currently reading "The Master and His Emissary," and one of my early takeaways is how narrow our definition of intelligence is, and how real intelligence is an attunement to an environment that combines many ways of sensing into a coherent whole.  LLMs are a narrow form of intelligence, and I think we will need at least a couple more breakthroughs to get to what I would consider human-level intelligence, let alone superhuman intelligence.<p>Whatever the timeline is, I hope we have enough time as a species to define a future where intelligence props everyone up instead of just making the rich richer at the expense of everyone else.  In that sense, it is better, in my opinion, that the process is slower.  There is no rush.</p>
]]></description><pubDate>Wed, 25 Mar 2026 00:06:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47511399</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47511399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47511399</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "Goodbye to Sora"]]></title><description><![CDATA[
<p>AGI is a marketing term used to encourage continued investment in an industry that is nowhere close to returns commensurate with its investment.  Even so, this is a false dichotomy: scaling alone is clearly not a path to superintelligence.  OpenAI developed Sora largely because the revenue it needs to produce any return on investment is massive, and the path to it is not clear whatsoever.  In fact, I don't believe any of the frontier labs think AGI, by any conventional definition, is within reach within their likely runways.</p>
]]></description><pubDate>Tue, 24 Mar 2026 23:29:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47511067</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47511067</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47511067</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "Skills are quietly becoming the unit of agent knowledge"]]></title><description><![CDATA[
<p>Most of this discourse feels like some kind of religious ritual built on a foundation of authority bias.  Where is the evidence that skills improve performance over any other methodology, beyond their nascent popularity?<p>I do agree with Jacques Ellul in The Technological Society that technique precedes science, and that's certainly the case with LLMs; however, this whole industry waves off rigorous validation in favor of personal anecdotes ("it feels more productive to me!", "the study predates Opus 4.5").</p>
]]></description><pubDate>Mon, 23 Mar 2026 11:04:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47487753</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47487753</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47487753</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "Ask HN: Is vibe coding a new mandatory job requirement?"]]></title><description><![CDATA[
<p>To be perfectly honest, I wouldn't work at a firm that requires vibe-coding.  The hype train in tech is honestly nauseating, and the so-called productivity boost shows up neither in the data nor in qualitative signals (like a flood of shovelware on GitHub, or firms visibly moving faster).<p>Vibe coding seems like a religion more than anything to me: engineers adopt prompting techniques but rarely actually test those techniques against other ones.  There is especially no evidence that vibe coding is a skill: the people most effective with these tools are the people who would've been most effective without them (i.e., it relies on experience and domain knowledge).<p>If I were actively hiring and wanted to capture skills that will likely translate well to an AI-augmented work strategy, I would focus on code review in the interview.  I've only seen a handful of truly great, rigorous code reviewers in my career, but AI makes code review supremely important.  Unfortunately, most of the real-world "agentic coding" I've seen is light on review (lots of LGTMs!).<p>I think firms are eventually going to collapse under their own weight unless models keep improving at the velocity at which slop is merged into main.<p>I will also note that I do use these tools, mainly as a search engine, and I do so in bursts (I will use the tools for a month and then completely abstain for 1-2 months).  I am worried about undermining my own cognitive fitness through overreliance on these tools.</p>
]]></description><pubDate>Sun, 22 Mar 2026 13:56:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47477610</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47477610</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47477610</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster"]]></title><description><![CDATA[
<p>I think the OP's comment is entirely fair.  Karpathy and others come across to me as people feeding a hose back into itself: they work with LLMs to produce output that is about LLMs.<p>I might reframe the comment as: are you actually using LLMs for sustained, difficult work in a domain that has nothing to do with LLMs?<p>It feels like a lot of LLM-oriented work is fake.  It is compounding "stuff," both inputs and outputs, and the increased amount of stuff makes it feel like we're living on a higher plane of information abundance, when in reality we're increasing entropy.<p>Tech has always had an information bias, and LLMs are the perfect vehicle for creating a lot of superfluous information.</p>
]]></description><pubDate>Fri, 20 Mar 2026 00:20:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47448502</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47448502</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47448502</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "What CI looks like at a 100-person team (PostHog)"]]></title><description><![CDATA[
<p>Your response reads much more like a strawman than my original comment did.<p>I'd challenge you to identify where in my post I said I wouldn't use software that employs automation.<p>It is pretty clear I am not talking about running CI for automated and predictable signals, or about cron jobs.  I am talking about using AI to write code and also to fix tests.<p>It is exceedingly clear in practice that the volume of code produced by LLMs is too much for the humans using these tools to read and understand.  We are collectively throwing decades of best practices out the window in service of "velocity."  Even the FAANG shops I know of that previously had good engineering cultures seem to be endorsing the cult of AI-generated everything with rubber-stamp approval.</p>
]]></description><pubDate>Tue, 17 Mar 2026 13:16:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47412227</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47412227</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47412227</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "What CI looks like at a 100-person team (PostHog)"]]></title><description><![CDATA[
<p>I don't think this is anywhere near the quality bar for posts here.  This is obviously AI slop -- why should I invest more time reading your slop than you took to write it?<p>Even so, at what point do we consider the LLM-ification of all of tech a hazard?  I've seen Claude lazily "fix" a test by loosening its invariants.  AI writes your code, AI writes your tests.  Where is your human judgment?<p>Someone is going to lose money or get hurt by this level of automation.  If the humans on your team cannot keep track of the code being committed, then I would prefer not to use your product.</p>
]]></description><pubDate>Tue, 17 Mar 2026 12:27:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47411719</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47411719</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47411719</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "John Carmack about open source and anti-AI activists"]]></title><description><![CDATA[
<p>I totally agree.  I've written about this topic a lot on this site, probably most recently here:<p><a href="https://news.ycombinator.com/item?id=47115597">https://news.ycombinator.com/item?id=47115597</a><p>The US is built on top of a high-value service economy.  And what we're doing is allowing a couple of companies to come in, devalue US service labor, and capture a small fraction of the prior value for themselves, on top of models trained on copyrighted material without permission.  Of course, to your point: things can get a lot worse than that.  I honestly don't think a lot of executives even know how much they're shooting themselves in the foot, because they seem unable to think beyond the first order.<p>I also see a lot of top-1%, famous or semi-famous engineers totally ignoring the economic realities of this tech: people like Carmack, Simon Willison, Mitchell Hashimoto, Steve Yegge, Salvatore Sanfilippo, and others.  They are blind to the suffering these technologies could cause, even if it is temporary.  Sure, it's fun, but weekend projects are irrelevant when people cannot put food on the table.  It's been really something to watch them and a lot of my friends from FAANG totally ignore this side.  It is why identity matters when people make arguments.<p>I also think I'm partially insulated from the likely initial waves of fallout here by nature of a lucky and successful career.  I would love it if the influential engineers I mentioned above stopped acting like high modernists and started taking the social consequences of this technology seriously.  They could change a lot more minds than I could, and through that advocacy for labor they could help ensure we see the happiest possible ending with respect to rolling out LLMs.<p>Unfortunately, I don't really believe labor has much bite anymore, and tech will wake up too late to do anything about it.</p>
]]></description><pubDate>Fri, 13 Mar 2026 20:14:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47369252</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47369252</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47369252</guid></item><item><title><![CDATA[New comment by SirensOfTitan in "John Carmack about open source and anti-AI activists"]]></title><description><![CDATA[
<p>In my mind, AI is making a lot of engineers, including Carmack, seem fairly thoughtless.  At other moments in recent history when technology displaced workers, labor either had to fight some very bloody battles or had stronger organization.  Tech workers are highly atomized now, and if you have to work to live, you're negotiating on your own.<p>It seems like Carmack, like a lot of tech people, has forgotten to ask the questions: who stands to benefit if we devalue the US services economy broadly?  Who stands to lose?  A lot of these people are assuming AI will be a universal good.  It is easy to feel that way when you are independently wealthy and won't feel the fallout.<p>Even a small percentage of layoffs in the US white-collar workforce would crash the economy, because our economy is extremely levered.  This is what happened in 2008: around 7% of mortgages failed, and that caused a cascade of failures we are still feeling today.</p>
]]></description><pubDate>Fri, 13 Mar 2026 19:15:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47368438</link><dc:creator>SirensOfTitan</dc:creator><comments>https://news.ycombinator.com/item?id=47368438</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47368438</guid></item></channel></rss>