<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: kypro</title><link>https://news.ycombinator.com/user?id=kypro</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 05:53:43 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=kypro" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by kypro in "Engineering departments from China laid off at Red Hat"]]></title><description><![CDATA[
<p>Sorry to hear this.<p>It's hard one day feeling like you're a valued member of a team, then the next day finding out you're actually seen as completely dispensable.<p>Best of luck with the job hunt.</p>
]]></description><pubDate>Fri, 10 Apr 2026 00:02:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711900</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47711900</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711900</guid></item><item><title><![CDATA[New comment by kypro in "Ask HN: What would you do with an AI model capable of continuous learning?"]]></title><description><![CDATA[
<p>Understand that it's not the model but the algorithm that's valuable?<p>I wouldn't have the scale or compute to leverage a model capable of continuous learning, but there would be a company or two willing to give me a few billion for the secret sauce.</p>
]]></description><pubDate>Thu, 09 Apr 2026 23:50:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711815</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47711815</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711815</guid></item><item><title><![CDATA[New comment by kypro in "OpenAI puts Stargate UK on ice, blames energy costs and red tape"]]></title><description><![CDATA[
<p>Arguably this aligns with their decision to discontinue Sora.<p>If we assume they are trying to rapidly free up compute, then the UK is a pretty stupid place to be building out new datacenters... Any project here overruns both in time and budget – if it even goes ahead at all.<p>Then you have energy costs, which make the UK one of the most expensive places in the world to build a datacenter. If you want to bring compute online fast and at a competitive price, you're far better off building somewhere else in Europe, like Norway.</p>
]]></description><pubDate>Thu, 09 Apr 2026 20:34:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709569</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47709569</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709569</guid></item><item><title><![CDATA[New comment by kypro in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>> The other thing you're failing to look at is momentum and majority opinion. When you look at that... nothings going to change, it's like asking an addict to stop using drugs. The end game of AI will play out, that is the most probably outcome. Better to prepare for the end game.<p>Perhaps I didn't sound pessimistic enough lol? I completely agree with what you're saying here. This is happening whether we like it or not.<p>On global warming I also agree you're not going to get every nation to coordinate, but at least global warming has a forcing function somewhere down the line, since there's only a limited amount of fossil fuels in the ground that make economic sense to extract. AI, on the other hand, really has no clear off-ramp; at every point along the way it makes sense to invest more in AI. I think at best all we can expect to do is slow progress, which might just be enough to ensure our generation and the next have a somewhat normal life.<p>My p(doom) is near 99% for a reason... I think AI progression is basically a certainty – maybe a 1/200 chance that no significant progress is made from here over the next 50 years. And I also think that significant progress from here more or less guarantees a very bad outcome for humanity. That's a harder one to model, but I think along almost all axes you can assume there are about 50 very bad outcomes for every good outcome – no cancer cure without super viruses, no robotics revolution without killer drones, no mass automation without mass job loss, which destabilises the global order and democratic systems of governance...<p>I am prepping and have been for years at this point... I'm an OG AI doomer. I've been having literal nightmares about this moment for decades, and right now I'm having nightmares almost every night. It scares me because I know all I can do is delay my fate and that of those I love.</p>
]]></description><pubDate>Wed, 08 Apr 2026 15:12:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47691341</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47691341</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47691341</guid></item><item><title><![CDATA[New comment by kypro in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>I would acknowledge that. I don't think these things are remotely possible any time soon with current rates of progress.<p>However, I think people tend to fail to appreciate the product of exponential trends, so the question in my mind is more whether or not you believe AI will unlock an exponential increase in the rate of progress and understanding. Extremely complex is still finite complexity at the end of the day.<p>Maybe AI won't significantly increase the rate of progress across all scientific fields. I am fairly confident it will significantly increase the rate of progress in at least some, though, and it seems likely to me that biological processes will be much easier for us to model and predict with AI. I'm much less sure about progress in domains like physics and robotics.</p>
]]></description><pubDate>Wed, 08 Apr 2026 14:06:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47690436</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47690436</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47690436</guid></item><item><title><![CDATA[New comment by kypro in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>Controversially, I'd argue that there is likely an optimal and stable level of technological advancement which we would be wise not to cross. That said, we are human, so we will – I'd just rather it happened in a couple of hundred years rather than a decade or two.<p>For example, it's hard to imagine an AI which gives us the capability to cure cancer but doesn't give us the capability to create targeted super viruses.<p>Nick Bostrom's Vulnerable World Hypothesis more or less describes my own concerns,
<a href="https://nickbostrom.com/papers/vulnerable.pdf" rel="nofollow">https://nickbostrom.com/papers/vulnerable.pdf</a><p>At some point we should probably try to resist the urge to pick balls out of the urn as we may eventually pull out a ball we don't want.</p>
]]></description><pubDate>Wed, 08 Apr 2026 00:15:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47683030</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47683030</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47683030</guid></item><item><title><![CDATA[New comment by kypro in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>What sources would you even be looking for? I think you're asking the wrong question. It's not like I'm arguing a scientific theory which can be backed by data and experimentation. I can only provide the reasoning for why I believe what I believe.<p>Firstly, I'd propose that all technological advances are a product of time and intelligence, and that given unlimited time and intelligence, the discovery and application of new technologies is fundamentally limited only by resources and physics.<p>There are many technologies which might plausibly exist, but which we have not yet discovered because we only have so much intelligence and have only had so much time.<p>With more intelligence we should assume the discovery of new technologies will be much quicker – perhaps exponential, if we consider the current rate of technology discovery and the exponential progression of AI.<p>There are lots of technologies we have today which would seem like magic to people in the past. Future technologies likely exist which would make us feel this way were they available today.<p>While it's hard to predict specifically which technologies could exist soon in a world with ASI, if we assume something is within the bounds of available resources and physics, we should assume it's at least plausible.<p>Examples:<p>- Mind control – with enough knowledge about how the brain works, you could likely devise sensory or electromagnetic input that would manipulate the functioning of the brain to either strongly influence or effectively dictate its output.<p>- Mind simulation – again, with enough knowledge of the brain, you could take a snapshot of someone's mind with an advanced electromagnetic device and simulate it, torturing them in parallel to reveal any secret, or just because you feel like doing it.<p>- Advanced torture – with enough knowledge of human biology, death becomes optional in the future. New methods of torture which would previously have killed the victim become plausible. States like North Korea could then force humans to work for hundreds of years in incomprehensible agony for opposing the state.<p>- Advanced biological weapons – with enough knowledge of virology, sophisticated tailor-made viruses replace nerve agents as Russia's weapon of choice for killing those accused of treason. These viruses remain dormant for months, infecting the host and people genetically similar to them (parents, children, grandchildren). After months, the virus rapidly kills its hosts in horrific ways.<p>I could go on; you just need to use your imagination. I'm not arguing any of the above are likely to be discovered, just that it would be very naive to think AI will stop at a cure for cancer. If it gives us a cure for cancer, it will give us lots of things we might wish it didn't.</p>
]]></description><pubDate>Wed, 08 Apr 2026 00:04:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682937</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47682937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682937</guid></item><item><title><![CDATA[New comment by kypro in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>> The government only has as much power as they are given and can defend, and the only way I could see that happening is via automated weapons controlled by a few- which at this point aren't enough to stop everyone. What army is going to purge their own people? Most humans aren't psychopaths.<p>I think you're right for the immediate future.<p>I suspect that while we're still employing large numbers of humans to fight wars and to maintain peace on the streets, it would be difficult for a government to implement deeply harmful policies without risking a credible revolt.<p>However, we should remember the military is probably one of the first places human labour will be largely mechanised.<p>Similarly, maintaining order in the future will probably be less about recruiting human police officers and more about surveillance and data. Although I suppose the good news there is that the US is somewhat of an outlier in resisting this trend.<p>But regardless, the trend is ultimately the same... If we assume that AI and robotics will reach a point where most humans are unable to find productive work, and that we will therefore need UBI, then we should also assume that the need for humans in the military and police will be limited. Or to put it another way: either UBI isn't needed and this isn't a problem, or it is and this is a problem.<p>I also don't think democracy would collapse immediately either way, but I'd be pretty confident that in a world where fewer than 10% of people are in employment and 99%+ of the wealth is being created by the government or a handful of companies, it would be extremely hard to avoid corruption over the span of decades. Arguably, increasing wealth concentration in the US is already corrupting democratic processes today; this can only worsen as AI continues to exacerbate the trend.</p>
]]></description><pubDate>Tue, 07 Apr 2026 23:20:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682583</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47682583</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682583</guid></item><item><title><![CDATA[New comment by kypro in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>This is a theory I can't support well beyond hypothesising about what a post-employment democracy might look like, but I strongly suspect democracy doesn't work in a world where voters neither hold any significant collective might nor produce any significant wealth.<p>Democracies work because people collectively have power; in previous centuries that was partly collective physical might, but in recent years it's more the economic power people collectively hold.<p>In a world in which a handful of companies are generating all of the wealth, incentives change, and we should therefore question why a government would care about the unemployed masses over the interests of the companies providing all of the wealth.<p>For example, what if the AI companies say, "don't tax us 95% of our profits, tax us 10% or we'll switch off all of our services for a few months and let everyone starve – also, if you do this we'll make you all wealthy beyond your wildest dreams"?<p>What does a government in this situation actually do?<p>Perhaps we'd hope that the government would be outraged and take ownership of the AI companies which threatened to strike against it, but then you really just shift the problem... Once the government is generating the vast majority of the wealth in society, why would it continue to care about your vote?<p>You kind of create a new "oil curse", but instead of oil profits being the reason the government doesn't care about you, now it's the wealth generated by AI.<p>At the moment, while it doesn't always seem this way, if a government does something stupid companies will stop investing in that nation, people will lose their jobs, the economy will begin to enter recession, and the government will probably have to pivot.<p>But when private investment, job losses and economic consequences are no longer a constraining factor, governments can probably just do what they like without having to worry much about the consequences...<p>I might be wrong, but it's something I don't hear people talking about enough when they discuss the plausibility of a post-employment UBI economy. I suspect it almost guarantees corruption and authoritarianism.</p>
]]></description><pubDate>Tue, 07 Apr 2026 21:33:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47681640</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47681640</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47681640</guid></item><item><title><![CDATA[New comment by kypro in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>I assure you it will soon become very clear that mass job losses are one of the least concerning side effects of developing the magic "everything that can plausibly be done within the constraints of physics is now possible" machine.<p>We're opening a can of worms whose horrors I don't think most people have the imagination to understand.</p>
]]></description><pubDate>Tue, 07 Apr 2026 21:03:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47681342</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47681342</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47681342</guid></item><item><title><![CDATA[New comment by kypro in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>While we still have months to a year or two left, I will once again remind people that it's not too late to change our current trajectory.<p>You are not "anti-progress" for not wanting this future we are building, just as you are not "anti-progress" for not wanting your kids to grow up on smartphones and social media.<p>We should remember that not all technology is net-good for humanity, and this technology in particular poses significant risks to us as a global civilisation, and frankly as humans with aspirations for what our future, and that of our kids, should be.<p>Increasingly, from here, we have to assume some absurd things for this experiment we are running to go well.<p>Specifically, we must assume one of the following:<p>- AI models, regardless of future advancements, will always be fundamentally incapable of causing significant real-world harms like hacking into key life-sustaining infrastructure such as power plants or developing super viruses.<p>- They are or will be capable of harms, but SOTA AI labs perfectly align all of them so that they only hack into "the bad guys'" power plants and kill "the bad guys".<p>- They are capable of harms and cannot be reliably aligned, but Anthropic et al. restrict access to the models enough that only select governments and individuals can use them, these individuals can all be trusted, and the models never leak.<p>- They are capable of harms, cannot be reliably aligned, but the models never seek to break out of their sandbox and do things the select trusted governments and individuals don't want.<p>I'm not sure I'm willing to bet on any of the above personally. It sounds radical right now, but I think we should consider nuking any data centers which continue to allow the training of these AI models rather than continue to play a game of Russian roulette.<p>If you disagree, please understand that by the time you realise I'm right it will be too late for you and your family. Your fates at that point will be in the hands of the goodwill of the AI models, and of the governments/individuals who have access to them. For now, you can still say, "no, this is quite enough".<p>This sounds doomer and extreme, but if you play out the paths in your head from here you will find very few end in a good result. Perhaps if we're lucky we will all just be more or less unemployable and fully dependent on private companies and the government for our incomes.</p>
]]></description><pubDate>Tue, 07 Apr 2026 20:52:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47681225</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47681225</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47681225</guid></item><item><title><![CDATA[New comment by kypro in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>Don't worry – if you're lucky they might decide to redistribute some of their profits to you when you're unemployed =)<p>Of course this assumes you're in the US, and that further AI advancements either lack the capabilities required to be a threat to humanity, or if they do, the AI stays in the hands of "the good guys" and remains aligned.</p>
]]></description><pubDate>Tue, 07 Apr 2026 20:22:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47680868</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47680868</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47680868</guid></item><item><title><![CDATA[New comment by kypro in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>Cool on not publicly releasing it. I would assume they've also not connected it to the internet yet?<p>If they have, I guess humanity should just keep our collective fingers crossed that they haven't created a model quite capable of escaping yet – or, if it is, and may have escaped, let's hope it has no goals of its own that are incompatible with ours.<p>Also, maybe let's not continue running this experiment to see how far we can push things before it blows up in our face?</p>
]]></description><pubDate>Tue, 07 Apr 2026 20:09:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47680716</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47680716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47680716</guid></item><item><title><![CDATA[New comment by kypro in "Ask HN: How Do You Relax?"]]></title><description><![CDATA[
<p>Go outside and spend time with animals.<p>I find watching and interacting with animals brings me back down to Earth. If I could talk to them, I know all of the things I worry about would seem so strange to them. They just live in the moment, and when I'm with them I live in the moment through them.<p>With other things I do – walking, reading, cooking, sleeping, etc. – I find my mind is still in worry mode.<p>Something about observing animals, thinking about what they're thinking and interacting with them turns that off for me. It's temporary, but it's nice.</p>
]]></description><pubDate>Mon, 06 Apr 2026 23:41:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47668877</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47668877</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47668877</guid></item><item><title><![CDATA[New comment by kypro in "The cult of vibe coding is dogfooding run amok"]]></title><description><![CDATA[
<p>The argument against this is that human coders are also non-deterministic, so does it really matter whether it's a human or an AI agent producing the code – assuming the AI agent is capable of producing human-quality code or better?<p>I agree it's not a layer of abstraction in the traditional sense though. AI isn't an abstraction over existing code; it's a new way to produce code. It's an "abstraction layer" in the same way an IDE is an abstraction layer.</p>
]]></description><pubDate>Mon, 06 Apr 2026 19:12:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47665462</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47665462</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47665462</guid></item><item><title><![CDATA[New comment by kypro in "The cult of vibe coding is insane"]]></title><description><![CDATA[
<p>Hardly. Claude Code is basically just a wrapper around an LLM with a CLI.<p>Obviously it does some fairly smart stuff under the hood, but it's not exactly comparable to a large software project.<p>But to your point, that doesn't mean you can't vibe code some poorly built product and sell it. But people have always been able to sell poorly built software projects. They can just do it a bit quicker now.</p>
]]></description><pubDate>Mon, 06 Apr 2026 19:06:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47665360</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47665360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47665360</guid></item><item><title><![CDATA[New comment by kypro in "UK intelligence censored report on global warming and homeland security"]]></title><description><![CDATA[
<p>Depends where they're coming from, I guess. There's basically nowhere on Earth where people are as liberal as in Europe.<p>No liberal would want millions of people coming from Afghanistan, for example.</p>
]]></description><pubDate>Sun, 05 Apr 2026 22:27:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47654595</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47654595</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47654595</guid></item><item><title><![CDATA[New comment by kypro in "UK intelligence censored report on global warming and homeland security"]]></title><description><![CDATA[
<p>I think the fundamental problem is the conflict between climate, family and lifestyle vs corporate interests and economic growth.<p>Ideally, we should want populations that are either not growing or slowly shrinking, but we can't have this because multinational corporations don't want to invest in countries with a declining consumer base. We must therefore sustain population growth indefinitely.<p>Similarly, humans would presumably prefer more space – perhaps a home with a few bedrooms and a decent-sized garden where they can grow a little food and the kids can play in the summer. But we can't have this because it's more economically productive to increase population density, such that people increasingly live in small flats within high-rise buildings with no gardens and little natural light.<p>And I get it, money is nice... People will trade a lot of things for more money, but the government ideally should not encourage this.<p>Ideally the government should be encouraging people to have a home with a garden. To have a couple of kids. To grow some of their own food. To work in their local community, and therefore obtain an education which will help them be productive members of their community – rather than, say, taking a punt at studying journalism at university and hoping they'll get a job in some city 200 miles from home and their family.<p>Just speaking personally, the city I grew up in in the UK has become hell to live in over the last couple of decades. It's almost impossible to drive around today because of the densification which has taken place. All of the local fields I played on as a kid have been turned into cheap flats, which has transformed the semi-rural area where I used to live into an ugly, anti-human concrete jungle. And because of the number of people now living around here, no one seems to know anyone anymore – I walk outside my house and it feels like there are random people everywhere, and I've noticed many people around me don't even seem to speak English anymore.<p>It's such a strange thing we are doing... It really makes no sense for us to want to live like this.</p>
]]></description><pubDate>Sun, 05 Apr 2026 22:26:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47654577</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47654577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47654577</guid></item><item><title><![CDATA[New comment by kypro in "Ask HN: Where are all the disruptive software that AI promised?"]]></title><description><![CDATA[
<p>Code monkeys who spent 90%+ of their day typing code are probably 10x faster today because of AI.<p>I'm not convinced most software developers ever spent the majority of their time coding though... A huge part of the job is taking messy user requirements and converting them into technical requirements, and perhaps you could let AI do that for a while, but after some time it will blow up in your face if you don't have any opinion on the technical requirements. Software developers also spend a lot of their time thinking about how to architect solutions, considering different technologies and libraries to use, thinking about how to model data, etc...<p>If I were to guess, before AI the average software developer spent 50% of their day typing code into an editor, so even if AI made this 10-20x faster, that still wouldn't be a 2x in output unless they're also faster at the other parts of the job too... And maybe they are a bit... So maybe the average developer is 2x, or even 3x, more productive today with AI. But 10x is so absurd that unless you were a junior developer building Wordpress themes or something, I have no idea how you could be working at anywhere remotely close to 10x your previous velocity.<p>I mean, I spent 4 of my 8 work hours in meetings on a single day last week; that time certainly didn't go 10x faster because of AI... Is my career as a SWE some extreme outlier, or do other people also have meetings and do other things in their day that don't involve producing code?</p>
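<p>The arithmetic above is just Amdahl's law: when only a fraction of the job is sped up, the overall gain is bounded by the part that isn't. A minimal sketch, using the same illustrative numbers as my guess (50% of the day coding, 10x faster coding) rather than any measured data:</p>

```python
def overall_speedup(coding_fraction: float, coding_speedup: float) -> float:
    """Amdahl's law: overall speedup when only `coding_fraction`
    of the job gets `coding_speedup` times faster."""
    return 1.0 / ((1.0 - coding_fraction) + coding_fraction / coding_speedup)

# 50% of the day typing code, AI makes that 10x faster:
print(round(overall_speedup(0.5, 10), 2))   # ~1.82x overall, not 10x

# Even with infinite coding speedup, 50% non-coding work caps you at 2x:
print(round(overall_speedup(0.5, 1000), 2))
```

<p>Note the ceiling: with half the job untouched, no amount of coding speedup can push overall output past 2x.</p>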
]]></description><pubDate>Sun, 05 Apr 2026 22:03:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47654377</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47654377</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47654377</guid></item><item><title><![CDATA[New comment by kypro in "Oracle files H-1B visa petitions amid mass layoffs"]]></title><description><![CDATA[
<p>> Federal data shows the tech giant filed for over 3,000 foreign worker visas as it cuts thousands of American jobs.<p>Just trying to understand what context you feel is relevant here...<p>Even if Oracle is also firing people in India, the idea that no American can do these jobs in the US should be challenged.<p>Let's assume they do need extremely specialised skills for these roles and are struggling to find those skills in a highly educated country like the US, so need to look for employees in countries like India. The question you should then be asking is: if they couldn't hire from abroad, what would they do instead?<p>Perhaps they would need to give someone who recently graduated a chance? Perhaps they would try to train people working in adjacent fields at Oracle? Maybe they would increase salaries so Americans with these skills employed elsewhere would switch jobs?<p>So can you steel-man why I should be in favour of companies hiring abroad, given there are clearly smart and educated people in the US who are looking for work or might be tempted to work for Oracle if it offered better salaries or training?<p>Can you explain the advantage to US workers in allowing this?</p>
]]></description><pubDate>Fri, 03 Apr 2026 21:52:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47632811</link><dc:creator>kypro</dc:creator><comments>https://news.ycombinator.com/item?id=47632811</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47632811</guid></item></channel></rss>