<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: hollerith</title><link>https://news.ycombinator.com/user?id=hollerith</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 19 Apr 2026 05:53:43 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=hollerith" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by hollerith in "Why Japan has such good railways"]]></title><description><![CDATA[
<p>Your first sentence might in fact be true, but you've presented no evidence or argument that it is, so all you've done so far is make a cheap dig at America's private-equity industry with nothing to back it up.<p>I fail to see how the topic of this comment thread (namely "why Japan has such good railways") sheds any light on the US PE industry or vice versa. Maybe you can explain the link. (If you can't then your cheap dig is also off-topic.)<p>(And I fail to see how <i>antitrust</i> law in particular might constrain a PE firm in any way.)</p>
]]></description><pubDate>Sat, 18 Apr 2026 14:53:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47816393</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47816393</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47816393</guid></item><item><title><![CDATA[New comment by hollerith in "Why Japan has such good railways"]]></title><description><![CDATA[
<p>The point is that Japan has a well-established private-equity industry [1], so the fact that PE firms haven't ruined Japanese railways suggests that PE firms aren't the universal corrosive solvents you seem to want us to believe they are.<p>[1] <a href="https://flippa.com/blog/pe-funds/japan-private-equity-firms/" rel="nofollow">https://flippa.com/blog/pe-funds/japan-private-equity-firms/</a></p>
]]></description><pubDate>Sat, 18 Apr 2026 14:38:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47816263</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47816263</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47816263</guid></item><item><title><![CDATA[New comment by hollerith in "Japan implements language proficiency requirements for certain visa applicants"]]></title><description><![CDATA[
<p>>Switzerland is a land-locked village with fewer people than <one of the biggest cities in Europe> and entirely dependent on trade and the movement of people and money for all they have, and barely a scrap of a language to call its own.<p>Everything in that quote has always been true though, and my guess is that they never allowed significant numbers of migrants at any time from about 800 (i.e., after the end of the migration period) until whenever they started letting in large numbers of immigrants (probably sometime after 1990) (but not in numbers large enough to suit you, I gather).</p>
]]></description><pubDate>Fri, 17 Apr 2026 01:14:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47801515</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47801515</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47801515</guid></item><item><title><![CDATA[New comment by hollerith in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>Those 2 links certainly satisfy my request. Thank you.<p>My summary of Eliezer's deleted tweet is that Eliezer is pointing out that even if everyone dies except for the handful of people it would take to repopulate the Earth, even that (pretty terrible) outcome would be preferable to the outcome that would almost certainly obtain if the AI enterprise continues on its present course (namely, everyone's dying, with the result that there is no hope of the human population's bouncing back). It was an attempt to get his interlocutor (who was busy worrying about whether an action is "pre-emptive" and therefore bad and worrying about "a collateral damage estimate that they then compare to achievable military gains") to step back and consider the bigger picture.<p>Some people do not consider the survival of the human species to be intrinsically valuable. If 99.999% of us die and the rest of us have to go through many decades of suffering just for the species to survive, those people would consider that outcome to be just as bad as everyone dying (or even slightly worse, since if 100% of us were to die one day without anyone's knowing what hit them, suffering is avoided). I can see how those people might find Eliezer's deleted tweet to be alarming or bizarre.<p>In contrast, Eliezer cares about the human species independently of individual people (although he cares about them, too).<p>Also, just because I notice that outcome A is preferable to outcome B does not mean that I consider it ethical to do anything to bring about outcome A. For example, just because I notice that everyone's life would be improved if my crazy uncle Bob died tomorrow does not mean that I consider it ethical to kill him. And just because Eliezer noticed and pointed out what I just summarized does not mean that Eliezer believes that "it might be ok to kill most of humanity to stop AI" (to repeat the passage I quoted in my first comment).</p>
]]></description><pubDate>Tue, 14 Apr 2026 11:15:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47764065</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47764065</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47764065</guid></item><item><title><![CDATA[New comment by hollerith in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>"Doomer" sounds like we have a mood disorder.</p>
]]></description><pubDate>Tue, 14 Apr 2026 07:33:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762448</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47762448</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762448</guid></item><item><title><![CDATA[New comment by hollerith in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>>I'm willing to bet any amount of money that 99.99% of AI doomers identify with the same extreme end of the political spectrum.<p>Good: a man willing to put his money where his mouth is! However many dollars you put up, I will put up ten times as many. (I.e., I will give you 10:1 odds.) How much do you bet? Who do you suggest as an arbiter in case one is needed?</p>
]]></description><pubDate>Tue, 14 Apr 2026 00:09:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47759594</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47759594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47759594</guid></item><item><title><![CDATA[New comment by hollerith in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>Humble request: do not call us "AI doomers". Most of us would rather be called "AI anti-extinctionists".</p>
]]></description><pubDate>Tue, 14 Apr 2026 00:05:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47759567</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47759567</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47759567</guid></item><item><title><![CDATA[New comment by hollerith in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>Pascal's wager is an argument that even if the probability of God's existence is very small, it is still rational to believe in God and live accordingly. Yudkowsky is the author of a blog post titled "Pascal's <i>mugging</i>", which likewise involves a small probability of an extremely bad outcome, but that blog post is completely silent about the dangerousness of AI research. (The post points out a paradox in decision theory, i.e., the theory that flows from the equation expected_utility = summation over every possible outcome O of U(O) * P(O).)<p>No one to my knowledge has ever argued that AI research should be prohibited because of a very small probability of its turning out extremely badly. This is entirely a straw man set up by people who want AI research to continue. Yudkowsky argues that if AI research is allowed to continue, then the <i>natural expected</i> outcome will be very bad (probably human extinction, but more exotic terrible outcomes are also possible) [1]. There are others who argue that no team or organization anywhere should engage in any program of development that has a 10% or more chance of ending the human race without there first being an extensive public debate followed by a vote in which everyone can participate, and this is their objection to any continuance of AI research.<p>[1] But don't take my word for it: here is Yudkowsky writing in Apr 2022 in 
<a href="https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/" rel="nofollow">https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/</a>: "When Earth’s prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%. That's why I would suggest reframing the problem - especially on an emotional level - to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained."</p>
]]></description><pubDate>Mon, 13 Apr 2026 23:48:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47759457</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47759457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47759457</guid></item><item><title><![CDATA[New comment by hollerith in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>You were trying to get people to view what EY wrote in the time.com article as an encouragement to engage in <i>criminal</i> violence (as opposed to state-sponsored violence such as an airstrike on a data center), for example the firebombing of Sam's home, when in actuality (both before and after the publication of the time.com article) EY has explicitly argued against committing any <i>crimes</i>, particularly violent <i>crimes</i>, against the AI enterprise.<p>Knowing that most readers do not have time to read the entire article, I brought up how many times various strings occur in the article to make it less plausible to the reader that any passage other than the one I quoted could be interpreted as advocating criminal violence. I.e., I brought it up to explain why I quoted the 3 (contiguous) paragraphs I quoted, but not any of the other paragraphs.<p>In finding and selecting those 3 paragraphs, I was doing your work for you, since if this were a perfectly efficient and fair debate, the burden of providing quotes to support <i>your</i> assertion that EY somehow condones the firebombing of Sam's home would fall on you.</p>
]]></description><pubDate>Mon, 13 Apr 2026 22:42:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758879</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47758879</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758879</guid></item><item><title><![CDATA[New comment by hollerith in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>You doubt that Yudkowsky "was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks." Let's let the reader decide.<p>In the article, the string "kill" occurs twice, both times describing what some future AI would do if the AI labs remain free to keep on their present course. The strings "bomb" and "attack" never occur. The strings "strike" and "destroy" occur once each, and this quote contains both occurrences:<p>>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.<p>>Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.<p>>That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’s a chance that maybe Nina will live. The sane people hearing about this for the first time and sensibly saying “maybe we should not” deserve to hear, honestly, what it would take to have that happen. And when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.</p>
]]></description><pubDate>Mon, 13 Apr 2026 19:55:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47757039</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47757039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47757039</guid></item><item><title><![CDATA[New comment by hollerith in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>>Eliezer Yudkowsky has gone so far as to say that it might be ok to kill most of humanity (excepting a "viable reproduction population") to stop AI<p>That doesn't sound like a non-misleading summary of anything he would say. Do you have a quote or a link?</p>
]]></description><pubDate>Mon, 13 Apr 2026 19:48:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47756982</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47756982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47756982</guid></item><item><title><![CDATA[New comment by hollerith in "Sam Altman's home targeted in second attack"]]></title><description><![CDATA[
<p>The Zizians had only a tangential relationship to the people who believe that AI "progress" should be prohibited. They were banned from events run by the Berkeley rationalists well <i>before</i> they started killing people, and the ideological reasons they gave each other to justify the killings were trans rights and farm-animal welfare, not slowing down AI "progress".<p><i>How many</i> people believe continued AI "progress" would be so dangerous that it should be prohibited? 136,513 people signed a statement to that effect:<p><a href="https://superintelligence-statement.org/" rel="nofollow">https://superintelligence-statement.org/</a><p>The name of the man who threw the Molotov cocktail is Daniel Alejandro Moreno-Gama, and "Daniel Moreno" is one of the signatures on the statement. I concede that his motivation almost certainly was to try to slow down AI "progress".</p>
]]></description><pubDate>Mon, 13 Apr 2026 15:27:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47753408</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47753408</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47753408</guid></item><item><title><![CDATA[New comment by hollerith in "Eternity in six hours: Intergalactic spreading of intelligent life (2013)"]]></title><description><![CDATA[
<p>"Far-mode thinking".<p><a href="https://www.lesswrong.com/w/near-far-thinking" rel="nofollow">https://www.lesswrong.com/w/near-far-thinking</a></p>
]]></description><pubDate>Sun, 12 Apr 2026 16:53:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47741859</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47741859</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47741859</guid></item><item><title><![CDATA[New comment by hollerith in "AI Will Be Met with Violence, and Nothing Good Will Come of It"]]></title><description><![CDATA[
<p>Just keep telling everyone that and hope they keep believing you.</p>
]]></description><pubDate>Sun, 12 Apr 2026 14:27:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47740095</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47740095</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47740095</guid></item><item><title><![CDATA[New comment by hollerith in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>This second comment is still pretty unhinged.</p>
]]></description><pubDate>Sat, 11 Apr 2026 21:58:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47734387</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47734387</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47734387</guid></item><item><title><![CDATA[New comment by hollerith in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>That is unhinged.</p>
]]></description><pubDate>Sat, 11 Apr 2026 18:49:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47733019</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47733019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47733019</guid></item><item><title><![CDATA[New comment by hollerith in "Assessing Claude Mythos Preview's cybersecurity capabilities"]]></title><description><![CDATA[
<p>Humanity stopped germ-line human genetic engineering (possible since the early 1970s) and humanity can (and should) stop OpenAI, Anthropic, etc.<p>Datacenters that use literal gigawatts of electricity are not exactly easy to conceal from the authorities.</p>
]]></description><pubDate>Thu, 09 Apr 2026 01:25:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698284</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47698284</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698284</guid></item><item><title><![CDATA[New comment by hollerith in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>>the company actually is ethical and safety conscious everywhere<p>Anthropic is emphatically not <i>safe</i>. None of the AI labs with customers (i.e., excluding a few small nonprofits whose revenue comes from donations) are anything like <i>safe</i> -- because of extinction risk. The famous positive regard that Anthropic employees have for their organization's mission means almost nothing because there have been hundreds of quite destructive cults and political parties whose members believed that theirs was the most ethical and benign organization ever.<p>The best thing you can say about Anthropic is that if you have to support some AI lab by becoming a customer, investor or employee, it is slightly less dangerous for the world to support Anthropic than OpenAI, although IMHO (and I admit I am in a minority on this among extinction-risk activists) it is slightly less dangerous to support Google DeepMind or Mistral than Anthropic.<p>All four organizations I mentioned should be shut down tomorrow with their assets returned to shareholders.<p>The current crop of services provided by the leading AI labs is IMHO positive on net in its effect on people and society, but the leading AI labs are spending a large fraction of the 100s of billions of dollars they've received from investors on creating more powerful models, and they might succeed in their goal of creating models that are much more powerful than the ones they have now, which is when most of the danger would manifest.<p>The leaders of all of the leading AI labs have the ambition of completely transforming society and the world through AI.</p>
]]></description><pubDate>Tue, 07 Apr 2026 16:08:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47677475</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47677475</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47677475</guid></item><item><title><![CDATA[New comment by hollerith in "US forces locate and evacuate downed airman in Iran"]]></title><description><![CDATA[
<p>>Without Israel, all Western civilisation is toast.<p>Why is that?</p>
]]></description><pubDate>Sun, 05 Apr 2026 05:11:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47646291</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47646291</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47646291</guid></item><item><title><![CDATA[New comment by hollerith in "Open source CAD in the browser (Solvespace)"]]></title><description><![CDATA[
<p>That is more of an indictment of the US than it is of Mr Walker. Maybe I should run away to Switzerland, too.</p>
]]></description><pubDate>Wed, 01 Apr 2026 15:02:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47601869</link><dc:creator>hollerith</dc:creator><comments>https://news.ycombinator.com/item?id=47601869</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47601869</guid></item></channel></rss>