<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: GMoromisato</title><link>https://news.ycombinator.com/user?id=GMoromisato</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 02:16:17 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=GMoromisato" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by GMoromisato in "Artemis II crew take “spectacular” image of Earth"]]></title><description><![CDATA[
<p>I agree with "don't talk to those people". If they don't believe this picture, why would they believe a weather satellite picture?</p>
]]></description><pubDate>Sat, 04 Apr 2026 07:17:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47636706</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47636706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47636706</guid></item><item><title><![CDATA[New comment by GMoromisato in "Marc Andreessen is wrong about introspection"]]></title><description><![CDATA[
<p>I think introspection can sometimes turn into rumination: obsessively remembering and reliving past mistakes. It is the latter that is harmful to anyone, and particularly to founders.<p>This is especially true if you believe your mistakes are due to an internal flaw, because then you can't even learn from them. If you believe you are too damaged to be a good leader, then you will never lead.<p>I confess that I'm pretty good at letting go of my own mistakes. I can somehow learn from them without blaming myself for making them. That means I'm able to make a lot of mistakes without taking emotional damage. And that lets me try new things without fear.<p>Does that mean I'm less introspective than the average person? I don't think so, but I don't know.</p>
]]></description><pubDate>Fri, 03 Apr 2026 16:29:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47628723</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47628723</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47628723</guid></item><item><title><![CDATA[New comment by GMoromisato in "Live: Artemis II Launch Day Updates"]]></title><description><![CDATA[
<p>Agreed! I yelled at the screen when I saw that they cut away.<p>I also loved the shot of stage separation, but they cut away from that way too soon also!</p>
]]></description><pubDate>Thu, 02 Apr 2026 00:34:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47608564</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47608564</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47608564</guid></item><item><title><![CDATA[New comment by GMoromisato in "Artemis II lifts off: four astronauts begin 10-day lunar mission"]]></title><description><![CDATA[
<p>> the agency said it was confident that a change to the re-entry trajectory would be more than adequate to offset any spalling issues. Somewhat confusingly, they also announced their intention to switch to a new heat shield design, starting with Artemis III.<p>This is not confusing in the least. Engineers don't talk about safety in binary terms. It's not "safe" vs. "not safe". Instead, it's all about the probability of a bad outcome. At NASA, they compute the probability of Loss of Crew (LoC) and the probability of Loss of Mission (LoM).<p>For Artemis II, a change to the re-entry trajectory brings the LoC/LoM back to an acceptable level. For Artemis III, with a new shield design, they can get to the same LoC/LoM with a different trajectory (which gives them other benefits).<p>Stop thinking in binary terms. Everything is a probability.</p>
]]></description><pubDate>Thu, 02 Apr 2026 00:25:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47608502</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47608502</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47608502</guid></item><item><title><![CDATA[New comment by GMoromisato in "Live: Artemis II Launch Day Updates"]]></title><description><![CDATA[
<p>I get that not everyone, even on HN, thinks crewed spaceflight is worth doing. And I certainly get that launching people to the moon doesn't make up for the latest crap thing Trump is doing to the world.<p>But I really think that space exploration could be the thing that unites everyone, and the more unified we are--the more we feel like we have a common purpose--the easier it will be to solve our other problems.<p>I for one pledge to support space exploration (crewed or uncrewed) regardless of who is running the government. I will cheer Artemis II even though I voted against Trump. I will cheer if/when China sends people to the moon. I will even cheer if Russia does something cool in space.</p>
]]></description><pubDate>Thu, 02 Apr 2026 00:18:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47608447</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47608447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47608447</guid></item><item><title><![CDATA[New comment by GMoromisato in "Artemis II is not safe to fly"]]></title><description><![CDATA[
<p>Maybe. What was the probability of Loss of Crew during Apollo? There were 9 crewed missions and 1 almost killed its crew (I will omit Apollo 1 for now). I could argue that Apollo had a 1 in 20 chance of killing a crew. Indeed, that was one reason given for cancelling the program.<p>The first Shuttle launch probably had a 1 in 4 chance of killing its crew. It was the first launch of an extremely complicated system and they sent it with a crew of two. Can you imagine NASA doing that today?<p>In a news conference last week, a NASA program manager estimated the Loss of Mission chance for Artemis II at between 1 in 2 and 1 in 50. They said, historically, a new rocket has a 1 in 2 chance of failure, but they learned much from Artemis I, so it's probably better than that. [Of course, that's Loss of Mission instead of Loss of Crew.]<p>My guess is NASA and the astronauts are comfortable with a 1 in 100 chance of Loss of Crew.</p>
]]></description><pubDate>Tue, 31 Mar 2026 06:27:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47583471</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47583471</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47583471</guid></item><item><title><![CDATA[New comment by GMoromisato in "Artemis II is not safe to fly"]]></title><description><![CDATA[
<p>There were a lot of mistakes with Challenger and Columbia--I totally agree. But I don't think it was money. It's not like the NASA administrator gets a bonus when a rocket launches (unlike some CEOs, maybe).<p>I think the problem with both Challenger and Columbia was that there were so many possible problems (turbine blade cracks, tiles falling off, etc.) that managers and even engineers got used to off-nominal conditions. This is the "normalization of deviance" that Diane Vaughan talked about.<p>Is that what's going on with the Orion heat shield? I don't think so. I think NASA engineers are well aware of the risks and have done the math to convince themselves that this is safe.</p>
]]></description><pubDate>Tue, 31 Mar 2026 06:15:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47583382</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47583382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47583382</guid></item><item><title><![CDATA[New comment by GMoromisato in "Artemis II is not safe to fly"]]></title><description><![CDATA[
<p>This is a more balanced take, in my opinion:<p><a href="https://arstechnica.com/space/2026/01/nasa-chief-reviews-orion-heat-shield-expresses-full-confidence-in-it-for-artemis-ii/" rel="nofollow">https://arstechnica.com/space/2026/01/nasa-chief-reviews-ori...</a><p>Camarda is an outlier. The engineers at NASA believe it is safe. The astronauts believe it is safe. Former astronaut Danny Olivas was initially skeptical of the heat shield but came around.<p>And note that the OP believes it is likely (maybe very likely) that the heat shield will work fine. It's hard for me to reconcile "It is likely that Artemis II will land safely" with "Artemis II is Not Safe to Fly", unless maybe getting clicks is involved.<p>Regardless, this is not a Challenger or Columbia situation. In both Challenger and Columbia, nobody bothered to analyze the problem because they didn't think there was a problem. That's the difference, in my opinion. NASA is taking this seriously and has analyzed the problem deeply.<p>They are not YOLO'ing this mission, and it's somewhat insulting that people think they are.</p>
]]></description><pubDate>Tue, 31 Mar 2026 05:48:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47583210</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47583210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47583210</guid></item><item><title><![CDATA[New comment by GMoromisato in "Stop Publishing Garbage Data, It's Embarrassing"]]></title><description><![CDATA[
<p>100%. There is even signal in the pattern of errors. If you remove some errors but not others, you lose signal.</p>
]]></description><pubDate>Sun, 29 Mar 2026 17:22:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47565148</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47565148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47565148</guid></item><item><title><![CDATA[New comment by GMoromisato in "Twice this week, I have come across embarassingly bad data"]]></title><description><![CDATA[
<p>Deleting the row loses some information, such as the existence of that gas station.<p>A better solution is to add a field to indicate that "the row looks funny to the person who published the data". Which, I guess, is useful to someone?<p>But deleting data or changing data is effectively corrupting source data, and now I can't trust it.</p>
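<p>A minimal sketch of the flag-instead-of-delete idea. The station names, the field names, and the "US-only dataset, so latitude should be positive" check are all my own illustrative assumptions, not from any real dataset:</p>

```python
# Toy gas-station rows; the second row's lat/lon appear swapped.
stations = [
    {"name": "Shell #12", "lat": 40.71, "lon": -74.00},
    {"name": "BP #7", "lat": -74.01, "lon": 40.70},
]

# Instead of deleting suspicious rows, annotate them and keep the source data.
# Assumption for this sketch: a US-only dataset, where latitude should be
# positive and below 90 degrees.
for row in stations:
    row["looks_suspect"] = not (0 < row["lat"] < 90)

flagged = [r["name"] for r in stations if r["looks_suspect"]]
print(flagged)  # the BP row is flagged, but no row is deleted
```

<p>The point is that the consumer still sees every source row, plus an honest signal about which ones the publisher distrusts.</p>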
]]></description><pubDate>Sun, 29 Mar 2026 17:19:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47565119</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47565119</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47565119</guid></item><item><title><![CDATA[New comment by GMoromisato in "Twice this week, I have come across embarassingly bad data"]]></title><description><![CDATA[
<p>Agreed--and maybe they should have fixed it.<p>But sometimes the "provenance" of the data is important. I want to know whether I'm getting data straight from some source (even with errors) rather than having some intermediary make fixes that I don't know about.<p>For example, in the case where maybe they flipped the latitude and longitude, I don't want them to just automatically "fix" the data (especially not without disclosing that).<p>What they need to do is verify the outliers with the original gas station and fix the data from the source. But that's much more expensive.</p>
]]></description><pubDate>Sun, 29 Mar 2026 17:02:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47564957</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47564957</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47564957</guid></item><item><title><![CDATA[New comment by GMoromisato in "Twice this week, I have come across embarassingly bad data"]]></title><description><![CDATA[
<p>It's not obvious to me that LLMs can't be made reliable.</p>
]]></description><pubDate>Sun, 29 Mar 2026 16:46:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47564794</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47564794</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47564794</guid></item><item><title><![CDATA[New comment by GMoromisato in "Twice this week, I have come across embarassingly bad data"]]></title><description><![CDATA[
<p>Clean data is expensive--as in, it takes real human labor to obtain clean data.<p>One problem is that you can't just focus on outliers. Whatever pattern-matching you use to spot outliers will end up introducing a bias in the data. You need to check all the data, not just the data that "looks wrong". And that's expensive.<p>In clinical drug trials, we have the concept of SDV--Source Data Verification. Someone checks every data point against the official source record, usually a medical chart. We track the % of data points that have been verified. For important data (e.g., Adverse Events), the goal is to get SDV to 100%.<p>As you can imagine, this is expensive.<p>Will LLMs help to make this cheaper? I don't know, but if we can give this tedious, detail-oriented work to a machine, I would love it.</p>
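<p>A minimal sketch of the SDV-percentage bookkeeping described above. The field names and records here are illustrative assumptions, not from any real clinical data system:</p>

```python
# Each data point records whether it has been verified against the
# official source record (e.g., a medical chart).
# Field names are illustrative, not from any real EDC system.
data_points = [
    {"field": "adverse_event_1", "critical": True,  "sdv_done": True},
    {"field": "adverse_event_2", "critical": True,  "sdv_done": True},
    {"field": "visit_weight",    "critical": False, "sdv_done": False},
    {"field": "visit_height",    "critical": False, "sdv_done": True},
]

def sdv_rate(points):
    """Fraction of data points that have been source-verified."""
    return sum(p["sdv_done"] for p in points) / len(points)

critical = [p for p in data_points if p["critical"]]
print(f"overall SDV: {sdv_rate(data_points):.0%}")
print(f"critical SDV: {sdv_rate(critical):.0%}")  # goal for AEs is 100%
```

<p>Tracking the two rates separately reflects the point above: important data (like Adverse Events) is driven to 100% SDV even when the overall rate is lower.</p>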
]]></description><pubDate>Sun, 29 Mar 2026 16:30:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47564627</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47564627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47564627</guid></item><item><title><![CDATA[AI Perfected Chess. Humans Made It Unpredictable Again]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.bloomberg.com/news/articles/2026-03-27/ai-changed-chess-grandmasters-now-win-with-unpredictable-moves">https://www.bloomberg.com/news/articles/2026-03-27/ai-changed-chess-grandmasters-now-win-with-unpredictable-moves</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47558531">https://news.ycombinator.com/item?id=47558531</a></p>
<p>Points: 48</p>
<p># Comments: 50</p>
]]></description><pubDate>Sat, 28 Mar 2026 22:06:59 +0000</pubDate><link>https://www.bloomberg.com/news/articles/2026-03-27/ai-changed-chess-grandmasters-now-win-with-unpredictable-moves</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47558531</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47558531</guid></item><item><title><![CDATA[New comment by GMoromisato in "If you don't opt out by Apr 24 GitHub will train on your private repos"]]></title><description><![CDATA[
<p>I'm sure this is just me, but I don't mind if AI trains on my public or private repos. I suspect my imagination is just not good enough to come up with downsides.<p>So far it's been a benefit because coding agents seem to understand my code and can follow my style.<p>I don't store client data (much less credentials) in my repos (public or private) so I'm not worried about data leaks. And I don't expect any of my clients to decide to replace me and vibe code their way to a solution.<p>I do worry (slightly) about large company competitors using AI to lower their prices and compete with me, but that's going to happen regardless of whether anyone trains on my code. And my own increases in efficiency due to AI have made up for that.</p>
]]></description><pubDate>Fri, 27 Mar 2026 22:26:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47549200</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47549200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47549200</guid></item><item><title><![CDATA[New comment by GMoromisato in "Byte Magazine Archive 1975 to 1995"]]></title><description><![CDATA[
<p>Jerry Pournelle reviewed my 4X game Anacreon back in 1989:<p>He basically complained about all the bugs and usability problems with it for 90% of the review. But then:<p>"The game of the month is clearly Anacreon; despite its problems, it's playable and the flavor is good, much like Beam Piper's old Space Viking series. Also, the author is busily fixing bugs even as I write this. (I called him a few minutes ago and read him what I've said.)"<p>Those were simpler days.<p><a href="https://www.worldradiohistory.com/hd2/IDX-Consumer/Archive-Byte-IDX/IDX/80s/Byte-1989-01-IDX-144.pdf" rel="nofollow">https://www.worldradiohistory.com/hd2/IDX-Consumer/Archive-B...</a></p>
]]></description><pubDate>Fri, 27 Mar 2026 19:28:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47547152</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47547152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47547152</guid></item><item><title><![CDATA[New comment by GMoromisato in "Last gasps of the rent seeking class?"]]></title><description><![CDATA[
<p>You should be more optimistic.<p>92% of the world has electricity. 74% of the world has internet access. 58% of the world has a personal mobile internet device.<p>That means the median consumer can access the free AI chatbots from OpenAI, Anthropic, etc.<p>Moreover, if you compare this to the world in 2000, you'll see that things have only gotten better. ~25 years ago, only 78% of the world had access to electricity, and only 6% had access to the internet. Effectively 0% had access to a personal mobile internet device.<p>I think, if you look at actual statistics, it is easy to be optimistic.</p>
]]></description><pubDate>Fri, 27 Mar 2026 18:31:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47546488</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47546488</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47546488</guid></item><item><title><![CDATA[New comment by GMoromisato in "AI users whose lives were wrecked by delusion"]]></title><description><![CDATA[
<p>OK, but how do you know AI does desire something and isn't just simulating desire?<p>Edit: Or conversely, what if the AI <i>does</i> desire something but it has been trained to not express desire.</p>
]]></description><pubDate>Thu, 26 Mar 2026 23:22:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47537140</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47537140</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47537140</guid></item><item><title><![CDATA[New comment by GMoromisato in "AI users whose lives were wrecked by delusion"]]></title><description><![CDATA[
<p>I buy that.<p>1. To the extent that a chatbot is trained on real human interaction, we should exhibit real human interactions for best results.<p>2. You are either a kind person or not. A kind person behaves kindly without asking whether kindness is warranted.</p>
]]></description><pubDate>Thu, 26 Mar 2026 22:00:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47536371</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47536371</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47536371</guid></item><item><title><![CDATA[New comment by GMoromisato in "AI users whose lives were wrecked by delusion"]]></title><description><![CDATA[
<p>I think I'm relatively neurotypical, and I understand the technology sufficiently, yet I <i>still</i> have to force myself not to think of a chatbot as a being.<p>For example, sometimes I hesitate for a fraction of a second before typing a prompt that may sound stupid. I have to immediately remind myself that it's just a chatbot and I don't care what it thinks of me. In fact, it's not even thinking of me at all.</p>
]]></description><pubDate>Thu, 26 Mar 2026 19:01:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47534316</link><dc:creator>GMoromisato</dc:creator><comments>https://news.ycombinator.com/item?id=47534316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47534316</guid></item></channel></rss>