<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: threethirtytwo</title><link>https://news.ycombinator.com/user?id=threethirtytwo</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 16:52:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=threethirtytwo" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by threethirtytwo in "Sam Altman's Business Dealings Under GOP Scrutiny Ahead of OpenAI's IPO"]]></title><description><![CDATA[
<p>Psychopaths wear a mask of sanity, but underneath they have no moral framework. Some people hate black people or Muslims or the like, but those qualities are very human; racism has existed across cultures for several millennia. A psychopath is a different ballgame.<p>A psychopath can plunge a knife into a baby’s face and feel no emotion. The only reason he doesn’t is that he has nothing to gain from it. The psychopath appears saner than a racist because he is usually better at pretending to be sane, simply because he is unable to comprehend the passionate racial hatred that the racist feels.<p>What I am saying is that Musk does not fit that archetype as much. Altman fits it more. Neither actually crosses the threshold to be called a “psychopath,” but if psychopathy were a gradient, Altman would be further down the line than Musk. Much further.<p>That is the literal definition of what a psychopath is. You’re operating from a lack of understanding of the definition.</p>
]]></description><pubDate>Fri, 15 May 2026 02:19:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=48143777</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48143777</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48143777</guid></item><item><title><![CDATA[New comment by threethirtytwo in "Sam Altman's Business Dealings Under GOP Scrutiny Ahead of OpenAI's IPO"]]></title><description><![CDATA[
<p>You’re stupid. They can be evil because that fits the definition. They can’t be psychopaths because that doesn’t fit the definition. Understand?</p>
]]></description><pubDate>Thu, 14 May 2026 22:27:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=48142090</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48142090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48142090</guid></item><item><title><![CDATA[New comment by threethirtytwo in "Sam Altman's Business Dealings Under GOP Scrutiny Ahead of OpenAI's IPO"]]></title><description><![CDATA[
<p>I have asked several.</p>
]]></description><pubDate>Thu, 14 May 2026 22:25:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=48142076</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48142076</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48142076</guid></item><item><title><![CDATA[New comment by threethirtytwo in "Sam Altman's Business Dealings Under GOP Scrutiny Ahead of OpenAI's IPO"]]></title><description><![CDATA[
<p>God, why do people frame things in such extremes? Neither person is a psychopath. If anyone is closer to being one it’s Altman, but he doesn’t completely fit the moniker.</p>
]]></description><pubDate>Thu, 14 May 2026 14:36:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=48136060</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48136060</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48136060</guid></item><item><title><![CDATA[New comment by threethirtytwo in "Singapore introduces caning for boys who bully others at school"]]></title><description><![CDATA[
<p>Right, so without examples your evidence actually lends credence to my point. 3/3 is a 100 percent rate of the bully backing down. That is the logical and rational analysis.<p>Obviously your personal conclusion is different, but I would say it’s not an empirical or strictly logical one.<p>You will also note that I was able to guess and predict where you’re coming from. I asked for the full metric partly because I could predict it would be 3 out of 3. My response to the other person’s reply predicted what was going on in your head quite accurately. Because of this I would say I have knowledge about this context that you don’t, and that you can learn from. It’s natural for you to fight for your point, but that would be a form of bias.<p>So here’s something you can agree with. If the victim wanted to, he could go to the kitchen, grab a knife, and bring it to school to try to kill that bully. Another option is to follow the bully home and attempt to slaughter his family.<p>No matter how small a victim is, any bully will back down if he knows the victim is willing to raise the violence to these insane extremes.<p>The point of that example is to show you a level of resistance that will cause nearly 100 percent of bullies to back down, and to show how possible it is for anyone to do. Anyone can go to the kitchen, grab a knife, and raise the stakes. The point is that most victims just don’t have the balls to do it.<p>From a practical standpoint you only need to go a fraction of that distance to cause most bullies to back down. Are you willing to get violent with a bully? Maybe don’t get a knife, but be willing to smash his face in with your fists, or bring a bat to school and smash down his legs to get him to kneel if your fists are truly too weak. If you don’t do shit, the bully won’t back down, and that is primarily what the bully takes advantage of, as shown by the 3/3 statistic you presented me with.<p>The victim definitely has choices. But he’s too scared to take charge or even follow through with the right choice. Weakness is not a realistic excuse because the victim has choices in weaponry (rocks, bat) that can even the odds or even bend them to extreme levels (knife).</p>
]]></description><pubDate>Thu, 14 May 2026 14:25:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=48135888</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48135888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48135888</guid></item><item><title><![CDATA[New comment by threethirtytwo in "Bun's experimental Rust rewrite hits 99.8% test compatibility on Linux x64 glibc"]]></title><description><![CDATA[
<p>Human readability and maintainability are not the future.</p>
]]></description><pubDate>Mon, 11 May 2026 00:05:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=48089482</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48089482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48089482</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>> First off, It’s good to study all kinds of things isn’t it? Even if it’s not strictly practical.<p>Of course it is. But the conclusion everyone is coming to is that LLMs are garbage and can’t be used because of 25 percent degradation, which is not in line with reality.<p>> Second, and more importantly these AI tools are EVERYWHERE right now. The effects of people using them for work can be seen throughout many industries and workplaces.<p>At 25 percent degradation these tools would not be everywhere. They are everywhere because they aren’t actually used that way.<p>> So I think studying how these models perform in the vast majority of use cases is not only a good idea, but it’s actually really important.<p>I have less of a problem with this study and more with the interpretation of it.<p>> Even if you’re strictly pro-AI and believe it is the future, a study like this can help you explain to laymen why they need the harnesses you’re so in support of.<p>I’m not pro-AI. I’m anti-AI. I fucking hate AI.<p>What I’m angry at is this delusional denial of reality. This experiment is very obviously not accurate, yet people are using this study as a headliner to promote an anti-AI agenda.<p>I don’t like AI, but that’s different from lying to myself or saying AI sucks at something when it is in fact superior to us in this respect.</p>
]]></description><pubDate>Sun, 10 May 2026 21:40:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=48088373</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48088373</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48088373</guid></item><item><title><![CDATA[New comment by threethirtytwo in "Singapore introduces caning for boys who bully others at school"]]></title><description><![CDATA[
<p>Yes, I think the implication didn’t fully materialize in his head. If he stopped to think about it, he doesn’t actually have any other cases.<p>Which sort of explains why there’s no subsequent response.</p>
]]></description><pubDate>Sun, 10 May 2026 18:36:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=48086540</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48086540</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48086540</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>Stick with the argument.<p>When I said the experiment doesn’t reflect the current abilities of AI, I was fucking right. Admit it and stop going off on tangents.<p>There’s no argument against this. You’re dodging and weaving, trying to escape reality. I don’t know who Roko is and I don’t give a shit.</p>
]]></description><pubDate>Sun, 10 May 2026 18:34:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=48086530</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48086530</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48086530</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>You should leave this site. Comments like this are not good for this site. You should go somewhere else.</p>
]]></description><pubDate>Sun, 10 May 2026 08:37:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=48082101</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48082101</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48082101</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>I don’t understand you. We have an AI model. The AI model is obviously capable.<p>But you want to pretend that it’s not useful because non-technical people haven’t figured out how to properly use it yet?<p>Do you think that’s a valid argument? This article is making a claim of 25 percent degradation. Do you think that claim is true because a lot of people don’t use it right?<p>A human would have 99 percent degradation if he had to regurgitate an entire book from memory just to change one punctuation mark. Does that statement sound reasonable to you? Because that is the statement you and your genius interloper into this thread are standing behind. Just replace human with LLM and it’s the same kind of genius logic.</p>
]]></description><pubDate>Sun, 10 May 2026 08:33:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=48082079</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48082079</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48082079</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>> You can’t get mad at an experiment for not happening in the future.<p>I’m more mad at this sentence not making any sense. I’m disappointed in this experiment for not testing the actual capabilities of an LLM. Comprende?<p>> They simulated common end user behavior<p>Not the way you use it. And not the way it will be used.<p>You love it because you want it to stay this way so you can forever believe AI will never be better than you.<p>Bro, the reality is unfolding as we speak. It’s like humanity just discovered guns but hasn’t discovered bullets, and you’re saying guns are useless because most of humanity hasn’t figured out bullets yet.<p>> We’ve gone from “this study is flawed because language models don’t do that” to “this study is flawed because while language models do do that, I don’t think that they will in the future” to “data that could support a bias other than my own is bad”<p>This is a flat-out lie. Models DO do that. The only fucking argument you have is that non-technical laymen edit documents the wrong way while all the adept users of agentic AI use it the correct way. Are you fucking kidding me?<p>The only change I acknowledge is that your grandma copies and pastes essays into ChatGPT while YOU don’t. Go pretend you live in that reality where the bullets will never appear.</p>
]]></description><pubDate>Sun, 10 May 2026 08:29:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=48082066</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48082066</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48082066</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>This will change too, man. Maybe I am in a bubble, but with how fast things are changing, it won’t be long before the bubble becomes reality.<p>Either way, we should be doing experiments on the actual capabilities of AI, not on the stupidest possible way to use AI because it validates your own negative bias against it.<p>Additionally, as software engineers using agentic AI… which HN basically is… this experiment is not at all relevant in the context of where it is posted. We ALL use agentic AI and we all have the agent use surgical tools for editing. Don’t you find it strange that despite this, HN is full of rabid engineers gobbling this paper up as validation despite its complete lack of relevance?</p>
]]></description><pubDate>Sun, 10 May 2026 03:09:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=48080632</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48080632</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48080632</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>See, that’s an example of degradation by a human. Not even an LLM will make that kind of mistake.</p>
]]></description><pubDate>Sun, 10 May 2026 00:49:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=48079876</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48079876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48079876</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>That was true maybe 7 months ago. This is no longer the case. Harnesses use all kinds of tooling to edit things now.</p>
]]></description><pubDate>Sun, 10 May 2026 00:48:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=48079870</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48079870</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48079870</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>>Except that isn't how humans edit documents,<p>Bro. That's my point.<p>>and it isn't how LLMs work either.<p>This is also my point. To be more technical about it, the harness around the LLM pushes it to do surgical edits rather than regurgitation, so my point is that this experiment is garbage, testing an impractical and rarely used use case.<p>>When a human edits a document, they don't typically "reproduce said document with edits", which I assume you mean read the document and reproduce it from memory.<p>No shit, Sherlock. The point of that sentence was to illustrate the absurdity of doing that, which in turn illustrates the absurdity of this scientific paper. You're kind of lost.</p>
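<p>To make the surgical-edit point concrete, here is a minimal sketch of the kind of edit tool a harness typically exposes instead of having the model retype the file. The function name and interface are my own illustration, not any specific agent's API:

```python
def surgical_edit(document: str, old: str, new: str) -> str:
    """Replace exactly one occurrence of `old` with `new`.

    Refuses missing or ambiguous targets, so every byte outside
    the targeted span passes through untouched -- no chance of the
    model "regurgitating" (and corrupting) the rest of the text.
    """
    count = document.count(old)
    if count != 1:
        raise ValueError(f"expected exactly 1 match, found {count}")
    return document.replace(old, new, 1)

doc = "The quick brown fox. The lazy dog."
edited = surgical_edit(doc, "lazy dog", "sleepy dog")
assert edited == "The quick brown fox. The sleepy dog."
```

The LLM only emits the short `old`/`new` strings; the deterministic tool does the splice.</p>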
]]></description><pubDate>Sat, 09 May 2026 21:00:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=48078216</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48078216</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48078216</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>That's the point, bro. I am saying this experiment makes no sense.<p>Humans don't do that, and Claude doesn't edit documents like that, because it makes no sense. The point is that the experiment itself is not helpful here.</p>
]]></description><pubDate>Sat, 09 May 2026 20:56:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=48078183</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48078183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48078183</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>You can do a baseline study right now. Read this entire thread and make an edit changing every E to an I.<p>Show your edit by regurgitating this entire thread by hand on paper. Don't use any additional tools like find-and-replace.<p>Boom, there's your baseline. I can simulate the result in my head.<p>Guys, I'm basically saying the experiment is inaccurate to the practical reality of how LLMs are actually used.</p>
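<p>For contrast, the tool-based version of that baseline is trivial and lossless. A sketch (the sample string is mine, just for illustration):

```python
# The E-to-I edit done with a tool (str.replace) instead of
# retyping the whole text from memory: deterministic, and every
# non-E character survives byte-for-byte.
text = "EVERY LETTER E BECOMES I"
edited = text.replace("E", "I")

assert edited == "IVIRY LITTIR I BICOMIS I"
assert len(edited) == len(text)  # no collateral damage
assert all(a == b for a, b in zip(text, edited) if a != "E")
```

That zero-degradation result is the point of comparison against doing the same edit by regurgitation.</p>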
]]></description><pubDate>Sat, 09 May 2026 20:54:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=48078159</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48078159</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48078159</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>But no one in this thread addressed the inaccuracy of the experiment. The experiment did not test HOW LLMs are actually used in reality.<p>So that is definitively a biased interpretation. This is independent of how accurate my POV or your POV is on whether LLMs degrade documents. I am simply saying the experiment conducted is COMPLETELY DIFFERENT from how LLMs AND humans edit papers.</p>
]]></description><pubDate>Sat, 09 May 2026 20:52:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=48078142</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48078142</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48078142</guid></item><item><title><![CDATA[New comment by threethirtytwo in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>A human doing the same task as the LLM did in the paper would degrade the document further than the LLM. If the LLM is at 25%, a human using the same technique would probably degrade it 80%. I'm talking about a single pass.<p>The fact of the matter is, humans don't edit things the way it was done in the paper, and neither do coding agents like Claude. Think about it: you do not ingest an entire paper and then regurgitate it with a single targeted edit... and neither do coding agents.<p>Also think carefully. A 25% degradation rate is unacceptable in the industry. The AI wave that's taking over SWE development would not exist if there were 25% degradation... that's way too much.</p>
]]></description><pubDate>Sat, 09 May 2026 14:51:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=48075443</link><dc:creator>threethirtytwo</dc:creator><comments>https://news.ycombinator.com/item?id=48075443</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48075443</guid></item></channel></rss>