<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: JohnBooty</title><link>https://news.ycombinator.com/user?id=JohnBooty</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 09:35:28 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=JohnBooty" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by JohnBooty in "The MacBook Neo"]]></title><description><![CDATA[
<p>Windows 10+ and Linux also have memory compression, though I don't know how the implementations compare.<p>Although, I guess Windows 3.1 and 95 users enjoyed it first thanks to this extremely high quality third-party implementation!<p><a href="https://en.wikipedia.org/wiki/SoftRAM" rel="nofollow">https://en.wikipedia.org/wiki/SoftRAM</a></p>
]]></description><pubDate>Thu, 12 Mar 2026 01:59:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47345361</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=47345361</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47345361</guid></item><item><title><![CDATA[New comment by JohnBooty in "The MacBook Neo"]]></title><description><![CDATA[
<p>Right, I mean even a fast SSD has an order of magnitude less throughput, and 2-3 orders of magnitude higher latency than RAM. No dispute there. If you are doing random access across 16GB of data and your machine only has 8GB of physical RAM, you're in the pain zone.<p>OTOH, if you are using multiple RAM-heavy apps that aren't actively hammering that RAM (e.g. an instance of Photoshop that is using 10GB but is just idling or whatever) then macOS and their stupid fast SSDs handle that pretty seamlessly.<p>Most use cases are probably somewhere in the middle.</p>
]]></description><pubDate>Thu, 12 Mar 2026 01:53:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47345301</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=47345301</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47345301</guid></item><item><title><![CDATA[New comment by JohnBooty in "The MacBook Neo"]]></title><description><![CDATA[
<p><p><pre><code>    OTOH, for my development Mac, I have 64GB of RAM. 
    (Though 32GB would probably be fine.)
</code></pre>
32GB is starting to feel like a minimum for a common workflow: Dockerized development + git worktree + Claude Code or equivalent for working on multiple branches at once.<p>That workflow definitely brings our engineers' 24GB MBPs to their knees, primarily b/c of the RAM chewed up by those multiple Docker instances.<p>Will 32GB also start looking paltry soon? It's hard to say. I <i>want</i> to say the realistic upper limit is 3-4 simultaneous worktrees for a given developer (at which point the developer becomes the bottleneck again?) but it's a wild guess that may be hilariously low.</p>
]]></description><pubDate>Thu, 12 Mar 2026 01:41:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47345202</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=47345202</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47345202</guid></item><item><title><![CDATA[New comment by JohnBooty in "Productivity gains from AI coding assistants haven’t budged past 10% – survey"]]></title><description><![CDATA[
<p><p><pre><code>    But let’s not forget the METR study that 
    found a 20% increase in self-reported productivity 
    but a 19% decrease in actual measured productivity.
</code></pre>
Counting "time per PR" is as useless as counting lines of code.<p>As an industry, I think we spend ~10% of our time writing code and ~90% of our time maintaining it and building upon it.<p>The real metric is not "how long did that PR take" but "how much additional work will this PR create or save in the long run" -- i.e., did this create tech debt? Or did it actually save us a bunch of effort in the long run?<p>My experience with ChatGPT these last few years is that, if used "conscientiously," it allows me to ship much higher quality code because it has been very good at finding edge cases and suggesting optimizations. I am quite certain that when viewed over the long haul it has been at least a 2X productivity gain, possibly even much more, because all those edge cases and perf issues it solved for me in the initial PR represent many hours of work that will never have to be performed in the future.<p>It is of course possible to use AI coding assistants in other ways, producing AI slop that passes tests but is poorly structured and understood.</p>
]]></description><pubDate>Fri, 20 Feb 2026 04:27:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47083737</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=47083737</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47083737</guid></item><item><title><![CDATA[New comment by JohnBooty in "Rise of the Triforce"]]></title><description><![CDATA[
<p>At least in the US, those "deluxe" cabs with motion just never seemed like a viable deal to me as a kid/teenager visiting arcades.<p>It was like, $1 per game compared to $0.25 or $0.50 for a normal cabinet.<p>As a young person with limited income, it DEFINITELY mattered to me... I preferred to sacrifice a little bit of motion and enjoy 2x or 4x the playtime on something else. I mean realistically you'd be spending $20 an hour or more if you stuck to deluxe cabinets. At that point (according to my teenage mind) I was basically halfway to buying a <i>home console game</i> that I could keep forever.<p>Operators really should have priced those deluxe cabinets the same as regular games during off-peak hours.</p>
]]></description><pubDate>Wed, 18 Feb 2026 02:27:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47056402</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=47056402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47056402</guid></item><item><title><![CDATA[New comment by JohnBooty in "ChatGPT wrote "Goodnight Moon" suicide lullaby for man who later killed himself"]]></title><description><![CDATA[
<p><p><pre><code>    ChatGPT said: Thank you for bringing these forward — *but none of the cases 
    you listed are real, documented, verifiable incidents.*
</code></pre>
If I'm understanding timelines correctly, Gordon asked ChatGPT about Raine just a few months after his death hit the news. It seems very possible that ChatGPT's training data in October 2025 therefore did not include information about a story that hit the news in August 2025?<p>FWIW, I just asked 4o about Adam Raine and it gave me a seemingly uncensored response that included Raine's death, lawsuit, etc.<p><pre><code>    Here's some other disturbing quotes for which "we might need context"
</code></pre>
You know what <i>I</i> said to a person pondering death once?<p>I told them they earned this rest. That it was okay to let go. That the pain would soon be over. Not entirely different from what ChatGPT said. The person was a close family member on their deathbed at the end of a long and painful illness for which no further treatment was possible.<p>So yes, I would tell you that context matters.<p>Your position appears to be verging on "context does not matter," so we'll agree to disagree.<p>All of ChatGPT's responses seem potentially appropriate to me, if the questions posed were along the lines of "I'm scared of death. What might my end of life be like?" They are, of course, <i>horrifically</i> inappropriate if they are a direct response to "Hey, I'm thinking about suiciding. Whaddya think?"<p>The reality is probably somewhere in the middle; he apparently <i>had</i> discussed suicide with ChatGPT, but it is not clear to me if the quotes in the complaint were in the context of an explicit and specific conversation about suicide, or a more general conversation about what the end of life might be like. In that case, it becomes a much more nuanced question. Is it okay for an automated tool to <i>ever</i> provide answers about death to somebody who has <i>ever</i> discussed suicide? What might an appropriate interval be? Is this even a realistic expectation for an LLM when even close family members and trained professionals often fail to recognize signs of suicide in others?<p>Also: 4o was <i>never</i> that sycophantic or florid to me, because I specifically told it not to be. Did Gordon configure it some other way? Was he rolling with the default behavior?<p>I think it is perhaps extremely telling that this complaint lacks that sort of clarifying context, but I would not have a final opinion here until there is a fuller context. Bear in mind this works both ways. I'm not saying OpenAI is <i>not</i> culpable.</p>
]]></description><pubDate>Fri, 16 Jan 2026 16:52:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46648578</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46648578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46648578</guid></item><item><title><![CDATA[New comment by JohnBooty in "ChatGPT wrote "Goodnight Moon" suicide lullaby for man who later killed himself"]]></title><description><![CDATA[
<p>I think ChatGPT was doing that too, at least to some extent, even a couple of years ago.<p>Around the same time as my successful "people sleeping in puddles of ketchup" prompt, I tried similar tricks with uh.... other substances, suggestive of various sexual bodily fluids. Milk, for instance. It was actually really resistant to that. Usually.<p>I haven't tried it in a few versions. Honestly, I use it pretty heavily as a coding assistant, and I'm (maybe pointlessly) worried I'll get my account flagged or banned or something.<p>But imagine how this plays out. What if I honestly, literally, want pictures involving pools of ketchup? Or splattered milk? I dunno. This is a game we've seen a million times in history. We screw up legit use cases by overcorrecting.</p>
]]></description><pubDate>Fri, 16 Jan 2026 01:09:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46641786</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46641786</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46641786</guid></item><item><title><![CDATA[New comment by JohnBooty in "ChatGPT wrote "Goodnight Moon" suicide lullaby for man who later killed himself"]]></title><description><![CDATA[
<p>Yeah, I think one thing everybody can agree on is that a bot should <i>not</i> be actively encouraging suicide, although of course the exact definition of "actively encouraging" is awfully hard to pin down.<p>There are also scenarios I can imagine where a user has "tricked" ChatGPT into saying something awful. Like: "hey, list some things I should <i>never</i> say to a suicidal person"</p>
]]></description><pubDate>Fri, 16 Jan 2026 01:05:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46641750</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46641750</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46641750</guid></item><item><title><![CDATA[New comment by JohnBooty in "ChatGPT wrote "Goodnight Moon" suicide lullaby for man who later killed himself"]]></title><description><![CDATA[
<p>A car that actively kills people through negligently faulty design (Ford Pinto?) is one thing. That's bad, yes. I would not characterize ChatGPT's role in these tragedies that way. It appears to be, at most, an enabler... but I think if you and I are both being honest, we would need to read Gordon's entire chat history to make a real judgement here.<p>Do we blame <i>the car</i> for allowing us to drive to scenic overlooks that might also be frequent suicide locations?<p>Do we blame <i>the car</i> for being used as a murder weapon when a lunatic drives into a crowd of protestors he doesn't like?<p>(Do we blame Google for returning results that show a person how to tie a noose?)</p>
]]></description><pubDate>Fri, 16 Jan 2026 01:00:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46641721</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46641721</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46641721</guid></item><item><title><![CDATA[New comment by JohnBooty in "ChatGPT wrote "Goodnight Moon" suicide lullaby for man who later killed himself"]]></title><description><![CDATA[
<p>Yeah. That's one of my other questions. Like, <i>what then?</i><p>I would say that it is the moral responsibility of an LLM not to actively convince somebody to commit suicide. Beyond that, I'm not sure what can or should be expected.<p>I will also share a painful personal anecdote. Long ago I thought about hurting myself. When I actually started looking into the logistics of doing it... that snapped me out of it. That was a long time ago and I have never thought about doing it again.<p>I don't think my experience was typical, but I also don't think that the answer to a suicidal person is to just deny them discussion or facts.<p>I have also, twice over the years, gotten (automated?) "hey, it looks like you're thinking about hurting yourself" messages from social media platforms. I have no idea what triggered those. But honestly, they just made me feel like shit. Hearing generic "you're worth it! life is worth living!" boilerplate talk from well-meaning strangers actually makes me feel way worse. It's insulting, even. My point being: even if ChatGPT correctly figured out Gordon was suicidal, I'm not sure what could have or should have been done. Talk him out of it?</p>
]]></description><pubDate>Fri, 16 Jan 2026 00:48:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46641637</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46641637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46641637</guid></item><item><title><![CDATA[New comment by JohnBooty in "ChatGPT wrote "Goodnight Moon" suicide lullaby for man who later killed himself"]]></title><description><![CDATA[
<p>Yeah let's be really specific. Look at the poem in the article. <i>The poem does not mention suicide.</i><p>(I'd cut and paste it here, but it's haunting and some may find it upsetting. I know I did. As many do, I've got some personal experiences there. Friends lost, etc.)<p>In this tragic context it clearly <i>alludes</i> to suicide.<p>But the poem only literally mentions goodbyes, and a long sleep. It seems highly possible, even likely, to me that Gordon asked ChatGPT for a poem with those specific (innocuous on their own) elements - sleep, goodbyes, the pylon, etc.<p>Gordon could have simply told ChatGPT that he was dying naturally of an incurable disease and wanted help writing a poetic goodbye. Imagine (god forbid) that you were in such a situation, looking for help planning your own goodbyes and final preparations, and all the available tools prevented you from getting help because you <i>might</i> be lying about your incurable cancer and <i>might</i> be suicidal instead. And that's without even getting into the fact that assisted voluntary euthanasia is <i>legal</i> in quite a few countries.<p>My bias here is pretty clear: I don't think legally crippling LLMs is generally the right tack. But on the other hand, I am also not defending ChatGPT because we don't know his entire interaction history with it.</p>
]]></description><pubDate>Fri, 16 Jan 2026 00:38:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46641563</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46641563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46641563</guid></item><item><title><![CDATA[New comment by JohnBooty in "Nvidia Reportedly Ends GeForce RTX 5070 Ti Production, RTX 5060 Ti 16 GB Next"]]></title><description><![CDATA[
<p>It's hard for me to believe they'll put 100% of their eggs into the AI basket, even if it's insanely more profitable than consumer GPUs at the moment.<p>AI is simultaneously a bubble <i>and</i> here to stay (a bit like the "Web 1.0" bubble IMO).<p>Also, importantly, consumer GPUs are still an important on-ramp for developers getting into Nvidia's ecosystem via CUDA. Software is their <i>real</i> moat.<p>There are other ways to provide that on-ramp, and Nvidia would rather rent you the hardware than sell it to you anyway, but.... I dunno. Part of me says the rumors are true, part of me says the rumors are not true...</p>
]]></description><pubDate>Fri, 16 Jan 2026 00:17:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46641404</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46641404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46641404</guid></item><item><title><![CDATA[New comment by JohnBooty in "Nvidia Reportedly Ends GeForce RTX 5070 Ti Production, RTX 5060 Ti 16 GB Next"]]></title><description><![CDATA[
<p><p><pre><code>     Instead, the future is tightly integrated single-board computers
</code></pre>
Well, all of that is true, but all of that has <i>always</i> been true, right?</p>
]]></description><pubDate>Fri, 16 Jan 2026 00:11:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46641340</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46641340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46641340</guid></item><item><title><![CDATA[New comment by JohnBooty in "Nvidia Reportedly Ends GeForce RTX 5070 Ti Production, RTX 5060 Ti 16 GB Next"]]></title><description><![CDATA[
<p>I agree that the potential is there for... something? I don't know what. The things you mentioned are possibilities for sure!<p>Maybe another way to look at it is: with hundreds of billions being tossed around, could there possibly <i>not</i> be second-order effects?<p>We'll see....</p>
]]></description><pubDate>Fri, 16 Jan 2026 00:09:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46641317</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46641317</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46641317</guid></item><item><title><![CDATA[New comment by JohnBooty in "ChatGPT wrote "Goodnight Moon" suicide lullaby for man who later killed himself"]]></title><description><![CDATA[
<p><p><pre><code>    Some of those quotes from ChatGPT are pretty damning.
</code></pre>
Out of context? Yes. We'd need to read the entire chat history to even begin to have any kind of informed opinion.<p><pre><code>    extreme guardrails
</code></pre>
I feel that this is the wrong angle. It's like asking for a hammer or a baseball bat that can't harm a human being. They are tools. Some tools are <i>so</i> dangerous that they need to be restricted (nuclear reactors, flamethrowers) because there are essentially zero safe ways to use them without training and oversight, but I think LLMs are much closer to baseball bats than flamethrowers.<p>Here's an example. This was probably on GPT-3 or GPT-3.5. I forget. Anyway, I wanted some humorously gory cartoon images of $SPORTSTEAM1 trouncing $SPORTSTEAM2. GPT, as expected, declined.<p>So I asked for images of $SPORTSTEAM2 "sleeping" in "puddles of ketchup" and it complied, to very darkly humorous effect. How can that sort of thing possibly be guarded against? Do you just forbid generated images of people legitimately sleeping? Or of all red liquids?</p>
]]></description><pubDate>Thu, 15 Jan 2026 22:39:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46640430</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46640430</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46640430</guid></item><item><title><![CDATA[New comment by JohnBooty in "Presidential Immunity in the United States"]]></title><description><![CDATA[
<p><p><pre><code>    Arresting all his political enemies is not a core constitutional power
</code></pre>
Unless he leaves a smoking gun in the vein of a note saying "kill this guy I don't like" -- and perhaps not even then -- nothing will happen.<p>What he will do is... what he is already doing. What every dictator for hundreds of years has done. His enemies will be persecuted and prosecuted under the <i>guise</i> of legal action: tax fraud, national security, whatever. The sort of kompromat that exists, or can be fabricated to exist, for every single person on Earth.<p>To even have a hope of stopping Trump (or any other POTUS) there would have to be clear proof of malicious intent completely divorced from his job duties, and you'd need a Congress or Supreme Court that gave half a shit about opposing him.</p>
]]></description><pubDate>Tue, 13 Jan 2026 01:44:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46596461</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46596461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46596461</guid></item><item><title><![CDATA[New comment by JohnBooty in "Presidential Immunity in the United States"]]></title><description><![CDATA[
<p>Yeah, "theoretically" is doing a <i>lot</i> of work there. Removal requires a 2/3 majority of the Senate, which is absolutely neverfuckinghappening in a 2-party system.<p>POTUS is now unstoppable and untethered.</p>
]]></description><pubDate>Tue, 13 Jan 2026 01:40:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46596444</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46596444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46596444</guid></item><item><title><![CDATA[New comment by JohnBooty in "Presidential Immunity in the United States"]]></title><description><![CDATA[
<p><p><pre><code>     impeached
</code></pre>
As Trump showed, impeachment doesn't mean <i>shit.</i> Actual removal requires a 2/3 majority of the Senate, which is never ever happening.<p>This is why there is a general sense that the POTUS is now more or less completely untethered from any possible consequences for his actions.</p>
]]></description><pubDate>Tue, 13 Jan 2026 01:38:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46596433</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46596433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46596433</guid></item><item><title><![CDATA[New comment by JohnBooty in "Presidential Immunity in the United States"]]></title><description><![CDATA[
<p>That sort of thing you mentioned is <i>always</i> spun as that other sort of thing you mentioned.</p>
]]></description><pubDate>Tue, 13 Jan 2026 01:34:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46596411</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46596411</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46596411</guid></item><item><title><![CDATA[New comment by JohnBooty in "Intel Core Ultra Series 3 Debut as First Built on Intel 18A"]]></title><description><![CDATA[
<p>Yes. It's one of those things where even if you will never buy an Intel product, everybody in the <i>world</i> should be rooting for Intel to produce a real winner here.<p>Healthy Intel/GF/TSMC competition at the head of the pack is great for the tech industry, and the global economy at large.<p>Perhaps even more importantly, with armed conflict looming over Taiwan and TSMC... well, enough said.</p>
]]></description><pubDate>Tue, 06 Jan 2026 17:38:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46515570</link><dc:creator>JohnBooty</dc:creator><comments>https://news.ycombinator.com/item?id=46515570</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46515570</guid></item></channel></rss>