<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gjm11</title><link>https://news.ycombinator.com/user?id=gjm11</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 16 May 2026 11:09:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gjm11" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gjm11 in "Waymo updates 3,800 robotaxis after they 'drive into standing water'"]]></title><description><![CDATA[
<p>This seems like an odd take. Don't existing self-driving cars already have rather a lot of world-model? It's not like they're just hooking the driving apparatus to the output of an LLM or something.<p>(Of course there is also scope for debate about how much world model today's LLMs have; it seems like it's more than none even though it has to be built out of token-shuffling parts. But that's not relevant here.)</p>
]]></description><pubDate>Fri, 15 May 2026 21:20:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=48154034</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=48154034</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48154034</guid></item><item><title><![CDATA[New comment by gjm11 in "Rumors of my death are slightly exaggerated"]]></title><description><![CDATA[
<p>In case anyone is taking this subthread too seriously: C.S.'s Wikipedia page does not in fact claim that he is dead, and its most recent update was in December 2025. Whatever rumours of his death may be circulating, they do not appear to have infected Wikipedia.</p>
]]></description><pubDate>Fri, 08 May 2026 21:22:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=48068936</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=48068936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48068936</guid></item><item><title><![CDATA[New comment by gjm11 in "The Frog for Whom the Bell Tolls"]]></title><description><![CDATA[
<p>Donne was a poet (a very good poet, at that) but this particular passage is from a bit of devotional prose, not a poem, and I think it's misleading to format it as if it were poetry. Especially as it's quite unlike the style of Donne's poetry.</p>
]]></description><pubDate>Tue, 05 May 2026 16:40:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=48024942</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=48024942</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48024942</guid></item><item><title><![CDATA[New comment by gjm11 in "Bitmap and tilemap generation from a single example"]]></title><description><![CDATA[
<p>The idea of a "vibe" was around <i>long</i> before the term "vibe coding". It's not that much more surprising to see "vibe" used before "vibe coding" than it would be to see "coding" used before "vibe coding".</p>
]]></description><pubDate>Sat, 02 May 2026 13:00:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47986043</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47986043</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47986043</guid></item><item><title><![CDATA[New comment by gjm11 in "The Science Behind Honey's Eternal Shelf Life (2013)"]]></title><description><![CDATA[
<p>It also, unsurprisingly, tells a slightly different and less startling story. It's not that glycerine crystallized in one lab and suddenly labs around the world had the same problem; it's that glycerine hadn't been crystallizing in one particular lab, but once that lab was sent a sample of crystallized glycerine the stuff always did crystallize there -- presumably (assuming the story's true) because of some sort of tiny particles (whether of glycerine or of something else) that float about in the air or adhere to glassware and encourage glycerine to crystallize.</p>
]]></description><pubDate>Thu, 30 Apr 2026 18:49:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47966670</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47966670</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47966670</guid></item><item><title><![CDATA[New comment by gjm11 in "If America's so rich, how'd it get so sad?"]]></title><description><![CDATA[
<p>What plot? All the plots in the article either (1) show the change for the worse happening in 2020 or later or (2) are explicitly comparing "before 2020" with "after 2020".<p>(I do agree that Mr Trump is a shockingly bad president in oh so many ways. But the malaise being described here doesn't seem to have started in 2016. Not every bad thing is his fault.)</p>
]]></description><pubDate>Thu, 23 Apr 2026 22:29:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47883019</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47883019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47883019</guid></item><item><title><![CDATA[New comment by gjm11 in "Help Keep Thunderbird Alive"]]></title><description><![CDATA[
<p>If Thunderbird <i>required</i> users to sign up for an annual subscription, then <i>that specific problem</i> -- not being able to tell what good one's payment would do -- would go away. There would be a very specific reason to pay the money.<p>(In practice, they presumably couldn't do that, at least not effectively, because the code is open source and someone else could fork it. But let's imagine that somehow they could require all Thunderbird users to pay them.)<p>That doesn't, of course, mean that it would be better overall. Thunderbird users would go from getting Thunderbird for free and maybe having reason to donate some money, to having to pay some money just to keep the ability to use Thunderbird: obviously worse for them. There'd probably be more money available for Thunderbird development, which would be good. The overall result might be either good or bad. But it would, indeed, no longer be unclear whether and why a Thunderbird user might choose to pay money to the Thunderbird project.</p>
]]></description><pubDate>Thu, 09 Apr 2026 14:39:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47704388</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47704388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47704388</guid></item><item><title><![CDATA[New comment by gjm11 in "Help Keep Thunderbird Alive"]]></title><description><![CDATA[
<p>The reason "nobody questions how corporations use their money" is that in 99.9% of cases when I pay a corporation money for a product, I'm doing it not for the sake of what they can do with the money, but because otherwise I don't get to use the product, at least not legally.<p>If instead I donate to an open-source project, I'm not doing it in order to get access to the product; I already have that. I'm doing it because I hope they will do something with the money that I value. (Possible examples: Developing new features I like. Rewarding people who already developed features I liked. Activism for causes I approve of. Continuing to provide something that benefits everyone and not just me.)<p>And so I care a lot what they're going to do with the money, in a way I don't if I (say) pay money to Microsoft in exchange for the right to use Microsoft Office. Because what they're going to do with the money <i>determines what point there is in my giving it</i>.<p>Sometimes, everything the project does is stuff I think is valuable (for me or for the world). In that case I don't need to ask exactly what they're doing. Sometimes, it's obvious that what happens to the money is that it goes into the developer's pockets and they get to do what they like with it. In that case, I'll donate if the point of my donation is to reward someone who is doing something I'm glad they're doing, and probably not otherwise.<p>In the case of Thunderbird, it's maybe not so obvious. Probably the money will go toward implementing Thunderbird features and bug fixes, but looking at the history of Firefox I might worry that that's going to mean "AI integrations that actual users mostly don't want" or "implementing advertising to help raise funds", and I might have a variety of attitudes to those things. Or it might go toward some sort of internet activism, and again I might have a variety of attitudes to that depending on exactly what they're agitating for. Or maybe I might worry that the money will mostly end up helping to pay the salary of the CEO of Mozilla. (I don't think that's actually possible, but I can imagine situations where Mozilla wants some things done, and if they can pay for them via donations rather than using the company's money they'll do so, so that the net effect of donating is simply to increase Mozilla's profits.)<p>And I don't think anyone's asking for anything very burdensome in the way of transparency. Just more than, well, <i>nothing at all</i> which is what we have at the moment. The text on the actual page says literally nothing beyond "help keep Thunderbird alive". The FAQ says "Thunderbird is the leading open source email and productivity app that is free for business and personal use. Your gift helps ensure it stays that way, and supports ongoing development." which tells us almost nothing. And "MZLA Technologies Corporation is a wholly owned for-profit subsidiary of the Mozilla Foundation and the home of Thunderbird." which tells us that donations go to a for-profit subsidiary of the Mozilla Foundation (which I believe is the same entity that owns the Mozilla Corporation, but like most people I am not an expert on this stuff and don't know what that means in practice about how the Mozilla Foundation, the Mozilla Corporation and MZLA Technologies Corporation actually work together).<p>Maybe donated money will lead to MZLA Technologies Corporation hiring more developers or paying existing developers more? Maybe it'll be used to buy equipment, or licences for patented stuff? 
Maybe it'll be used to advertise Thunderbird and get it more users? Maybe it'll be used to agitate for the use of open email standards or something like that? Maybe. Maybe some other thing entirely. There's no way to get any inkling.</p>
]]></description><pubDate>Thu, 09 Apr 2026 14:35:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47704336</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47704336</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47704336</guid></item><item><title><![CDATA[New comment by gjm11 in "AI overly affirms users asking for personal advice"]]></title><description><![CDATA[
<p>For the avoidance of doubt, I wasn't meaning to imply that you downvoted me. (Nor do I mind if you did.) I don't think it's true that people who downvote things are never able to have a constructive discussion, but there's probably some correlation there.<p>Anyway, thanks for giving some indication of what you didn't like.</p>
]]></description><pubDate>Tue, 31 Mar 2026 20:43:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47593238</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47593238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47593238</guid></item><item><title><![CDATA[New comment by gjm11 in "AI overly affirms users asking for personal advice"]]></title><description><![CDATA[
<p>Clearly a bunch of other people also disagree profoundly with everything I said, since my comment is currently sitting at 0 having at one point been higher.<p>I vigorously encourage anyone who thinks something I wrote is bad to downvote it as they see fit, but it would be nice if some of those people would tell me what about my comment they found so objectionable. (It all seems pretty well reasoned <i>to me</i> -- but it would, wouldn't it?)<p>[EDITED to fix an inconsequential typo]</p>
]]></description><pubDate>Mon, 30 Mar 2026 22:03:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47580273</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47580273</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47580273</guid></item><item><title><![CDATA[New comment by gjm11 in "I am definitely missing the pre-AI writing era"]]></title><description><![CDATA[
<p>Right. The LLMs' quirks aren't <i>bad in themselves</i>, they're bad <i>when they're in every damn paragraph</i>. They're mostly things that in moderation actually improve writing, and that if you see them once (without the knowledge that they're things LLMs do) would rightly tend to make you think better of the author. And so, of course, in RLHF training they get rewarded, and unfortunately it's not so easy for an LLM to learn "it's good to do this thing a bit <i>but not too much</i>".<p>The <i>structured</i> thing you mention is the one that bugs me most. I genuinely think that most human writing would be improved by having more of the "signposts" that LLMs overuse. Headings, context-setting sentences, bullet points where appropriate, etc. I was doing "list of bullet points with boldfaced intro for each one" before the LLMs were. But because the LLMs are saturating their writing with it, we'll all learn to take it as a sign of glib superficiality and inauthenticity, and typical good human writing will start avoiding everything of that kind, and therefore get that little bit harder to read. Alas.</p>
]]></description><pubDate>Mon, 30 Mar 2026 11:01:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47572744</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47572744</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47572744</guid></item><item><title><![CDATA[New comment by gjm11 in "AI overly affirms users asking for personal advice"]]></title><description><![CDATA[
<p>First off, "not adequately described as a mere token-predictor" and "not sentient" are entirely separate things.<p>I can't speak for anyone else, but what <i>I</i> feel when I read yet another glib "it's just a stochastic parrot, of course it isn't doing anything that deserves to be called reasoning" take is much more like <i>bored</i> than it is like <i>upset</i>.<p>Today's LLMs are in some sense "just predicting tokens" in some sense. Likewise, human brains are in some sense "just shuttling neurotransmitters and electrical impulses around" in some sense. Neither of those tells you what the thing can actually do. To figure that out, you have to <i>look at what it can do</i>.<p>Today's best LLMs can do about as well as the best humans on problems from the International Mathematical Olympiad and occasionally solve easyish actual mathematical research problems. They write code about as well as a junior software developer (better in some ways, worse in others) but much faster. They write prose about as well as an average educated person (but with some annoying quirks that are annoying mostly because they are the same quirks over and over again).<p>If it pleases you to call those things "thinking" then you can. If it pleases you to call them "stochastic parroting" then you can. They are the same things either way. They are not, on the face of it, very much like "just repeating things the machine has already seen", or at least not <i>more</i> like that than a lot of things intelligent human beings do that we don't usually describe that way.<p>If you want to know whether an LLM can do some particular thing -- do your job well enough for your boss to fire you, write advertising copy that will successfully sell products, exterminate the human race, whatever -- then it's not enough to say "it's just remixing what it's seen on the internet, therefore it can't do X" unless you also have <i>good reason to believe that that thing can't be done by just "remixing what's on the internet"</i> (in whatever sense of "remixing" the LLM is doing that). And it's turning out that lots of things can be done that way that you absolutely wouldn't have predicted five years ago could be done that way.<p>It seems to me that this should make us very cautious about saying "they can't do X because all they can do is regurgitate a combination of things they've seen in training".<p>(My own view, not that there's any reason why anyone should care what I-in-particular think, is a combination of "what they're doing is less parroting than you might have thought" and "you can do more by parroting than you might have thought".)<p>So, anyway, this particular instance of the stochastic-parrot argument started when someone said: of course the AIs are yes-men, because figuring out when to agree and when not to requires actual logic and thought and the LLMs don't have either of those things.<p>Is it really clear that deciding whether or not to agree when someone says "I think maybe I should break up with my girlfriend" or "I've got this amazing new theory of physics that the establishment is stupidly dismissing" requires <i>more logic and thought</i> than, say, gold-medal performance on IMO problems? It certainly isn't clear to me. Having done a couple of International Mathematical Olympiads myself in my tragically unmisspent youth, I can assure you that solving their problems requires quite a bit of logic and thought, at least for humans. 
It may well be <i>harder</i> to give a good answer to "should I leave my job?", but it's not exactly "logic and thought" that it needs more of.<p>Someone reported that Claude is much less yes-man-ish than Gemini and ChatGPT. I don't know whether that's true (though it wouldn't surprise me) but: suppose it is; do you want that to oblige you to say that yes, actually, Claude really thinks logically, unlike Gemini and ChatGPT? I don't think you do. And if not, you want to avoid saying "duh, of course, you can't avoid being a yes-man without actually thinking and reasoning, and we all know that LLMs can't do those things".</p>
]]></description><pubDate>Sat, 28 Mar 2026 16:50:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47556273</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47556273</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47556273</guid></item><item><title><![CDATA[New comment by gjm11 in "Apple discontinues the Mac Pro"]]></title><description><![CDATA[
<p>I think these calculations are a bit bogus.<p>If you spend $35k on a nice computer, and then earn $35k from doing some work using it, that doesn't mean that buying the computer has paid for itself unless the computer is <i>solely responsible</i> for that income. It probably isn't.<p>It's not necessarily even true that after doing that work it's "paid for", in the sense that getting the $35k income means that you were able to afford the $35k computer: that only follows if you didn't need any of that income for other luxuries, such as food and shelter.<p>If you're earning $50/hour, 40hr/week, then what you've done after 17.5 weeks is <i>earned enough to buy</i> that $35k computer. Assuming you don't need any of that money for anything else, like food and shelter.<p>If the fancy computer helps you get that income then of course it's perfectly legit to estimate how much difference it makes and decide it pays for itself, but it's not as simple as comparing the price of the computer with your total income.<p>Regardless of how much it contributes, if you have plenty of money then it's also perfectly legit to say "I can comfortably afford this and I want it so I'll buy it" but, again, it's not as simple as comparing the price of the computer with your total income.</p>
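<p>(A minimal sketch of the arithmetic above, using the comment's illustrative $50/hour, 40hr/week and $35k figures; the 10% "attributable fraction" below is a made-up parameter for illustration, not a claim about any real workflow:)</p>
<pre><code># Naive "payback" arithmetic: weeks needed just to *earn* the purchase price.
hourly_rate = 50.0        # $/hour (illustrative figure from the comment)
hours_per_week = 40.0
computer_cost = 35_000.0  # $

weeks_to_earn = computer_cost / (hourly_rate * hours_per_week)
print(weeks_to_earn)  # 17.5 -- earning the price, not the computer "paying for itself"

# Slightly less bogus: only the fraction of income actually attributable to
# the new machine counts toward paying it off. The 10% is purely illustrative.
attributable_fraction = 0.10
weeks_to_pay_off = computer_cost / (hourly_rate * hours_per_week * attributable_fraction)
print(weeks_to_pay_off)  # 175.0
</code></pre>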
]]></description><pubDate>Fri, 27 Mar 2026 11:52:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47541591</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47541591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47541591</guid></item><item><title><![CDATA[New comment by gjm11 in "Meta’s renewed commitment to jemalloc"]]></title><description><![CDATA[
<p>I'm pretty sure it means something like this: "Because jemalloc is used all over the place in our systems that run at tremendous scale, some hack that improves its performance a little bit while degrading the longer-term maintainability of the code can look very appealing -- look, doing this thing will save us $X,000,000 per year! -- and it takes discipline to avoid giving in to that temptation and to insist on doing things properly even if sometimes it means passing up a chance to make the code 0.1% faster and 10% messier."</p>
]]></description><pubDate>Tue, 17 Mar 2026 16:19:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47414779</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47414779</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47414779</guid></item><item><title><![CDATA[New comment by gjm11 in "Kagi Translate now supports LinkedIn Speak as an output language"]]></title><description><![CDATA[
<p>I like the fact that the experiments people have tried with this almost all seem to fall into two <i>quite different</i> categories. Category 1: "Fuck you", "I just took a shit", "ass ass ass ass ass ass ass". Category 2: The Gettysburg Address, the Lord's Prayer and the opening of the Gospel according to St John.<p>Which kinda makes sense: it feels natural to do this with (1) things that are trivial or offensive, so that the pretentious LinkedIn versions are as different as possible, or (2) things with some sort of aura of greatness about them, so that the bathos of the sterile LinkedInSpeak is as great a contrast as possible.<p>But for #2 it feels to me like <i>great writing</i> rather than <i>culturally important and revered writing</i> is more the point (though of course there's some tendency for those to go together). I tried a few famous bits of writing to see what happened. Here are some outputs; guessing what the inputs were is left as an exercise for the reader. In most cases this is just the first bit of the LinkedIn translation. I think some are much easier than others.<p>I recently connected with a global traveler from an emerging market who shared some powerful insights on legacy and leadership.<p>It’s a global industry standard: any high-net-worth individual with a solid portfolio is actively looking to scale their personal life and onboard a long-term partner.<p>April is officially the most challenging month for growth.<p>Reflecting on the current landscape: it’s a time of massive disruption and unparalleled opportunity. We’re seeing a true dichotomy—the age of wisdom vs. the age of foolishness.<p>Reflecting on my career journey and personal ROI. When I look at my bandwidth and how my energy is spent before reaching my mid-career milestones in this vast, competitive landscape, I realize that sitting on my unique value proposition is a major career risk.<p>I was out there solo-navigating my journey, scaling high-level peaks and valleys like a cloud, when I suddenly hit a major milestone: I encountered a massive network, a high-performing host of golden daffodils.<p>Big news from Xanadu! Kubla Khan just greenlit a massive, high-end pleasure-dome project.<p>Team, it’s time to lean in and go again! [rocket emoji] We either hit our KPIs or we leave it all on the field. [100% emoji] In a steady market, there’s nothing better than a growth mindset and staying humble. But when the industry gets disrupted, it’s time to activate beast mode. [lion emoji]</p>
]]></description><pubDate>Tue, 17 Mar 2026 15:51:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47414376</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47414376</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47414376</guid></item><item><title><![CDATA[New comment by gjm11 in "Kagi Translate now supports LinkedIn Speak as an output language"]]></title><description><![CDATA[
<p>Note that "police police police police" is a grammatically valid sentence, with multiple different parsings, one of which we could rephrase as "the people who keep a watchful eye on what the police are doing, keep a watchful eye on what the police are doing" -- that is, the police police are policing the police -- so it's even <i>true</i>.<p>(Cf. <a href="https://en.wikipedia.org/wiki/Buffalo_buffalo_Buffalo_buffalo_buffalo_buffalo_Buffalo_buffalo" rel="nofollow">https://en.wikipedia.org/wiki/Buffalo_buffalo_Buffalo_buffal...</a>.)</p>
]]></description><pubDate>Tue, 17 Mar 2026 15:23:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47413997</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47413997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47413997</guid></item><item><title><![CDATA[New comment by gjm11 in "Kagi Small Web"]]></title><description><![CDATA[
<p>I think the joke is that Microsoft <i>did</i> do something very like this -- they call it Windows Recall -- and it got a <i>lot</i> of angry pushback. (Partly, IIRC, because the specific way they did it initially was very bad in terms of security and privacy, but I think a lot of people quite understandably don't trust them to implement it (a) the way they claim they do or (b) competently, so even after they made a bunch of changes aimed at making it less scary it's still viewed with a lot of hostility.)</p>
]]></description><pubDate>Tue, 17 Mar 2026 15:13:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47413840</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47413840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47413840</guid></item><item><title><![CDATA[New comment by gjm11 in "MoD sources warn Palantir role at heart of government is threat to UK security"]]></title><description><![CDATA[
<p>I think there may be a bit less to that one than meets the eye. In Swiss law there's some kind of right-of-reply thing where if someone puts something about you in print and you think it's wrong you may be entitled to have some sort of response printed. And AIUI the way this works is that you go before a court and say "we want our response printed, please", and that's what Palantir's done in this case.<p>(Note 1: For all I know it may well be true that the reporting is 100% accurate and Palantir's claim to deserve a reply is 100% bullshit. I'm not saying they're <i>in the right</i> here! But I think the actual story is a bit less horrible than "Palantir is taking these guys to court because they didn't like their reporting" sounds without the relevant context. They're not, e.g., trying to get damages from the newspaper, or trying to get what they wrote retracted, or anything like that.)<p>(Note 2: I am not an expert on Swiss law or on this case, and I am accordingly not 100% confident of any of the above. In the unlikely event that whether I'm right about this <i>matters</i> to anyone reading, they should check it for themselves :-).)</p>
]]></description><pubDate>Mon, 16 Mar 2026 21:22:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47405082</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47405082</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47405082</guid></item><item><title><![CDATA[New comment by gjm11 in "In Praise of Stupid Questions"]]></title><description><![CDATA[
<p>One thing in the article that strikes me as very strange is this:<p>"<i>I suppose that, on a practical level, a take-home for the practicing mathematician is that if you use ChatGPT, don’t trust it to generate valid proofs, and even when it finds a valid proof, don’t be so sure it’s a good proof. And whatever you do, don’t have ChatGPT create a bibliography for you.</i>"<p>In isolation, that's all very good advice. But how is that a take-home from what goes before? You used ChatGPT, it <i>did</i> generate a valid proof, it was "a solid by-the-book argument that employed a method I’ve used myself" which may or may not imply "a <i>good</i> proof" but suggests it was at least OK, and nothing in the story you told involves ChatGPT generating bibliographies at all.</p>
]]></description><pubDate>Sat, 14 Mar 2026 14:38:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47377156</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47377156</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47377156</guid></item><item><title><![CDATA[New comment by gjm11 in "Qatar helium shutdown puts chip supply chain on a two-week clock"]]></title><description><![CDATA[
<p><a href="https://www.youtube.com/watch?v=kw8SSQHQitg" rel="nofollow">https://www.youtube.com/watch?v=kw8SSQHQitg</a> is Lindsay Graham in 2016 defending the Republicans' refusal to consider a nomination to the Supreme Court in the last year of Obama's presidency, and saying "you can use my words against me and you'd be absolutely right".<p><a href="https://www.youtube.com/watch?v=CR2A6FDiGEA" rel="nofollow">https://www.youtube.com/watch?v=CR2A6FDiGEA</a> is about Lindsey Graham in 2020 defending the Republicans' insistence on pushing through a nomination to the Supreme Court in the last year of Trump's presidency. It also includes a clip of Lindsay Graham in 2018 saying that if a Supreme Court vacancy opens once the primaries have started, "we'll wait till the next election".</p>
]]></description><pubDate>Fri, 13 Mar 2026 20:55:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47369748</link><dc:creator>gjm11</dc:creator><comments>https://news.ycombinator.com/item?id=47369748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47369748</guid></item></channel></rss>