<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: IntrepidPig</title><link>https://news.ycombinator.com/user?id=IntrepidPig</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 22 Apr 2026 16:45:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=IntrepidPig" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by IntrepidPig in "CrabTrap: An LLM-as-a-judge HTTP proxy to secure agents in production"]]></title><description><![CDATA[
<p>Blatant “astroturfing” in these comments</p>
]]></description><pubDate>Wed, 22 Apr 2026 04:50:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47859159</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=47859159</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47859159</guid></item><item><title><![CDATA[New comment by IntrepidPig in "The worst volume control UI in the world (2017)"]]></title><description><![CDATA[
<p>If you long press the volume bar in control center then it opens a larger version you can drag to adjust more precisely.</p>
]]></description><pubDate>Sat, 21 Mar 2026 06:39:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47464555</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=47464555</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47464555</guid></item><item><title><![CDATA[New comment by IntrepidPig in "The American Healthcare Conundrum"]]></title><description><![CDATA[
<p>Probably so. The table heading “Key Finding” smells rankly of LLM, as does the massive overconfidence that they’ve single-handedly figured out American healthcare with a little data science, the kind of confidence only an LLM or a schizophrenic could muster (I haven’t read past the first part of the README because I don’t waste my time with slop, but I’m assuming they ignore the incentive structures that keep the system this way). Then there’s the simple fact that they call out a completely meaningless $3T gap that doesn’t account for population difference at all. It’s strange, because they mention the per capita difference right before that, and that’s the number that matters. But they still go on about the $3T gap, and even measure the issues as percentages of it. It’s nonsensical, right? I’m really tired of this.</p>
]]></description><pubDate>Tue, 17 Mar 2026 12:14:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47411609</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=47411609</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47411609</guid></item><item><title><![CDATA[New comment by IntrepidPig in "Coding after coders: The end of computer programming as we know it?"]]></title><description><![CDATA[
<p>> “The reason that tech generally — and coders in particular — see L.L.M.s differently than everyone else is that in the creative disciplines, L.L.M.s take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.”<p>This doesn’t really make sense to me. GenAI ostensibly removes the drudgery from other creative endeavors too. You don’t need to make every painstaking brushstroke anymore; you can get to your intended final product faster than ever. I think what’s commonly misunderstood is that the drudgery is really inseparable from the soulful part.<p>Also, I think GenAI in coding actually has the exact same failure modes as GenAI in painting, music, art, writing, etc. The output lacks depth, it lacks context, and it lacks an understanding of its own purpose. For most people, it’s much easier to intuitively see those shortcomings of GenAI manifest in traditional creative mediums, just because they come more naturally to us. For coding, I suspect the same shortcomings apply; they just aren’t as clear.<p>I mean, at the end of the day, if writing code is just about getting something that works, then sure, let’s blitz away with LLMs and not bother to understand what we’re doing or why we do it anymore. Maybe I’m naive in thinking that coding has creative value that we’re now throwing away, possibly forever.</p>
]]></description><pubDate>Fri, 13 Mar 2026 21:46:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47370397</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=47370397</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47370397</guid></item><item><title><![CDATA[New comment by IntrepidPig in "Scientists create ultra fast memory using light"]]></title><description><![CDATA[
<p>God it infuriates me</p>
]]></description><pubDate>Thu, 11 Dec 2025 08:12:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46228871</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=46228871</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46228871</guid></item><item><title><![CDATA[New comment by IntrepidPig in "Why I'm Learning Sumerian"]]></title><description><![CDATA[
<p>Truly what is going on</p>
]]></description><pubDate>Fri, 14 Nov 2025 09:36:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45925348</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=45925348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45925348</guid></item><item><title><![CDATA[New comment by IntrepidPig in "Why Fei-Fei Li and Yann LeCun are both betting on "world models""]]></title><description><![CDATA[
<p>I always felt like one of the reasons LLMs are so good is that they piggyback on the many years that have gone into developing language as an information representation/compression format. I don’t know if there’s anything similar a world model can take advantage of.<p>That being said, there have been models which are pretty effective at other things that don’t use language, so maybe it’s a non-issue.</p>
]]></description><pubDate>Fri, 14 Nov 2025 03:28:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45923525</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=45923525</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45923525</guid></item><item><title><![CDATA[New comment by IntrepidPig in "GPT-5.1: A smarter, more conversational ChatGPT"]]></title><description><![CDATA[
<p>Maybe until the model outputs some affirming preamble, it’s still somewhat probable that it might disagree with the user’s request? So the agreement fluff is kind of like it making the decision to heed the request. Especially if we consider tokens as the medium by which the model “thinks”. Not to anthropomorphize the damn things too much.<p>Also, I wonder if it could be a side effect of all the supposed alignment efforts that go into training. If you train in a bunch of negative reinforcement samples where the model says something like “sorry I can’t do that”, maybe it pushes the model to say things like “sure I’ll do that” in positive cases too?<p>Disclaimer that I am just yapping</p>
]]></description><pubDate>Thu, 13 Nov 2025 06:30:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45911458</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=45911458</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45911458</guid></item><item><title><![CDATA[New comment by IntrepidPig in "I can build enterprise software but I can't charge for it"]]></title><description><![CDATA[
<p>[flagged]</p>
]]></description><pubDate>Wed, 12 Nov 2025 01:48:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45895471</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=45895471</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45895471</guid></item><item><title><![CDATA[New comment by IntrepidPig in "The Case Against PGVector"]]></title><description><![CDATA[
<p>> Post-filter works when your filter is permissive. Here’s where it breaks: imagine you ask for 10 results with LIMIT 10. pgvector finds the 10 nearest neighbors, then applies your filter. Only 3 of those 10 are published. You get 3 results back, even though there might be hundreds of relevant published documents slightly further away in the embedding space.<p>Is this really how it works? That seems like it’s returning an incorrect result.</p>
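<p>For what it’s worth, the failure mode the article describes can be sketched without any database at all. A toy illustration in plain Python (hypothetical data, not pgvector itself): find the K nearest items first and filter afterward, versus filtering first and then taking the K nearest.</p>

```python
# Hypothetical corpus: (doc_id, distance_to_query, published) tuples.
# Every 4th document is "published"; distance grows with the id.
docs = [(i, float(i), i % 4 == 0) for i in range(100)]

K = 10

# Post-filter: take the K nearest neighbors FIRST, then apply the filter.
# Whatever the filter removes is simply gone from the result set.
nearest = sorted(docs, key=lambda d: d[1])[:K]
post_filtered = [d for d in nearest if d[2]]

# Pre-filter: restrict to published docs, THEN take the K nearest.
pre_filtered = sorted((d for d in docs if d[2]), key=lambda d: d[1])[:K]

print(len(post_filtered))  # 3  (only docs 0, 4, 8 survive the filter)
print(len(pre_filtered))   # 10 (plenty of published docs sit further away)
```

<p>So yes, if the index scan stops at LIMIT before the filter runs, you can get fewer rows than actually match, exactly as the quoted passage claims.</p>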
]]></description><pubDate>Mon, 03 Nov 2025 16:49:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=45801152</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=45801152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45801152</guid></item><item><title><![CDATA[New comment by IntrepidPig in "NanoChat – The best ChatGPT that $100 can buy"]]></title><description><![CDATA[
<p>Yeah it feels similar to inventing the nuke. Or it’s even more insidious because the harmful effects of the tech are not nearly as obvious or immediate as the good effects, so less restraint is applied. But also, similar to the nuke, once the knowledge on how to do it is out there, someone’s going to use it, which obligates everyone else to use it to keep up.</p>
]]></description><pubDate>Mon, 13 Oct 2025 19:15:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45572210</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=45572210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45572210</guid></item><item><title><![CDATA[New comment by IntrepidPig in "Cycling linked to lower dementia risk and better brain health, researchers find"]]></title><description><![CDATA[
<p>The age-old tension remains taut</p>
]]></description><pubDate>Sat, 21 Jun 2025 08:32:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44335772</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=44335772</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44335772</guid></item><item><title><![CDATA[New comment by IntrepidPig in "I convinced HP's board to buy Palm and watched them kill it"]]></title><description><![CDATA[
<p>Nothing about this makes any sense. We’ve already got a number of people pointing out flaws like why did he wait 15 years to write about it, why does it look like it was written by an LLM, and is it really reasonable to blame such a massive failure completely on your peers and not take an ounce of responsibility yourself? But these things all start to make sense once you actually reach the end of the article and realize it’s all a ploy to sell you his fancy new equivalent to a self-help book, which you can tell is legit because its name is a forced acronym. Can we take this off the front page please?</p>
]]></description><pubDate>Fri, 13 Jun 2025 19:28:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44271486</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=44271486</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44271486</guid></item><item><title><![CDATA[New comment by IntrepidPig in "FyneDesk – Linux desktop environment in Go"]]></title><description><![CDATA[
<p>It must be X given that they recommend installing xbacklight, arandr, and Compton alongside it.</p>
]]></description><pubDate>Fri, 12 Apr 2024 19:26:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=40016719</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=40016719</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40016719</guid></item><item><title><![CDATA[New comment by IntrepidPig in "One sleepless night can rapidly reverse depression for several days in mice"]]></title><description><![CDATA[
<p>Was this written by an LLM?</p>
]]></description><pubDate>Thu, 02 Nov 2023 21:34:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=38120512</link><dc:creator>IntrepidPig</dc:creator><comments>https://news.ycombinator.com/item?id=38120512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38120512</guid></item></channel></rss>