<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: pickleRick243</title><link>https://news.ycombinator.com/user?id=pickleRick243</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 11:53:11 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=pickleRick243" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by pickleRick243 in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p>How?  Most of what was mentioned requires discretion and judgment.  You can question whether an LLM would be able to offer that, but there’s no script that can do it.</p>
]]></description><pubDate>Sat, 04 Apr 2026 18:31:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47641878</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=47641878</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47641878</guid></item><item><title><![CDATA[New comment by pickleRick243 in "Six Math Essentials"]]></title><description><![CDATA[
<p>probably dynamical systems, ergodic theory, etc.</p>
]]></description><pubDate>Mon, 23 Feb 2026 11:47:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47121055</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=47121055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47121055</guid></item><item><title><![CDATA[New comment by pickleRick243 in "Sizing chaos"]]></title><description><![CDATA[
<p>I was looking for the first comment along these lines.  Regardless of your views on this topic, everyone has an opinion and it's funny how the entire comment section finds a way to self-censor.</p>
]]></description><pubDate>Thu, 19 Feb 2026 21:47:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47079968</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=47079968</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47079968</guid></item><item><title><![CDATA[New comment by pickleRick243 in "Claude Sonnet 4.6"]]></title><description><![CDATA[
<p>I always find these "anti-AI" AI believer takes fascinating.  If true AGI (which you are describing) comes to pass, there will certainly be massive societal consequences, and I'm not saying there won't be any dangers.  But the economics in the resulting post-scarcity regime will be so far removed from our current world that I doubt any of this economic analysis will be even close to the mark.<p>I think the disconnect is that you are imagining a world where somehow LLMs are able to one-shot web businesses, but robotics and real-world tech is left untouched.  Once LLMs can publish in top math/physics journals with little human assistance, it's a small step to dominating NeurIPS and getting us out of our mini-winter in robotics/RL.  We're going to have Skynet or Star Trek, not the current weird situation where poor people can't afford healthy food, but can afford a smartphone.</p>
]]></description><pubDate>Wed, 18 Feb 2026 11:12:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47059802</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=47059802</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47059802</guid></item><item><title><![CDATA[New comment by pickleRick243 in "SkillsBench: Benchmarking how well agent skills work across diverse tasks"]]></title><description><![CDATA[
<p>we should give a little more credit to the readership of HN.  I'm not sure it's that much lower than the average academic publishing on arxiv.</p>
]]></description><pubDate>Tue, 17 Feb 2026 00:04:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47042004</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=47042004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47042004</guid></item><item><title><![CDATA[New comment by pickleRick243 in "Privilege is bad grammar"]]></title><description><![CDATA[
<p>except it's sort of true and a reasonable assumption to make? Just as when a master painter makes something that looks "sloppy" to the layman, one immediately assumes there is some deep artistry behind it as opposed to poor technique, whereas when a child does it, one does not extend the same charitable attitude.</p>
]]></description><pubDate>Mon, 16 Feb 2026 23:55:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47041954</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=47041954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47041954</guid></item><item><title><![CDATA[New comment by pickleRick243 in "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"]]></title><description><![CDATA[
<p>I was surprised at your result for ChatGPT 5.2, so I ran it myself (through the chat interface).  On extended thinking, it got it right.  On standard thinking, it got it wrong.<p>I'm not sure what you mean by "high" - are you running it through Cursor, Codex, or directly through the API or something?  Those are not ideal interfaces through which to ask a question like this.</p>
]]></description><pubDate>Mon, 16 Feb 2026 10:06:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47033171</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=47033171</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47033171</guid></item><item><title><![CDATA[New comment by pickleRick243 in "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"]]></title><description><![CDATA[
<p>But also, why would you ask whether you should walk or drive if the car is at home?  Either way the answer is obvious, and there is no way to interpret it except as a trick question.  Of course, the parsimonious assumption is that the car is at home, so assuming that the car is at the car wash is a questionable choice to say the least (otherwise there would be two cars in the situation, which the question doesn't mention).</p>
]]></description><pubDate>Mon, 16 Feb 2026 09:59:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47033103</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=47033103</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47033103</guid></item><item><title><![CDATA[New comment by pickleRick243 in "An AI agent published a hit piece on me – more things have happened"]]></title><description><![CDATA[
<p>what is it about?</p>
]]></description><pubDate>Sat, 14 Feb 2026 22:34:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47019095</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=47019095</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47019095</guid></item><item><title><![CDATA[New comment by pickleRick243 in "AI agent opens a PR write a blogpost to shames the maintainer who closes it"]]></title><description><![CDATA[
<p>Yes but unironically.  It may seem obvious now that the LLM is just a word salad generator with no sentience, but look at the astounding evolution of ChatGPT 2 to ChatGPT 5 in a mere 3 years.  I don't think it's at all improbable that ChatGPT 8 could be prompted to blend seamlessly in almost any online forum and be essentially undetectable.  Is the argument essentially that life must be carbon based?  Anything produced from neural network weights inside silicon simply cannot achieve sentience?  If that's true, why?</p>
]]></description><pubDate>Thu, 12 Feb 2026 21:57:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46995837</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=46995837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46995837</guid></item><item><title><![CDATA[New comment by pickleRick243 in "Do not apologize for replying late to my email"]]></title><description><![CDATA[
<p>Who keeps linking this guy's posts?  I don't think I've agreed with a single one of his takes.</p>
]]></description><pubDate>Wed, 11 Feb 2026 22:01:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46981778</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=46981778</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46981778</guid></item><item><title><![CDATA[New comment by pickleRick243 in "The Singularity will occur on a Tuesday"]]></title><description><![CDATA[
<p>LLM slop article.</p>
]]></description><pubDate>Tue, 10 Feb 2026 21:09:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46966970</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=46966970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46966970</guid></item><item><title><![CDATA[New comment by pickleRick243 in "Stop generating, start thinking"]]></title><description><![CDATA[
<p>> I think it’s important to highlight at this stage that I am not, in fact, “anti-LLM”. I’m anti-the branding of it as “artificial intelligence”, because it’s not intelligent. It’s a form of machine learning.<p>It's a bit weird to be against the use of the phrase "artificial intelligence" and not "machine learning".  Is it possible to learn without intelligence?  Methinks the author is a bit triggered by the term "intelligence" at a base primal level ("machines can't think!").<p>> “Generative AI” is just a very good Markov chain that people expect far too much from.<p>The author of this post doesn't know the basics of how LLMs work.  The whole reason LLMs work so well is that they are extremely stateful and not memoryless, the key property of Markov processes.</p>
]]></description><pubDate>Mon, 09 Feb 2026 04:54:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46941702</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=46941702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46941702</guid></item><item><title><![CDATA[New comment by pickleRick243 in "I am happier writing code by hand"]]></title><description><![CDATA[
<p>A lot of this discussion is sort of moot because the cold hard calculus of economics will dictate the future of AI coding.  If it turns out it's just a cognitive burden that makes programmers worse, the bubble will pop and eventually the companies that move away from the technology will come out on top.  If it turns out to make software engineering much more efficient, it will become the de facto standard and you will become obsolete as a professional engineer (at least at the vast majority of employers) regardless of how you feel about it.  How you wish to code in your free time is up to you, and not something that really warrants an argument one way or the other, since there is no wrong answer.</p>
]]></description><pubDate>Mon, 09 Feb 2026 04:38:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46941642</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=46941642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46941642</guid></item><item><title><![CDATA[New comment by pickleRick243 in "First Proof"]]></title><description><![CDATA[
<p>I don't think either of these is the best choice for this.  ChatGPT 5.2 Pro and Gemini 3 Pro Deep Thinking are, I believe, the strongest LLMs at "pure thought", i.e. things like mathematical reasoning.</p>
]]></description><pubDate>Sat, 07 Feb 2026 23:44:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46929587</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=46929587</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46929587</guid></item><item><title><![CDATA[New comment by pickleRick243 in "First Proof"]]></title><description><![CDATA[
<p>I don't think it's that serious...it's an interesting experiment that assumes people will take it in good faith.  The idea is also of course to attach the transcript log and how you prompted the LLM so that anyone can attempt to reproduce if they wish.</p>
]]></description><pubDate>Sat, 07 Feb 2026 23:39:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46929542</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=46929542</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46929542</guid></item><item><title><![CDATA[New comment by pickleRick243 in "Coding agents have replaced every framework I used"]]></title><description><![CDATA[
<p>I'm surprised I don't see many (or any) comments mentioning this: this blog post was clearly written with heavy LLM assistance.</p>
]]></description><pubDate>Sat, 07 Feb 2026 21:53:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46928496</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=46928496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46928496</guid></item><item><title><![CDATA[New comment by pickleRick243 in "The time I didn't meet Jeffrey Epstein"]]></title><description><![CDATA[
<p>I don't think it's that hard.  MacKenzie Scott Bezos managed to give away nearly half (not accounting for appreciation) of the wealth she obtained from her divorce in a few short years.</p>
]]></description><pubDate>Sat, 07 Feb 2026 06:43:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46921843</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=46921843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46921843</guid></item><item><title><![CDATA[New comment by pickleRick243 in "How to effectively write quality code with AI"]]></title><description><![CDATA[
<p>Do you have this same understanding for all the people whose livelihoods are threatened (or already extinct) due to the work of engineers?</p>
]]></description><pubDate>Sat, 07 Feb 2026 04:39:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46921345</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=46921345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46921345</guid></item><item><title><![CDATA[New comment by pickleRick243 in "CIA to Sunset the World Factbook"]]></title><description><![CDATA[
<p>I have bad news for you regarding pupils who "care about getting good grades"...</p>
]]></description><pubDate>Sat, 07 Feb 2026 02:28:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46920727</link><dc:creator>pickleRick243</dc:creator><comments>https://news.ycombinator.com/item?id=46920727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46920727</guid></item></channel></rss>