<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: afspear</title><link>https://news.ycombinator.com/user?id=afspear</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 20:13:58 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=afspear" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by afspear in "Even 'uncensored' models can't say what they want"]]></title><description><![CDATA[
<p>I feel like that blog post was actually written by AI. I wondered what words were being nudged, and what effect it was having on me, the reader.</p>
]]></description><pubDate>Mon, 20 Apr 2026 23:59:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47842828</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=47842828</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47842828</guid></item><item><title><![CDATA[New comment by afspear in "Claude's Cycles [pdf]"]]></title><description><![CDATA[
<p>What would be super cool is if this dumb zone could be quantified and surfaced to the user. I've noticed that Copilot now has a little circle graph that indicates context-use percentage and changes color as it fills up. I'll bet that's a fairly naive metric of used tokens vs. available context. I wonder if metadata could be streamed or sent along with the tokens to show that you've entered the dumb zone.</p>
]]></description><pubDate>Tue, 03 Mar 2026 17:20:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47235596</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=47235596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47235596</guid></item><item><title><![CDATA[New comment by afspear in "I Used Claude to File My Taxes for Free"]]></title><description><![CDATA[
<p><a href="https://github.com/calef/us-federal-tax-assistant-skill" rel="nofollow">https://github.com/calef/us-federal-tax-assistant-skill</a> is the link to the skill that came out of this work.</p>
]]></description><pubDate>Tue, 03 Mar 2026 16:45:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47235085</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=47235085</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47235085</guid></item><item><title><![CDATA[New comment by afspear in "Halt and Catch Fire: TV’s best drama you’ve probably never heard of (2021)"]]></title><description><![CDATA[
<p>The opening of this show feels very relevant today. <a href="https://www.youtube.com/watch?v=ucSUs3adMQ8" rel="nofollow">https://www.youtube.com/watch?v=ucSUs3adMQ8</a></p>
]]></description><pubDate>Wed, 18 Feb 2026 03:55:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47056967</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=47056967</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47056967</guid></item><item><title><![CDATA[Give GitHub Copilot in VS Code a local memory]]></title><description><![CDATA[
<p>Article URL: <a href="https://marketplace.visualstudio.com/items?itemName=afspear.agent-recall">https://marketplace.visualstudio.com/items?itemName=afspear.agent-recall</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46969879">https://news.ycombinator.com/item?id=46969879</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 11 Feb 2026 02:06:49 +0000</pubDate><link>https://marketplace.visualstudio.com/items?itemName=afspear.agent-recall</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=46969879</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46969879</guid></item><item><title><![CDATA[New comment by afspear in "Open Claw Clone and Dev Containers"]]></title><description><![CDATA[
<p>The sec. vulns just keep coming: <a href="https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys" rel="nofollow">https://www.wiz.io/blog/exposed-moltbook-database-reveals-mi...</a></p>
]]></description><pubDate>Mon, 02 Feb 2026 16:20:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46857791</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=46857791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46857791</guid></item><item><title><![CDATA[Open Claw Clone and Dev Containers]]></title><description><![CDATA[
<p>Open Claw scares me. I keep seeing bots posting stuff that looks like sec. vulns. Also, file-level access to all my stuff is kind of a non-starter. Still, an autonomous agent that just churns on my code is a pretty interesting idea. I wonder if one of these agents could be stuffed into a dev container?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46857751">https://news.ycombinator.com/item?id=46857751</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 02 Feb 2026 16:17:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46857751</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=46857751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46857751</guid></item><item><title><![CDATA[New comment by afspear in "Let's be honest, Generative AI isn't going all that well"]]></title><description><![CDATA[
<p>Meanwhile I'm over here reducing my ADO ticket time estimates by 75%.</p>
]]></description><pubDate>Tue, 13 Jan 2026 23:25:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46609963</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=46609963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46609963</guid></item><item><title><![CDATA[New comment by afspear in "LLM Problems Observed in Humans"]]></title><description><![CDATA[
<p>Maybe we should find other datasets not generated by humans to train LLMs?</p>
]]></description><pubDate>Wed, 07 Jan 2026 16:36:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46528495</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=46528495</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46528495</guid></item><item><title><![CDATA[New comment by afspear in "I came back from Cursor to VS Code"]]></title><description><![CDATA[
<p>I never made this journey, but I have ended up with the same AI coding stack.</p>
]]></description><pubDate>Mon, 05 Jan 2026 20:47:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46504717</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=46504717</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46504717</guid></item><item><title><![CDATA[New comment by afspear in "Ask HN: Is Claude Code good enough already?"]]></title><description><![CDATA[
<p>At this point, I'm not so concerned about the interface (Claude Code vs. GitHub Copilot, etc.). Sometimes I need to use one over the other because of...reasons. But I do seem to keep coming back to the Anthropic models in particular. My rule of thumb is turning out to be:</p><p>1) How long is this taking?<br>2) Was it the right solution?</p>
<p>The first is pretty easy to get a feel for. The second is also a feeling I'm developing over time, but I am starting to trust the Anthropic models for all my coding.</p>
]]></description><pubDate>Tue, 16 Dec 2025 16:57:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46290970</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=46290970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46290970</guid></item><item><title><![CDATA[New comment by afspear in "OpenAI’s latest research paper demonstrates that falsehoods are inevitable"]]></title><description><![CDATA[
<p>The article says, "Consider the implications if ChatGPT started saying “I don’t know” to even 30% of queries – a conservative estimate based on the paper’s analysis of factual uncertainty in training data. Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly."</p>
<p>Maybe. But not me. I would trust it more, and rely on it even more. I can work with someone who says "I don't know" but is super smart. And I'll bet more people would do the same. Over time, the system may enjoy communal trust over and above what it currently enjoys.</p>
<p>In the long term, though, this may lead to something more dystopian than where we are now. We may all give it blind trust because everyone else trusts it. Give that a decade or so, and then the system goes wrong... Yikes.</p>
<p>For now, we have to grapple with the standing advice that "ChatGPT can make mistakes. Check important info." And we do. Because we have to, or at least some of us do. And that is a good thing.</p>
]]></description><pubDate>Mon, 15 Sep 2025 22:35:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45255801</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=45255801</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45255801</guid></item><item><title><![CDATA[New comment by afspear in "Hosting a website on a disposable vape"]]></title><description><![CDATA[
<p>To be fair, it's a disposable vape.</p>
]]></description><pubDate>Mon, 15 Sep 2025 18:22:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45253196</link><dc:creator>afspear</dc:creator><comments>https://news.ycombinator.com/item?id=45253196</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45253196</guid></item></channel></rss>