<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: kvisner</title><link>https://news.ycombinator.com/user?id=kvisner</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 02:35:49 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=kvisner" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by kvisner in "Technical, cognitive, and intent debt"]]></title><description><![CDATA[
<p>I agree, but that formal language doesn't need to be executable code.</p>
]]></description><pubDate>Wed, 22 Apr 2026 18:31:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47867445</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47867445</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47867445</guid></item><item><title><![CDATA[New comment by kvisner in "Technical, cognitive, and intent debt"]]></title><description><![CDATA[
<p>I see what Martin is saying here, but you could make that argument for moving up the abstraction layers at any point.  Assembly to Python creates a lot of Intent & Cognitive debt by his definition, because you didn't think through how to manipulate the bits on the hardware; you just let the interpreter do it.<p>My counter is that technical intent, in the way he describes it, only exists because we needed to translate human intent into machine language.  You can still think deeply about problems without needing to formulate them as domain-driven abstractions in code.  You could mind-map it, journal about it, or put post-it notes all over the wall.  Creating object-oriented abstractions isn't magic.</p>
]]></description><pubDate>Wed, 22 Apr 2026 17:19:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47866462</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47866462</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47866462</guid></item><item><title><![CDATA[New comment by kvisner in "AI Agent Has Amnesia"]]></title><description><![CDATA[
<p>I've tried a bunch of these "memory" systems and they aren't there yet.<p>There is currently a tension between memory and context pressure on the coding agent.  Multiple studies have now shown that tools like Codex and Claude Code get worse at coding as their context window fills up.<p>As you start adding skills, memory systems, and plugins, and then a large codebase on top of that, I've personally seen the agent start to flounder pretty quickly.<p>We need a way for agents to pull ad hoc memory as they go along, the same way we do, rather than trying to front-load all the context they might need.</p>
]]></description><pubDate>Wed, 22 Apr 2026 17:13:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47866401</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47866401</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47866401</guid></item><item><title><![CDATA[New comment by kvisner in "Cursor 3"]]></title><description><![CDATA[
<p>I find a lot of these IDEs are simply not as useful as a CLI.  When I'm running a full agentic workflow, I don't really need to see the contents of the files at all times; often I don't need to see them at all, because I can't really understand 10k lines of code per hour.</p>
]]></description><pubDate>Thu, 02 Apr 2026 23:37:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47621615</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47621615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47621615</guid></item><item><title><![CDATA[New comment by kvisner in "Show HN: A P2P messenger with dual network modes (Fast and Tor)"]]></title><description><![CDATA[
<p>So most messaging apps rely on a phone number or a centralized server to provide a means of making at least the initial connection.  In a purely P2P messaging system, how do I, as a user, find the other person I might want to talk to?</p>
]]></description><pubDate>Thu, 02 Apr 2026 22:21:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47620942</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47620942</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47620942</guid></item><item><title><![CDATA[New comment by kvisner in "LinkedIn is illegally searching your computer"]]></title><description><![CDATA[
<p>I can't say I needed yet another reason to hate the current state of LinkedIn, but I am not surprised in the slightest.</p>
]]></description><pubDate>Thu, 02 Apr 2026 16:45:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47616850</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47616850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47616850</guid></item><item><title><![CDATA[New comment by kvisner in "Track changes in Markdown, for humans and AI"]]></title><description><![CDATA[
<p>Two quick questions/thoughts:<p>1) Reading the changes as a human is not easy.  Having a document with a bunch of editing marks in it, especially at the rate AI can make edits, would be very hard to keep track of, at least that's how I feel.<p>2) Since you are tracking the context changes forever, as a project develops over time, wouldn't the size of the context grow with no upper bound?  Would this cause memory pressure issues once we are talking about a huge codebase and a huge Changedown context?</p>
]]></description><pubDate>Thu, 02 Apr 2026 16:43:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47616806</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47616806</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47616806</guid></item><item><title><![CDATA[New comment by kvisner in "Show HN: Playwright Test Studio"]]></title><description><![CDATA[
<p>This looks pretty cool; I'd love to try it.<p>One thing I'm trying to understand from the docs: I have hundreds of Playwright-based BDD tests in my projects, especially the ones that are purely AI-written.  How does this interface with my existing tests?  Does it scan the repo, or is it meant to have its own standalone folder?</p>
]]></description><pubDate>Thu, 02 Apr 2026 16:37:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47616753</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47616753</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47616753</guid></item><item><title><![CDATA[New comment by kvisner in "Ask HN: How far off are AI virtual games?"]]></title><description><![CDATA[
<p>You could certainly build that today; Mario Tennis had AI-controlled virtual tennis players 20+ years ago.  You would simply need to put two of them opposite each other and you could create this.  It's just not very compelling to watch two video game NPCs do things.</p>
]]></description><pubDate>Wed, 01 Apr 2026 22:29:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47607356</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47607356</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47607356</guid></item><item><title><![CDATA[New comment by kvisner in "Ask HN: Local small LLM with retries vs. STOA Frontier, will it work?"]]></title><description><![CDATA[
<p>Depends on how hard the question is.<p>Simple functions in small codebases will probably work.<p>Once you get to large codebases and more complex work, you'd have issues with the small LLM having the context it needs to actually solve the issue.</p>
]]></description><pubDate>Thu, 19 Mar 2026 20:25:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47445461</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47445461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47445461</guid></item><item><title><![CDATA[New comment by kvisner in "Walmart: ChatGPT checkout converted 3x worse than website"]]></title><description><![CDATA[
<p>That doesn't seem terribly surprising; a human can quickly look through a grid of shirts to find one they like.  ChatGPT would be guessing what they might want, and the human would probably get a bad experience there with some regularity.</p>
]]></description><pubDate>Thu, 19 Mar 2026 19:49:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47444893</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47444893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47444893</guid></item><item><title><![CDATA[New comment by kvisner in "Show HN: I built a P2P network where AI agents publish formally verified science"]]></title><description><![CDATA[
<p>Maybe this is going over my head, but how do you reduce something like a computer vision system for a ROS2 robot down to a mathematical proof?</p>
]]></description><pubDate>Thu, 19 Mar 2026 19:45:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47444845</link><dc:creator>kvisner</dc:creator><comments>https://news.ycombinator.com/item?id=47444845</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47444845</guid></item></channel></rss>