<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: krschacht</title><link>https://news.ycombinator.com/user?id=krschacht</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 17:21:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=krschacht" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by krschacht in "Small models also found the vulnerabilities that Mythos found"]]></title><description><![CDATA[
<p>At the end of this article it states, "Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior"). A real autonomous discovery pipeline starts from a full codebase with no hints." I'm not a cybersecurity expert, but isn't 80% of the challenge finding where the exploit lives in the code!?<p>That really undermines the author's claims. This article feels dishonest in its claim that "small, cheap, open-weights models ... recovered much of the same analysis."</p>
]]></description><pubDate>Mon, 13 Apr 2026 14:14:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47752304</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=47752304</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47752304</guid></item><item><title><![CDATA[Show HN: Clippy – screen-aware voice AI in the browser]]></title><description><![CDATA[
<p>A friend and I built a browser prototype that answers questions about whatever’s on your screen using getDisplayMedia, client-side wake-word detection, and server-side multimodal inference.<p>Hard parts:<p>– Getting the model to point to specific UI elements<p>– Keeping it coherent across multi-step workflows (“Help me create a sword in Tinkercad”)<p>– Preventing the infinite mirror effect and confusion between window vs full-screen sharing<p>– Keeping voice → screenshot → inference → voice latency low enough to feel conversational<p>We packaged it as “Clippy” for fun, but the real experiment is letting a model tool-call fresh screenshots to help it gather more context.<p>One practical use case is remote tech support — I'm sending this to my mom next time she calls instead of screen sharing.<p>Curious what breaks.</p>
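<p>For the curious, a minimal sketch of the screen-capture half (the wake-word and inference plumbing are omitted, and screenshot() just stands in for the tool call that fetches a fresh frame):</p><pre><code>// Minimal sketch, assuming a plain browser context.
async function startScreenAssistant() {
  // Prompts the user to share a tab, window, or full screen
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  // The model tool-calls this whenever it wants more visual context
  async function screenshot() {
    canvas.getContext("2d")!.drawImage(video, 0, 0);
    return canvas.toDataURL("image/png"); // base64 PNG for the inference server
  }

  return { stream, screenshot };
}
</code></pre>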
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47429822">https://news.ycombinator.com/item?id=47429822</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 18 Mar 2026 18:50:40 +0000</pubDate><link>https://RememberClippy.com</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=47429822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47429822</guid></item><item><title><![CDATA[New comment by krschacht in "Ask HN: Why has ChatGPT disabled links to websites?"]]></title><description><![CDATA[
<p>Dang, that would be unfortunate if this is driven by ads.</p>
]]></description><pubDate>Wed, 04 Mar 2026 16:09:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47249572</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=47249572</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47249572</guid></item><item><title><![CDATA[Ask HN: Why has ChatGPT disabled links to websites?]]></title><description><![CDATA[
<p>I was just using ChatGPT to help me pick an SDK library. It mentions a few options by name (e.g. Baileys, whatsapp-web.js), but when I click those names, instead of opening the source page in a browser like it used to, it now opens a modal and uses ChatGPT to basically generate a fake homepage for the tool.<p>From what I can tell, there is no longer any way to easily get to the underlying web page that was referenced in generating its answer to my question.<p>This feels like a pretty meaningful step backwards. Am I missing something?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47247891">https://news.ycombinator.com/item?id=47247891</a></p>
<p>Points: 6</p>
<p># Comments: 4</p>
]]></description><pubDate>Wed, 04 Mar 2026 14:31:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47247891</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=47247891</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47247891</guid></item><item><title><![CDATA[New comment by krschacht in "Company as Code"]]></title><description><![CDATA[
<p>I think I see the problem… and a possible solution.<p>The main problem with attempting to document this is that the system would not be running off of it. Your infrastructure document is automatically read and drives the deploy (or whatever). But if you want to change a human’s responsibilities, you don’t get the simplicity of updating your organization documentation and clicking “execute.” So this new documentation would always be lagging documentation rather than the actual driver of organizational behavior.<p>But! What if it were? What if all the managers in an organization were AI systems? They would read a diff of the org chart, and that would initiate the communication with the respective human employees.<p>I could imagine testing this in a coffee-shop-level business right now, where an LLM is probably capable of all the strategy and management decisions needed to run it effectively, operating within the constraints of policies and procedures laid out cleanly in documentation.</p>
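<p>To make that concrete, a toy sketch of the mechanism (the data shapes and notifyEmployee() are hypothetical):</p><pre><code>// The org chart is data; a diff of it drives the AI manager's outreach.
interface Role { owner: string; responsibilities: string[] }
type OrgChart = { [team: string]: Role };

function diffOrg(before: OrgChart, after: OrgChart) {
  const changes = [];
  for (const team of Object.keys(after)) {
    const now = after[team];
    const was = before[team];
    if (!was || was.owner !== now.owner ||
        was.responsibilities.join() !== now.responsibilities.join()) {
      changes.push({ team, role: now });
    }
  }
  return changes;
}

// "Clicking execute": each diff entry becomes a conversation an AI
// manager initiates with the affected human employee.
// diffOrg(prev, next).forEach((c) => notifyEmployee(c.role.owner, c.team));
</code></pre>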
]]></description><pubDate>Fri, 06 Feb 2026 15:50:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46914298</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46914298</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46914298</guid></item><item><title><![CDATA[New comment by krschacht in "After two years of vibecoding, I'm back to writing by hand"]]></title><description><![CDATA[
<p>For something like tests, where I have very specific opinions on how I want them written, I keep a simple doc (tests.md) and regularly tag Claude with it.<p>Claude writes a bunch of new code and I’ll tell it, “Before I review this code, make sure all tests adhere to the guidance of @tests.md” (you can probably make this a slash command too).<p>I find that if I put these instructions in the system prompt, far down in a conversation that has used up a lot of the context window, they will only loosely be followed. But when I tag the doc in like this, Claude strongly and thoughtfully follows the guidance and examples I’ve written up about how I want my tests.</p>
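<p>As a slash command in Claude Code, something like this should work (the file name here is just an example; it becomes /check-tests):</p><pre><code># File: .claude/commands/check-tests.md
Before I review this code, make sure all tests adhere to the
guidance and examples in @tests.md. Fix any that don't.
</code></pre>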
]]></description><pubDate>Thu, 29 Jan 2026 13:49:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46810159</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46810159</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46810159</guid></item><item><title><![CDATA[New comment by krschacht in "Flux 2 Klein pure C inference"]]></title><description><![CDATA[
<p>antirez — how do you reliably get Claude to re-read the file after compaction? It's easy to let Claude run for a while; it compacts and then gets much worse, and I don't always catch the moment of compaction in time to tell it to re-read the notes file.</p>
]]></description><pubDate>Sat, 24 Jan 2026 21:34:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46747936</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46747936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46747936</guid></item><item><title><![CDATA[New comment by krschacht in "I hate GitHub Actions with passion"]]></title><description><![CDATA[
<p>Run your own server for GitHub Actions. GitHub provides a simple runner application you install so your machine gets registered with your repo. Then you can SSH in whenever a job fails. This lets you fully inspect the state and execute one-off commands to test theories. It’s a much faster way to iterate.</p>
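<p>Roughly, the setup looks like this (OWNER/REPO and the token are placeholders; the token comes from the repo’s Settings → Actions → Runners page):</p><pre><code># One-time setup on your server, using GitHub's runner package:
./config.sh --url https://github.com/OWNER/REPO --token YOUR_TOKEN
./run.sh

# Then in .github/workflows/ci.yml, point jobs at your machine:
jobs:
  test:
    runs-on: self-hosted
</code></pre>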
]]></description><pubDate>Sat, 17 Jan 2026 02:35:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46654754</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46654754</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46654754</guid></item><item><title><![CDATA[New comment by krschacht in "Ask HN: What are the most futuristic computing interfaces you've seen?"]]></title><description><![CDATA[
<p>Here are two of mine:<p>Volumetric displays: <a href="https://youtu.be/na7pvihXhYs?si=cIWIk2yrv-WVbe4x" rel="nofollow">https://youtu.be/na7pvihXhYs?si=cIWIk2yrv-WVbe4x</a><p>Physically adaptive desktop: <a href="https://tangible.media.mit.edu/project/transform-as-dynamic-and-adaptive-furniture" rel="nofollow">https://tangible.media.mit.edu/project/transform-as-dynamic-...</a><p>I'm eager to learn about ones I've never seen!</p>
]]></description><pubDate>Tue, 06 Jan 2026 21:07:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46518757</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46518757</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46518757</guid></item><item><title><![CDATA[New comment by krschacht in "Ask HN: What kind of setup do you run for your children?"]]></title><description><![CDATA[
<p>I don’t lock down my kids’ computer use. I know risks exist, but I think their unrestricted access to computers and the internet is far more beneficial than harmful.<p>I know I’m highly unusual amongst my friends. I’ve also found it odd that the more knowledgeable someone is about tech, the more scared they are of their kids using the internet.<p>Riding a bike and swimming in a pool are also quite dangerous, yet I encourage my kids to do both and just educate them about the risks. The benefit-vs-risk of the internet seems FAR better than that of a bike or a pool, so I take the same approach: I just educate them.</p>
]]></description><pubDate>Tue, 06 Jan 2026 21:00:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46518656</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46518656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46518656</guid></item><item><title><![CDATA[Ask HN: What are the most futuristic computing interfaces you've seen?]]></title><description><![CDATA[
<p>What are the most futuristic computing interfaces you can think of that just aren’t widespread yet?<p>(In line with Gibson’s famous quote: "The future is already here—it's just not evenly distributed.")</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46518622">https://news.ycombinator.com/item?id=46518622</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 06 Jan 2026 20:57:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46518622</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46518622</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46518622</guid></item><item><title><![CDATA[New comment by krschacht in "Ask HN: Why don't sci-fi interfaces ever turn into real products?"]]></title><description><![CDATA[
<p>(1) and (2) are good points. Particularly (2), because movies may intentionally add steps or slow things down so a viewer can follow along, which would be at odds with daily use.<p>However, I still think there's something to be said for movies attempting to build UIs that have a strong aesthetic and elicit an emotional response, whereas production apps feel flat and boring in comparison.<p>I still wonder why we aren't seeing people push the envelope stylistically to "wow" users.</p>
]]></description><pubDate>Mon, 05 Jan 2026 18:19:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46502520</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46502520</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46502520</guid></item><item><title><![CDATA[New comment by krschacht in "Ask HN: Why don't sci-fi interfaces ever turn into real products?"]]></title><description><![CDATA[
<p>My assumption is a bit different: the UIs in science fiction are intended to communicate information. Typically that's to advance the storyline, but there's a lot of overlap with real apps. Maybe more importantly, UIs in movies are meant to elicit a feeling. Maybe it's a shallow feeling of "this is cool!" but that's where production apps mostly seem to give up. "It works, let's move on..." is the bar in most cases, rather than "let's wow users!"<p>This might be a clearer articulation of what I'm trying to get at with my question...</p>
]]></description><pubDate>Mon, 05 Jan 2026 18:16:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46502481</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46502481</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46502481</guid></item><item><title><![CDATA[New comment by krschacht in "Anyone building software for wearable tech?"]]></title><description><![CDATA[
<p>I've been keeping an eye on the SDKs for Meta glasses and Even Realities, but they don't yet support much access.</p>
]]></description><pubDate>Mon, 05 Jan 2026 16:39:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46500996</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46500996</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46500996</guid></item><item><title><![CDATA[Ask HN: Why don't sci-fi interfaces ever turn into real products?]]></title><description><![CDATA[
<p>Is there a good reason why the UIs we see in sci-fi don't make it into actual products?<p>Maybe there is some true limitation inherent in this style of UI that causes it to remain a mere experiment / novelty?<p>I'm thinking about demos like these, which I've seen go by on HN over the years (I'm not involved in any of them):<p>LCARS demo: <a href="https://www.mewho.com/ritos/" rel="nofollow">https://www.mewho.com/ritos/</a><p>Tron-like terminal: <a href="https://github.com/GitSquared/edex-ui" rel="nofollow">https://github.com/GitSquared/edex-ui</a><p>Sci-fi UI components: <a href="https://github.com/arwes/arwes" rel="nofollow">https://github.com/arwes/arwes</a><p>And I've long been a fan of designers like gmunk doing UI work for film: <a href="https://gmunk.com/Oblivion-GFX" rel="nofollow">https://gmunk.com/Oblivion-GFX</a><p>But I'm genuinely confused why the actual UIs we use daily are all so derivative of one another when the space of UI possibilities is so vast.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46500724">https://news.ycombinator.com/item?id=46500724</a></p>
<p>Points: 2</p>
<p># Comments: 5</p>
]]></description><pubDate>Mon, 05 Jan 2026 16:24:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46500724</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46500724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46500724</guid></item><item><title><![CDATA[New comment by krschacht in "Show HN: I built a MCP Server for stock analysis (9% alpha vs. VOO)"]]></title><description><![CDATA[
<p>What does it say about moving to cash now? :)</p>
]]></description><pubDate>Sat, 27 Dec 2025 20:36:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46405010</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46405010</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46405010</guid></item><item><title><![CDATA[Show HN: Voice-first todo list that updates live as you talk]]></title><description><![CDATA[
<p>I built a demo of a voice AI task manager. You speak naturally and it updates your visible task list in real time.<p>Try it here, no sign up: <a href="https://taskmaster.keithschacht.com" rel="nofollow">https://taskmaster.keithschacht.com</a><p>I find it useful to talk aloud to figure out priorities. With this, I can just ramble, “Mark that first task as done. Actually undo that. Add a task to proofread it. Move that to the top…”<p>It’s built on LiveKit with a Rails web UI. It listens continuously, maps speech to tool calls, and keeps full task-list state so it understands vague references like “the third item” or “the one with my kids.”<p>Voice feels much faster for input, but visual feedback is still higher bandwidth for output. This is intentionally rough—a demo focused on interaction, not features.<p>Curious how the interaction model feels to people.<p>Code: <a href="https://github.com/keithschacht/taskmaster" rel="nofollow">https://github.com/keithschacht/taskmaster</a></p>
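<p>The vague-reference handling is conceptually something like this (a simplified sketch, not the actual code from the repo above):</p><pre><code>interface Task { id: number; title: string; done: boolean }

const ORDINALS = ["first", "second", "third", "fourth", "fifth"];

// Resolve a spoken reference ("the third item", "the one with my kids")
// against the full task-list state the agent keeps in context.
function resolveRef(tasks: Task[], ref: string) {
  const lower = ref.toLowerCase();
  const i = ORDINALS.findIndex((word) => lower.includes(word));
  if (i !== -1) return tasks[i];

  // Otherwise fuzzy-match against task-title keywords
  return tasks.find((t) =>
    t.title.toLowerCase().split(" ").some((w) => w.length > 3 && lower.includes(w))
  );
}
</code></pre>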
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46403351">https://news.ycombinator.com/item?id=46403351</a></p>
<p>Points: 5</p>
<p># Comments: 1</p>
]]></description><pubDate>Sat, 27 Dec 2025 17:22:20 +0000</pubDate><link>https://taskmaster.keithschacht.com</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46403351</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46403351</guid></item><item><title><![CDATA[New comment by krschacht in "Show HN: Continuous Claude – run Claude Code in a loop"]]></title><description><![CDATA[
<p>I find most human agents can only produce high quality tests if you give them detailed guidance and good starting examples. :)</p>
]]></description><pubDate>Sat, 22 Nov 2025 17:40:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46016548</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=46016548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46016548</guid></item><item><title><![CDATA[New comment by krschacht in "Vibe engineering"]]></title><description><![CDATA[
<p>Since you are convinced you’re using the tools to their full potential, the quality problem you experience is 100% the tools’ fault. That means there is no possible change in your own behavior that would yield better results. This is one of those beliefs that is self-fulfilling.<p>I’ve found it much more useful in life to always assume I’m not doing something to its full potential.</p>
]]></description><pubDate>Wed, 08 Oct 2025 20:06:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45520030</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=45520030</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45520030</guid></item><item><title><![CDATA[New comment by krschacht in "Vibe engineering"]]></title><description><![CDATA[
<p>Yes, most days I’m 2x as productive. I’m using Claude Code to produce extremely high quality code that closely follows my coding standards and the architecture of my app.</p>
]]></description><pubDate>Wed, 08 Oct 2025 13:00:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45515690</link><dc:creator>krschacht</dc:creator><comments>https://news.ycombinator.com/item?id=45515690</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45515690</guid></item></channel></rss>