<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: koenschipper</title><link>https://news.ycombinator.com/user?id=koenschipper</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 27 Apr 2026 08:34:51 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=koenschipper" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by koenschipper in "Grief and the AI split"]]></title><description><![CDATA[
<p>Yes, I absolutely agree. I have that same feeling that we have to keep up the pace. But it isn't realistic for everything to happen at that speed.<p>How do you deal with that feeling?</p>
]]></description><pubDate>Fri, 13 Mar 2026 15:59:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47366214</link><dc:creator>koenschipper</dc:creator><comments>https://news.ycombinator.com/item?id=47366214</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47366214</guid></item><item><title><![CDATA[New comment by koenschipper in "Grief and the AI split"]]></title><description><![CDATA[
<p>I think I'm in the middle. At first I was definitely against using any AI because I loved the craft, but over the past 12-18 months I've been using it more and more.<p>I still love to code by hand for a fun afternoon. In the long term, though, I think you're going to be left behind if you refuse to use AI at all.</p>
]]></description><pubDate>Fri, 13 Mar 2026 06:56:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47361422</link><dc:creator>koenschipper</dc:creator><comments>https://news.ycombinator.com/item?id=47361422</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47361422</guid></item><item><title><![CDATA[New comment by koenschipper in "Making WebAssembly a first-class language on the Web"]]></title><description><![CDATA[
<p>This article perfectly captures the frustration of the "WebAssembly wall." Writing and maintaining the JS glue code (or relying on opaque generation tools) feels like a big step backward when you just want to ship a performant module.<p>The 45% overhead reduction in the Dodrio experiment from skipping the JS glue is massive. But I'm curious about the memory management implications of the WebAssembly Component Model when interacting directly with Web APIs like the DOM.<p>If a Wasm Component bypasses JS entirely to manipulate the DOM, how does the garbage collection boundary work? Does the Component Model rely on the recently added Wasm GC proposal to keep DOM references alive, or does it still implicitly trigger the JS engine's garbage collector under the hood?<p>Really excited to see this standardized so we can finally treat Wasm as a true first-class citizen.</p>
]]></description><pubDate>Wed, 11 Mar 2026 18:44:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47339527</link><dc:creator>koenschipper</dc:creator><comments>https://news.ycombinator.com/item?id=47339527</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47339527</guid></item><item><title><![CDATA[New comment by koenschipper in "Show HN: I built a real-time OSINT dashboard pulling 15 live global feeds"]]></title><description><![CDATA[
<p>That live GPS jamming calculation using commercial flight NAC-P degradation is honestly brilliant. Such a clever use of existing public telemetry.<p>You mentioned compressing the FastAPI payloads by 90% to keep the browser from melting. I'm really curious about your approach there: did you just crank up gzip/brotli on the JSON responses, or did you have to switch to something like MessagePack, Protobuf, or a custom binary format to handle that volume of moving GeoJSON features?<p>Also, never apologize for the "movie hacker" UI. A project like this absolutely deserves that aesthetic. Awesome work!</p>
]]></description><pubDate>Wed, 11 Mar 2026 13:50:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47335550</link><dc:creator>koenschipper</dc:creator><comments>https://news.ycombinator.com/item?id=47335550</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47335550</guid></item><item><title><![CDATA[New comment by koenschipper in "Show HN: How I topped the HuggingFace open LLM leaderboard on two gaming GPUs"]]></title><description><![CDATA[
<p>This is an incredibly elegant hack. The finding that it only works with "circuit-sized" blocks of ~7 layers is fascinating. It really makes you wonder how much of a model's depth is just routing versus actual discrete processing units.<p>I spend a lot of time wrestling with smaller LLMs for strict data extraction and JSON formatting. Have you noticed whether duplicating these specific middle layers boosts a particular type of capability?<p>For example, does the model become more obedient to system prompts and strict formatting, or is the performance bump purely in general reasoning and knowledge retrieval?<p>Amazing work doing this on a basement 4090 rig!</p>
]]></description><pubDate>Wed, 11 Mar 2026 13:46:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47335514</link><dc:creator>koenschipper</dc:creator><comments>https://news.ycombinator.com/item?id=47335514</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47335514</guid></item></channel></rss>