<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: loaderchips</title><link>https://news.ycombinator.com/user?id=loaderchips</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 13 May 2026 15:24:13 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=loaderchips" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by loaderchips in "Reimagining the mouse pointer for the AI era"]]></title><description><![CDATA[
<p>It's beautiful how the human mind can take something very obvious but overlooked and make it into this fantastic innovation. Fab stuff.</p>
]]></description><pubDate>Tue, 12 May 2026 18:20:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=48112191</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=48112191</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48112191</guid></item><item><title><![CDATA[New comment by loaderchips in "Push events into a running session with channels"]]></title><description><![CDATA[
<p>Very well put. I love Claude, but Anthropic as a company sucks.</p>
]]></description><pubDate>Fri, 20 Mar 2026 11:52:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47453305</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=47453305</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47453305</guid></item><item><title><![CDATA[New comment by loaderchips in "[dead]"]]></title><description><![CDATA[
<p>TL;DR<p>The Problem: When your AI fails, "the algorithm did it" won't fly. Insurance, courts, and regulators need a human name.
The Pattern: Ships got captains. Bridges got licensed engineers. Planes got pilots. Medicine got attending physicians. Same reason: you can't punish "the team."
The Solution: System Liability Engineer (SLE) = one person who understands the system, has veto power, signs their name, and faces career consequences if it causes serious harm.
The Timeline: Insurance exclusions already at 28%. Courts asking "who was responsible?" by 2026. Mandatory by 2030. You can get ahead or get dragged.
The Litmus Test: Ask them: "If this system causes serious harm, are you prepared to explain it publicly and accept being fired?" If the answer isn't "yes," they're not an SLE.
Why It Works: AI can fake text, images, and code. It can't fake: years building reputation, a specific human body signing documents, finite career at stake, real legal consequences.
What To Do: Name one person SLE for your highest-stakes AI system this week. Give them veto power in writing. Have them map "who gets hurt, how badly." That's it—you're 80% there.
The Real Reason: When making truth-claims costs nothing, only institutions grounded in irreversible human cost survive. The SLE is that cost.</p>
]]></description><pubDate>Mon, 05 Jan 2026 10:24:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46497173</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=46497173</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46497173</guid></item><item><title><![CDATA[Generative Intuition]]></title><description><![CDATA[
<p>Article URL: <a href="https://nikitph.medium.com/generative-intuition-a1048fb2b820">https://nikitph.medium.com/generative-intuition-a1048fb2b820</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46443747">https://news.ycombinator.com/item?id=46443747</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 31 Dec 2025 13:03:28 +0000</pubDate><link>https://nikitph.medium.com/generative-intuition-a1048fb2b820</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=46443747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46443747</guid></item><item><title><![CDATA[New comment by loaderchips in "Why “negative vectors” can't delete data in FAISS – but weighted kernels can"]]></title><description><![CDATA[
<p>Thank you for the thoughtful comment. Your questions are valid given the title, which I used to make the post more accessible to a general HN audience. To clarify: the core distinction here is not kernelization vs kNN, but field evaluation vs point selection (or selection vs superposition as retrieval semantics). The kernel is just a concrete example.<p>FAISS implements selection (argmax ⟨q,v⟩), so vectors are discrete atoms and deletion must be structural. The weighted formulation represents a field: vectors act as sources whose influence superposes into a potential. Retrieval evaluates that field (or follows its gradient), not a point identity. In this regime, deletion is algebraic (append -v for cancellation), evaluation is sparse/local, and no index rebuild is required.<p>The paper goes into this in more detail.</p>
]]></description><pubDate>Mon, 22 Dec 2025 04:26:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46351313</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=46351313</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46351313</guid></item><item><title><![CDATA[Why “negative vectors” can't delete data in FAISS – but weighted kernels can]]></title><description><![CDATA[
<p>The fix for machine unlearning in vector databases turns out to be conceptually simple, but it requires changing the semantics of retrieval.<p>Standard FAISS-style indices store vectors and compute:<p>argmax ⟨q, vᵢ⟩<p>If you insert -v, nothing happens. It’s just another point. The original vector is still maximally similar to itself and remains rank-1.<p>This isn’t a bug—it’s a consequence of selection-based retrieval.<p>If instead you store (vector, weight) pairs and evaluate:
φ(q) = Σ wᵢ · K(q, vᵢ)<p>you get a different object entirely: a field, not a selection. Now inserting the same vector with w = −1 causes destructive interference. The contribution cancels. The attractor disappears.<p>Deletion becomes O(1) append-only (add the inverse), not a structural rebuild.<p>FAISS-style:   Vec<Vec<f32>>              → argmax   (selection)
Weighted form: Vec<(Vec<f32>, f32)>       → Σ        (field)<p>We validated this on 100k vectors:
 • FAISS: target stays rank-1 after “deletion”
 • Field-based model: exact cancellation (φ → 0), target unretrievable<p>The deeper point is that this isn’t a trick—it’s a semantic separation.
 • FAISS implements a selection operator over discrete points.
 • The weighted version implements a field operator where vectors act as kernels in a continuous potential.
 • Retrieval becomes gradient ascent to local maxima.
 • Deletion becomes destructive interference that removes attractors.<p>This shifts deletion from structural (modify index, rebuild, filter) to algebraic (append an inverse element). You get append-only logs, reversible unlearning, and auditable deletion records. The negative weight is the proof.<p>Implication: current vector DBs can’t guarantee GDPR/CCPA erasure without reconstruction. Field-based retrieval can—provably.<p>Paper with proofs:
<a href="https://github.com/nikitph/bloomin/blob/master/negative-vector-experiment/paper/main.pdf" rel="nofollow">https://github.com/nikitph/bloomin/blob/master/negative-vect...</a></p>
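The selection-vs-field distinction can be sketched in a few lines of NumPy. This is a toy illustration under assumed parameters (a Gaussian kernel and unit weights, which the post leaves unspecified), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 8
vecs = rng.normal(size=(n, d))
target = vecs[42]
q = target.copy()

# Selection semantics (FAISS-style): argmax inner product.
# Appending -target just adds another point; the original stays rank-1.
db = np.vstack([vecs, -target])
assert np.argmax(db @ q) == 42  # "deletion" had no effect

# Field semantics: weighted kernel superposition.
def phi(q, vecs, weights, sigma=1.0):
    # Gaussian kernel field: sum_i w_i * exp(-||q - v_i||^2 / (2 * sigma^2))
    d2 = np.sum((vecs - q) ** 2, axis=1)
    return float(np.sum(weights * np.exp(-d2 / (2 * sigma**2))))

weights = np.ones(n)
before = phi(q, vecs, weights)

# Algebraic deletion: append the same vector with weight -1 (O(1), append-only).
vecs2 = np.vstack([vecs, target])
weights2 = np.append(weights, -1.0)
after = phi(q, vecs2, weights2)

# The target's own contribution (exp(0) = 1) cancels exactly.
assert np.isclose(before - after, 1.0)
```

With selection, the negated copy changes nothing; with the field, the target's attractor is removed by destructive interference, with no index rebuild.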
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46298855">https://news.ycombinator.com/item?id=46298855</a></p>
<p>Points: 21</p>
<p># Comments: 4</p>
]]></description><pubDate>Wed, 17 Dec 2025 06:26:59 +0000</pubDate><link>https://github.com/nikitph/bloomin/tree/master/negative-vector-experiment</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=46298855</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46298855</guid></item><item><title><![CDATA[Transformers Must Hallucinate]]></title><description><![CDATA[
<p>Article URL: <a href="https://medium.com/@nikitph/why-transformers-must-hallucinate-7c2a8fc3b3be">https://medium.com/@nikitph/why-transformers-must-hallucinate-7c2a8fc3b3be</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46228210">https://news.ycombinator.com/item?id=46228210</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 11 Dec 2025 06:08:18 +0000</pubDate><link>https://medium.com/@nikitph/why-transformers-must-hallucinate-7c2a8fc3b3be</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=46228210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46228210</guid></item><item><title><![CDATA[New comment by loaderchips in "DeepSeek OCR"]]></title><description><![CDATA[
<p>Not sure why I'm getting downvoted. I would love to have a technical discussion on the validity of my suggestions.</p>
]]></description><pubDate>Mon, 20 Oct 2025 12:24:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45643102</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=45643102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45643102</guid></item><item><title><![CDATA[New comment by loaderchips in "DeepSeek OCR"]]></title><description><![CDATA[
<p>Great work, guys! How about replacing the global encoder with a Mamba (state-space) vision backbone to eliminate the O(n²) attention bottleneck, enabling linear-complexity encoding of high-resolution documents? Pair this with a non-autoregressive (Non-AR) decoder, such as Mask-Predict or iterative refinement, that generates all output tokens in parallel instead of sequentially. Together, this creates a fully parallelizable vision-to-text pipeline that addresses both major bottlenecks in DeepSeek-OCR.</p>
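The Non-AR half of this suggestion can be sketched as a Mask-Predict style loop. The scoring function below is a stand-in stub, and the vocabulary size and re-masking schedule are placeholders, not anything from DeepSeek-OCR:

```python
import numpy as np

def mask_predict(logits_fn, length, iterations=4, mask_id=0):
    """Toy Mask-Predict loop: predict every position in parallel, then
    iteratively re-mask and re-predict the lowest-confidence positions."""
    tokens = np.full(length, mask_id)
    for t in range(iterations):
        probs = logits_fn(tokens)          # (length, vocab): one parallel pass
        tokens = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        # Linear schedule: re-mask a shrinking fraction each iteration.
        n_mask = int(length * (1 - (t + 1) / iterations))
        if n_mask == 0:
            break
        tokens[np.argsort(conf)[:n_mask]] = mask_id
    return tokens

# Stand-in "model": deterministic probabilities peaked on a fixed sequence.
target = np.array([3, 1, 4, 1, 5])
def toy_logits(tokens):
    probs = np.full((len(target), 10), 0.01)
    probs[np.arange(len(target)), target] = 0.9
    return probs

out = mask_predict(toy_logits, length=5)
assert (out == target).all()
```

Each iteration is one parallel forward pass, so decoding cost scales with the iteration count rather than the output length, which is the point of pairing it with a linear-time encoder.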
]]></description><pubDate>Mon, 20 Oct 2025 12:16:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45643027</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=45643027</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45643027</guid></item><item><title><![CDATA[New comment by loaderchips in "Mercury: Ultra-fast language models based on diffusion"]]></title><description><![CDATA[
<p>I wonder how fast this would be when run on something like Groq.</p>
]]></description><pubDate>Tue, 08 Jul 2025 07:52:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44497991</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=44497991</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44497991</guid></item><item><title><![CDATA[Ragged – Leveraging Video Container Formats for Efficient Vector DB Distribution]]></title><description><![CDATA[
<p>Hello HN community,<p>Longtime lurker and really happy to be writing this post. I'm excited to share a proof of concept I've been working on for efficient vector database distribution called Ragged. In my paper and PoC, I explore leveraging the MP4 video container format to store and distribute high-dimensional vectors for semantic search applications.<p>The idea behind Ragged is to encode vectors and their metadata into MP4 files using custom tracks, allowing seamless distribution through existing Content Delivery Networks (CDNs). This approach maintains compatibility with standard video infrastructure while achieving comparable search performance to traditional vector databases.<p>Key highlights of my work include:
- A novel encoding scheme for high-dimensional vectors and metadata into MP4 container formats.
- CDN-optimized architecture with HTTP range requests, fragment-based access patterns, and intelligent prefetching.
- Comprehensive evaluation showing significant improvements in cold-start latency and global accessibility.
- An open-source implementation to facilitate reproduction and adoption.<p>I was inspired by the innovative work of Memvid (https://github.com/Olow304/memvid), which demonstrated the potential of using video formats for data storage. My project builds on this concept with a focus on CDNs and semantic search.<p>I believe Ragged offers a promising solution for deploying semantic search capabilities in edge computing and serverless environments, leveraging the mature video distribution ecosystem. Sharing indexed knowledge bases as offline MP4 files could also unlock a new class of applications.<p>I'm eager to hear your thoughts, feedback, and any potential use cases you envision for this approach. You can find the full paper and implementation details <a href="https://github.com/nikitph/ragged" rel="nofollow">here</a>.<p>Thank you for your time, everyone.<p>Nikit</p>
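A minimal sketch of the layout idea that makes CDN range requests work: fixed-size records mean a record index maps directly to a byte range. This toy packs vectors into a flat binary blob, not Ragged's actual MP4 custom tracks, and the record format is an assumption:

```python
import struct

DIM = 4
REC_FMT = f"<I{DIM}f"                 # record: uint32 id + DIM float32 components
REC_SIZE = struct.calcsize(REC_FMT)   # fixed size -> byte offsets are computable

def encode(records):
    """Pack (id, vector) pairs into a flat, range-addressable byte blob."""
    return b"".join(struct.pack(REC_FMT, i, *v) for i, v in records)

def byte_range(index):
    """HTTP Range header value for the i-th record."""
    start = index * REC_SIZE
    return f"bytes={start}-{start + REC_SIZE - 1}"

def decode_one(blob, index):
    rec = struct.unpack_from(REC_FMT, blob, index * REC_SIZE)
    return rec[0], list(rec[1:])

blob = encode([(0, [1.0, 2.0, 3.0, 4.0]), (7, [5.0, 6.0, 7.0, 8.0])])
print(byte_range(1))        # bytes=20-39
print(decode_one(blob, 1))  # (7, [5.0, 6.0, 7.0, 8.0])
```

Because offsets are arithmetic, a client (or edge worker) can fetch exactly the records a query needs from a dumb CDN object, which is what enables the cold-start and prefetching behavior described above.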
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44403785">https://news.ycombinator.com/item?id=44403785</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 28 Jun 2025 11:02:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44403785</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=44403785</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44403785</guid></item><item><title><![CDATA[New comment by loaderchips in "CoreNet: A library for training deep neural networks"]]></title><description><![CDATA[
<p>You have articulated what I have been feeling towards Apple really well. I like their products, but their philosophy and approach are not up to par.</p>
]]></description><pubDate>Wed, 24 Apr 2024 03:44:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=40140302</link><dc:creator>loaderchips</dc:creator><comments>https://news.ycombinator.com/item?id=40140302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40140302</guid></item></channel></rss>