<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mskkm</title><link>https://news.ycombinator.com/user?id=mskkm</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 27 Apr 2026 17:41:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mskkm" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mskkm in "TurboQuant: A first-principles walkthrough"]]></title><description><![CDATA[
<p>The public comments on Openreview now include explicit allegations that the TurboQuant paper knowingly misrepresented RaBitQ and understated RaBitQ’s results. The RaBitQ authors also report in a technical note that several of TurboQuant’s runtime and recall numbers do not reproduce from the released code under the paper’s stated setup. In the note, TurboQuant generally loses to RaBitQ: <a href="https://arxiv.org/abs/2604.19528" rel="nofollow">https://arxiv.org/abs/2604.19528</a>. If these public allegations hold up, then this is not just overhype or sloppy citation practice, but points to a distorted comparison and benchmark claims that do not survive reproduction.</p>
]]></description><pubDate>Mon, 27 Apr 2026 08:20:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47919017</link><dc:creator>mskkm</dc:creator><comments>https://news.ycombinator.com/item?id=47919017</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47919017</guid></item><item><title><![CDATA[New comment by mskkm in "KV Cache Compression 900000x Beyond TurboQuant and Per-Vector Shannon Limit"]]></title><description><![CDATA[
<p>Went through ICLR review with scores of 4, 4, 6, 10. Seriously?
Open-source implementations: where is the official code?
CUDA kernels: where?</p>
]]></description><pubDate>Tue, 21 Apr 2026 21:14:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47854582</link><dc:creator>mskkm</dc:creator><comments>https://news.ycombinator.com/item?id=47854582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47854582</guid></item><item><title><![CDATA[New comment by mskkm in "TurboQuant: Redefining AI efficiency with extreme compression"]]></title><description><![CDATA[
<p>Seems to be a scam:<p>"The TurboQuant paper (ICLR 2026) contains serious issues in how it describes RaBitQ, including incorrect technical claims and misleading theory/experiment comparisons.
We flagged these issues to the authors before submission. They acknowledged them, but chose not to fix them. The paper was later accepted and widely promoted by Google, reaching tens of millions of views.<p>We’re speaking up now because once a misleading narrative spreads, it becomes much harder to correct. We’ve written a public comment on openreview (<a href="https://openreview.net/forum?id=tO3ASKZlok" rel="nofollow">https://openreview.net/forum?id=tO3ASKZlok</a>).<p>We would greatly appreciate your attention and help in sharing it."<p><a href="https://x.com/gaoj0017/status/2037532673812443214" rel="nofollow">https://x.com/gaoj0017/status/2037532673812443214</a></p>
]]></description><pubDate>Mon, 30 Mar 2026 09:29:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47572239</link><dc:creator>mskkm</dc:creator><comments>https://news.ycombinator.com/item?id=47572239</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47572239</guid></item><item><title><![CDATA[New comment by mskkm in "TurboQuant: Redefining AI efficiency with extreme compression"]]></title><description><![CDATA[
<p>This is not an LLM inference result. Table 2 is the part I find most questionable. Claiming orders-of-magnitude improvements in vector search over standard methods is an extraordinary claim. If it actually held up in practice, I would have expected to see independent reproductions or real-world adoption by now. It’s been about a year since the paper came out, and I haven’t seen much of either. That doesn’t prove the claim is false, but it certainly doesn’t inspire confidence.</p>
]]></description><pubDate>Wed, 25 Mar 2026 13:39:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47517221</link><dc:creator>mskkm</dc:creator><comments>https://news.ycombinator.com/item?id=47517221</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47517221</guid></item><item><title><![CDATA[New comment by mskkm in "TurboQuant: Redefining AI efficiency with extreme compression"]]></title><description><![CDATA[
<p>They confirmed the accuracy on NIAH but didn't reproduce the claimed 8x efficiency.</p>
]]></description><pubDate>Wed, 25 Mar 2026 10:40:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47515571</link><dc:creator>mskkm</dc:creator><comments>https://news.ycombinator.com/item?id=47515571</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47515571</guid></item><item><title><![CDATA[New comment by mskkm in "TurboQuant: Redefining AI efficiency with extreme compression"]]></title><description><![CDATA[
<p>Pied Piper vibes. As far as I can tell, this algorithm is hardly compatible with modern GPU architectures. My guess is that’s why the paper reports accuracy-vs-space but conveniently avoids reporting inference wall-clock time. The baseline numbers also look seriously underreported. “Several orders of magnitude” speedups for vector search? Really? Has anyone actually reproduced these results?</p>
]]></description><pubDate>Wed, 25 Mar 2026 09:05:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47514995</link><dc:creator>mskkm</dc:creator><comments>https://news.ycombinator.com/item?id=47514995</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47514995</guid></item><item><title><![CDATA[New comment by mskkm in "Rug pulls, forks, and open-source feudalism"]]></title><description><![CDATA[
<p>true</p>
]]></description><pubDate>Fri, 12 Sep 2025 14:11:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=45222356</link><dc:creator>mskkm</dc:creator><comments>https://news.ycombinator.com/item?id=45222356</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45222356</guid></item></channel></rss>