<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: betula_ai</title><link>https://news.ycombinator.com/user?id=betula_ai</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 00:07:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=betula_ai" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by betula_ai in "Show HN: Use Claude's projects feature with any LLM"]]></title><description><![CDATA[
<p>Very timely. One thing I struggle with these days is that even my starred chats are far too many, and my titles aren't descriptive enough - it would be helpful to add search. You mention "easy to find your conversation" on your site - perhaps you mean search (I haven't registered yet).</p>
]]></description><pubDate>Fri, 21 Feb 2025 20:20:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=43132450</link><dc:creator>betula_ai</dc:creator><comments>https://news.ycombinator.com/item?id=43132450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43132450</guid></item><item><title><![CDATA[New comment by betula_ai in "Show HN: Benchmarking VLMs vs. Traditional OCR"]]></title><description><![CDATA[
<p>Thank you for sharing this. Some of the other public models that we can host ourselves may perform better in practice than the models listed - e.g. Qwen 2.5 VL <a href="https://github.com/QwenLM/Qwen2.5-VL?tab=readme-ov-file">https://github.com/QwenLM/Qwen2.5-VL?tab=readme-ov-file</a></p>
]]></description><pubDate>Fri, 21 Feb 2025 19:20:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43131719</link><dc:creator>betula_ai</dc:creator><comments>https://news.ycombinator.com/item?id=43131719</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43131719</guid></item><item><title><![CDATA[New comment by betula_ai in "I think Yann Lecun was right about LLMs (but perhaps only by accident)"]]></title><description><![CDATA[
<p>Thank you for this informative and thoughtful post. An interesting twist on the error accumulation that grows as autoregressive models generate more output is the recent success of language diffusion models, which predict multiple tokens simultaneously. At every step of the revision process they apply a remasking strategy that re-masks low-confidence tokens. Regardless, your observations perhaps still apply. <a href="https://arxiv.org/pdf/2502.09992" rel="nofollow">https://arxiv.org/pdf/2502.09992</a></p>
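<p>To make the remasking idea concrete, here is a minimal toy sketch of low-confidence remasking. The model call is a random stand-in (not the paper's actual network), and the keep-fraction schedule is a simplifying assumption; only the control flow - fill all masks, then re-mask the least confident guesses each step - reflects the technique.</p>

```python
import random

MASK = "<mask>"

def fake_model(tokens):
    """Stand-in for the diffusion LM: for each masked position,
    return a (token, confidence) guess. Purely illustrative."""
    vocab = ["the", "cat", "sat", "on", "a", "mat"]
    return {i: (random.choice(vocab), random.random())
            for i, t in enumerate(tokens) if t == MASK}

def denoise_step(tokens, keep_fraction):
    """One reverse step with low-confidence remasking: guess every
    masked slot, then commit only the most confident guesses and
    leave the rest masked for the next step."""
    guesses = fake_model(tokens)
    if not guesses:
        return tokens
    n_keep = max(1, int(len(guesses) * keep_fraction))
    # Rank guesses by confidence, highest first.
    ranked = sorted(guesses.items(), key=lambda kv: kv[1][1], reverse=True)
    out = list(tokens)
    for i, (tok, _conf) in ranked[:n_keep]:
        out[i] = tok
    return out

def generate(length, steps):
    """Start fully masked; unmask progressively more tokens per step,
    so the final step (keep_fraction=1.0) commits everything left."""
    tokens = [MASK] * length
    for s in range(steps):
        tokens = denoise_step(tokens, keep_fraction=(s + 1) / steps)
    return tokens
```

<p>Each pass fills in more of the sequence, so errors made with low confidence early on get a chance to be revised rather than compounding autoregressively.</p>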
]]></description><pubDate>Fri, 21 Feb 2025 18:50:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=43131365</link><dc:creator>betula_ai</dc:creator><comments>https://news.ycombinator.com/item?id=43131365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43131365</guid></item></channel></rss>