<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: priowise</title><link>https://news.ycombinator.com/user?id=priowise</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 16:44:22 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=priowise" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by priowise in "LoGeR – 3D reconstruction from extremely long videos (DeepMind, UC Berkeley)"]]></title><description><![CDATA[
<p>Makes sense, thanks for the explanation. The compressed map representation + test-time training part sounds especially interesting.</p><p>Does the approach hold up well when the environment changes over time (lighting, objects moved, etc.), or does it assume mostly static scenes?</p>
]]></description><pubDate>Wed, 11 Mar 2026 14:29:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47336033</link><dc:creator>priowise</dc:creator><comments>https://news.ycombinator.com/item?id=47336033</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47336033</guid></item></channel></rss>