<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: totalview</title><link>https://news.ycombinator.com/user?id=totalview</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 16 May 2026 09:50:35 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=totalview" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by totalview in "Image-blaster: Creates 3D environments, SFX, and meshes from a single image"]]></title><description><![CDATA[
<p>There are no good workflows for doing this for a whole scene, especially where accuracy needs to be preserved. One-shot models seem to be very good at providing 3D from a single angle, but multi-shot is very shoddy at this point, and without seeing behind things you have no clue what is actually there.<p>Also, try Meshy and look at how many polygons or triangles you get from any of the model objects: hundreds of thousands, and even after you retopologize you are still in the high tens of thousands.</p>
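<p>For context on those triangle counts, here is a minimal, hedged sketch of automated decimation with Open3D, a stand-in for the manual retopology step; the file names and the 50,000-triangle target are assumptions, not anything Meshy ships:</p>
<pre><code>
# Reduce a dense generated/scanned mesh's triangle count with quadric
# decimation (Open3D). An automated stand-in for hand retopology: it
# preserves overall shape but does not produce clean quad topology.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("model.obj")   # hypothetical Meshy export
print("input triangles:", len(mesh.triangles))  # often hundreds of thousands

decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
print("decimated triangles:", len(decimated.triangles))
o3d.io.write_triangle_mesh("model_decimated.obj", decimated)
</code></pre>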
]]></description><pubDate>Fri, 15 May 2026 23:28:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=48155261</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=48155261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48155261</guid></item><item><title><![CDATA[Ppisp: Cleaner Representations of Gaussian Splats and NeRFs]]></title><description><![CDATA[
<p>Article URL: <a href="https://research.nvidia.com/labs/sil/projects/ppisp/">https://research.nvidia.com/labs/sil/projects/ppisp/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47064884">https://news.ycombinator.com/item?id=47064884</a></p>
<p>Points: 7</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 18 Feb 2026 19:04:22 +0000</pubDate><link>https://research.nvidia.com/labs/sil/projects/ppisp/</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=47064884</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47064884</guid></item><item><title><![CDATA[Meta Locate Objects in 3D]]></title><description><![CDATA[
<p>Article URL: <a href="https://locate3d.atmeta.com/">https://locate3d.atmeta.com/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43944398">https://news.ycombinator.com/item?id=43944398</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 10 May 2025 09:42:31 +0000</pubDate><link>https://locate3d.atmeta.com/</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=43944398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43944398</guid></item><item><title><![CDATA[New comment by totalview in "Red Cat wins U.S. Army next-gen drone contract over Skydio"]]></title><description><![CDATA[
<p>This is a big loss for Skydio, as the contract was valued in excess of $100MM, and it comes right as they are raising another large round of funding.</p>
]]></description><pubDate>Sat, 30 Nov 2024 01:46:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=42278745</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=42278745</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42278745</guid></item><item><title><![CDATA[Red Cat wins U.S. Army next-gen drone contract over Skydio]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.therobotreport.com/red-cat-wins-u-s-army-next-gen-drone-contract-over-skydio/">https://www.therobotreport.com/red-cat-wins-u-s-army-next-gen-drone-contract-over-skydio/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42278741">https://news.ycombinator.com/item?id=42278741</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Sat, 30 Nov 2024 01:44:34 +0000</pubDate><link>https://www.therobotreport.com/red-cat-wins-u-s-army-next-gen-drone-contract-over-skydio/</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=42278741</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42278741</guid></item><item><title><![CDATA[New comment by totalview in "Fix photogrammetry bridges so that they are not "solid" underneath (2020)"]]></title><description><![CDATA[
<p>I forgot to mention that you could also perform terrestrial laser scanning or SLAM underneath the bridge and fuse those together with the aerial photogrammetry to get a unified model, but this is even more effort and more post-processing.<p>I also have to wonder to what degree you want a truly accurate representation of all bridges available for anyone to see. Bad actors can ruin a lot of cool stuff.</p>
]]></description><pubDate>Tue, 01 Oct 2024 03:00:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=41704261</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=41704261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41704261</guid></item><item><title><![CDATA[New comment by totalview in "Fix photogrammetry bridges so that they are not "solid" underneath (2020)"]]></title><description><![CDATA[
<p>This is what happens to photogrammetry reconstruction around water and other areas where there are not enough camera angles seeing underneath the bridge to determine the 3D geometry. This is usually rectified in post-processing by Google or someone else serious about representing these bridges nicely. Often that means a 3D modeler’s time, or, if you are really enterprising, you can fly underneath the same bridge with a smaller drone with the camera pointing up, so you can get really accurate photos of the underside from which to do SfM/COLMAP. I wish this were solvable by GenAI, but garbage in, garbage out really applies here. You don’t know what the structure of that bridge looks like underneath the roadway unless you’ve taken multiple photos of it, and no two bridges are identical. I’m sure someone could train an AI on every bridge imaginable and we could get something better?</p>
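<p>For the SfM/COLMAP step mentioned above, here is a minimal sketch using pycolmap (COLMAP’s Python bindings); the paths are assumptions, and the under-bridge photos would simply sit in the same image directory as the aerial set:</p>
<pre><code>
# Sparse SfM reconstruction with pycolmap: feature extraction, exhaustive
# matching, then incremental mapping. Paths are illustrative assumptions.
import pathlib
import pycolmap

database_path = "bridge/database.db"
image_dir = "bridge/images"    # aerial + upward-facing under-bridge photos
output_path = "bridge/sparse"
pathlib.Path(output_path).mkdir(parents=True, exist_ok=True)

pycolmap.extract_features(database_path, image_dir)  # SIFT features per image
pycolmap.match_exhaustive(database_path)             # pairwise feature matching
maps = pycolmap.incremental_mapping(database_path, image_dir, output_path)
</code></pre>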
]]></description><pubDate>Tue, 01 Oct 2024 02:19:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=41704094</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=41704094</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41704094</guid></item><item><title><![CDATA[New comment by totalview in "Gaussian Splatting SLAM"]]></title><description><![CDATA[
<p>I will believe this when I can actually measure scenes from Gaussians accurately (I have tried multiple papers’ worth of experiments, with dismal results). No one in the reality capture industry uses splats for anything other than visualization of water- and sky-heavy scenes, because that is where a Gaussian splat actually renders in a nice way. I look forward to the advancements that NeRF and GS will bring, but for now there is no foundational reason why they can extrapolate any more data than COLMAP or GLOMAP, when the input data is the major factor in defining scene detail.</p>
]]></description><pubDate>Wed, 14 Aug 2024 03:10:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=41242179</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=41242179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41242179</guid></item><item><title><![CDATA[New comment by totalview in "Gaussian Splatting SLAM"]]></title><description><![CDATA[
<p>I love the “3D Gaussian Visualisation” section that illustrates the difference between photos of the mono data and the splat data. The splats are like a giant point cloud under the hood, except that unlike point-cloud points, which have uniform size, individual splats vary in size and shape.<p>This is all well and good when you are just using it for a pretty visualization, but it appears Gaussians have the same weakness as point clouds processed with structure from motion, in that you need lots of camera angles to get quality surface reconstruction accuracy.</p>
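<p>A tiny numpy sketch of that difference, on synthetic data: a point-cloud point is just a position, while each 3D Gaussian also carries a covariance whose eigenvalues give it its own size and shape:</p>
<pre><code>
# Synthetic illustration only: five "splats" sharing positions with a
# point cloud, but each carrying its own anisotropic covariance.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(-1, 1, size=(5, 3))      # what a point cloud stores

scales = rng.uniform(0.01, 0.3, size=(5, 3))     # per-splat axis scales
covariances = np.stack([np.diag(s**2) for s in scales])

for p, cov in zip(positions, covariances):
    radii = np.sqrt(np.linalg.eigvalsh(cov))     # per-axis extent (std dev)
    print(p.round(2), "extent:", radii.round(3)) # varies splat to splat
</code></pre>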
]]></description><pubDate>Mon, 12 Aug 2024 14:19:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=41224681</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=41224681</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41224681</guid></item><item><title><![CDATA[New comment by totalview in "SAM 2: Segment Anything in Images and Videos"]]></title><description><![CDATA[
<p>We are using it to segment different pieces of an industrial facility (pipes, valves, etc.) before classification.</p>
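<p>For anyone curious what that looks like in code, here is a minimal, hedged sketch of prompted segmentation with SAM 2’s image predictor (facebookresearch/sam2); the checkpoint/config paths, image file, and click coordinates are assumptions for illustration:</p>
<pre><code>
# Prompted segmentation with SAM 2: one foreground click on, say, a valve.
import numpy as np
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor(
    build_sam2("configs/sam2.1/sam2.1_hiera_l.yaml",
               "checkpoints/sam2.1_hiera_large.pt")
)
image = np.array(Image.open("facility.jpg").convert("RGB"))
predictor.set_image(image)

masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),  # (x, y) pixel, assumption
    point_labels=np.array([1]),           # 1 = foreground
)
# Each returned mask can then be cropped out and sent to a classifier.
</code></pre>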
]]></description><pubDate>Tue, 30 Jul 2024 00:00:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=41104846</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=41104846</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41104846</guid></item><item><title><![CDATA[New comment by totalview in "HybridNeRF: Efficient Neural Rendering"]]></title><description><![CDATA[
<p>Those LiDAR sensors on phones and VR headsets are low resolution and mainly used to improve the photos and depth information from the camera. That is a different objective from mapping a space, which is mainly being disrupted by improvements coming out of the self-driving car and ADAS industries.</p>
]]></description><pubDate>Sat, 22 Jun 2024 21:18:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=40762310</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=40762310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40762310</guid></item><item><title><![CDATA[New comment by totalview in "HybridNeRF: Efficient Neural Rendering"]]></title><description><![CDATA[
<p>You are correct; most of these new techniques use a camera. In my line of work I consider a camera sensor a scanner of sorts, as we do a lot of photogrammetry and “scan” with a 45MP full frame. The inferred 3D from cameras is pretty bad when it comes to accuracy, especially in dimly lit areas or where you dip into a closet or closed space that doesn’t have a good structural tie back to the main space you are trying to recreate in 3D. Laser scanners are far preferable to tie your photo pose estimation to, and most serious reality capture for video games is done with both a camera and a $40,000+ LiDAR scanner. Have you ever tried to scan every corner of a house with only a traditional DSLR or point-and-shoot camera? I have, and the results are pretty bad from a 3D standpoint without a ton of post-processing.<p>The collision detection problem is heavily related to having clean 3D, as mentioned above. My company is developing a clean way to compute collision on reality capture right now, and I would be interested in any thoughts you have. We are chunking collision on the dataset at a fixed distance from the player character (you can’t go too fast in a vehicle or it will outpace the collision and fall through the floor) and have a tunable LOD that influences collision resolution; see the sketch below.</p>
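<p>A hedged sketch of that chunking idea; every name here is hypothetical and the engine hooks are stand-ins, not our actual implementation:</p>
<pre><code>
# Keep colliders only for chunks near the player, at a collision LOD that
# gets coarser with distance. A vehicle outrunning collider (re)builds is
# exactly how you fall through the floor.
import numpy as np

ACTIVATION_RADIUS = 50.0  # metres of live collision around the player (assumption)
LOD_STEP = 25.0           # width of each collision-LOD distance band (assumption)

def update_collision(player_pos, chunks, build_collider):
    for chunk in chunks:
        d = np.linalg.norm(chunk["center"] - player_pos)
        if d > ACTIVATION_RADIUS:
            chunk["collider"] = None          # beyond radius: no collision
            continue
        lod = int(d // LOD_STEP)              # 0 = finest mesh, nearest band
        if chunk["collider"] is None or chunk["collider"]["lod"] != lod:
            chunk["collider"] = build_collider(chunk, lod)  # returns {"lod": lod, ...}
</code></pre>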
]]></description><pubDate>Sat, 22 Jun 2024 19:35:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=40761548</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=40761548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40761548</guid></item><item><title><![CDATA[New comment by totalview in "HybridNeRF: Efficient Neural Rendering"]]></title><description><![CDATA[
<p>I work in the rendering and gaming industry and also run a 3D scanning company. I have similarly wished for this capability, especially the destructibility part. What you speak of is still pretty far off, for several reasons:<p>-No collision/poor collision on NeRFs and GS: to have a proper interactive world, you usually need accurate character collision so that your character or vehicle can move along the floor/ground (as opposed to falling through it), run into walls, go through door frames, etc. NeRFs suffer from the same issues as photogrammetry in that they need structure from motion (COLMAP or similar) to give them a mesh or 3D output that can be meshed for collision to register off of. The mesh from reality capture is noisy and is not simple geometry. Think millions of triangles from a laser scanner or camera for “flat” ground that a video game would use 100 triangles for.<p>-Scanning: there’s no scanner available that provides both good 3D information and good photorealistic textures at a price people will want to pay. Scanning every square inch of playable space in even a modest-sized house is a pain, and people will look behind the television, underneath the furniture, and everywhere else that most of these scanning videos and demos never go. There are a lot of ugly angles that these videos omit where a player would go.<p>-Post-processing: if you scan your house or any other real space, you will have poor lighting unless you took the time to do your own custom lighting and color setup. That will all need to be corrected in post-processing so that you can dynamically light your environment. Lighting is one of the most next-generation things that people associate with games, and you will be fighting prebaked shadows throughout the entire house or area that you have scanned. You don’t get away from this with NeRFs or Gaussian splats, because those scenes also have prebaked lighting in them that is static.<p>-Object destruction and physics: I love the game Teardown, and if you want to see what it’s like to actually bust up and destroy structures that have been physically scanned, there is a plug-in to import reality capture models directly into the game with a little bit of modding. That said, Teardown is voxel-based and is one of the most advanced engines that has been built to do such a thing. I have seen nothing else capable of doing cool-looking destruction of any object, scanned or 3D modeled, without a large studio effort and a ton of optimization.</p>
]]></description><pubDate>Sat, 22 Jun 2024 17:30:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=40760639</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=40760639</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40760639</guid></item><item><title><![CDATA[New comment by totalview in "DJI ban passes the House and moves on to the Senate"]]></title><description><![CDATA[
<p>There are wider implications beyond just the consumer drone market. My company and several of our clients (who are all much larger companies than us) have hundreds of thousands to millions of dollars’ worth of DJI enterprise drones, batteries, and payloads (Matrice 300, 350, Zenmuse P1 camera, etc.).<p>I know the bill still has to go through the Senate, but this is going to be a sore subject for a lot of American companies who use DJI equipment.</p>
]]></description><pubDate>Tue, 18 Jun 2024 13:32:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=40717656</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=40717656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40717656</guid></item><item><title><![CDATA[New comment by totalview in "El Prado Museum – Virtual Tour"]]></title><description><![CDATA[
<p>I wonder what camera sensor was used to produce this. A DSLR on a pano/360 rig? Or something purpose-built, like a Weiss AG Civetta 230MP 360 scanning camera?</p>
]]></description><pubDate>Thu, 18 Apr 2024 20:17:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=40080258</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=40080258</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40080258</guid></item><item><title><![CDATA[New comment by totalview in "Show HN: Atlas – GIS and interactive maps in the browser"]]></title><description><![CDATA[
<p>How does this handle 3D reality capture datasets such as LAS, OBJ, or SLPK/I3S? I saw support for 3D building tilesets but could not find anything else.</p>
]]></description><pubDate>Tue, 23 Jan 2024 21:33:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=39110193</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=39110193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39110193</guid></item><item><title><![CDATA[New comment by totalview in "Drones are the new drug mules"]]></title><description><![CDATA[
<p>I notice a DJI Matrice 300 (the larger drone) and a DJI Mavic 2 (can’t be totally sure if it’s a Pro model). DJI is well known as the market leader in the UAV space because of its relatively low cost ($13K for the Matrice, under $3K for the Mavic) for high utility/reliability (something you care about if you have a payload worth $10K plus). Similar drones made in the USA or EU are 2X-4X the cost and do not come with any features that DJI hasn’t already thought of.<p>I guess that being a market leader means you will have a wide range of customers using your product.</p>
]]></description><pubDate>Fri, 05 Jan 2024 20:45:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=38884680</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=38884680</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38884680</guid></item><item><title><![CDATA[New comment by totalview in "Winter & Cold Weather EV Range Loss in 7,000 Cars"]]></title><description><![CDATA[
<p>The data in their nicely colored graph is wildly inaccurate. I have a Long Range Tesla Model 3 (in Alaska) and the range is over 290 miles (it says 310 miles when full, but doesn’t quite get that, even in the summer). This is information you can trivially look up…<p>I know Tesla gets a lot of flak, but its batteries outperform most of the other cars listed.</p>
]]></description><pubDate>Sun, 18 Dec 2022 16:01:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=34038896</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=34038896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34038896</guid></item><item><title><![CDATA[New comment by totalview in "I Posted on YouTube Consistently for 1 Month. This Is What Happened"]]></title><description><![CDATA[
<p>This is solid gold coming from a YouTube legend. Thanks for the comment; I love your content.</p>
]]></description><pubDate>Thu, 03 Nov 2022 20:39:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=33457171</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=33457171</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33457171</guid></item><item><title><![CDATA[New comment by totalview in "Show HN: A new search engine UX I've been working on in my free time"]]></title><description><![CDATA[
<p>This would be fabulous for switching between Brave, DuckDuckGo, and Google. I love the UX; finally something worthwhile in the browser space. Will it be available for purchase at any point?</p>
]]></description><pubDate>Wed, 19 Oct 2022 15:30:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=33263650</link><dc:creator>totalview</dc:creator><comments>https://news.ycombinator.com/item?id=33263650</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33263650</guid></item></channel></rss>