<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: wiricon</title><link>https://news.ycombinator.com/user?id=wiricon</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 19 Apr 2026 20:23:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=wiricon" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by wiricon in "The size of BYD's factory"]]></title><description><![CDATA[
<p>I don’t think anyone disputes that China can out-manufacture us. But does that make them “better”, with them the winner and the West the loser? Obviously not; history doesn’t end there, and things can go in different directions in the future. Not to mention that there are other dimensions that matter besides manufacturing prowess: quality of life, individual freedom, and so on.</p>
]]></description><pubDate>Mon, 25 Nov 2024 00:43:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=42232048</link><dc:creator>wiricon</dc:creator><comments>https://news.ycombinator.com/item?id=42232048</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42232048</guid></item><item><title><![CDATA[New comment by wiricon in "Amazon ditches 'just walk out' checkouts at its grocery stores"]]></title><description><![CDATA[
<p>How well does simulated data work in this space? My first stab at doing this scalably would be as follows: given a new product, physically obtain a single instance of it (or ideally a 3D model, though that seems like a big ask from manufacturers at this stage); capture images of it from every conceivable angle under a variety of lighting conditions (you could automate this capture pretty well with a robotic arm to rotate the object and some kind of lighting rig); get an instance mask for each image (via a human annotator, a 3D reconstruction method, or a FG-BG segmentation model); paste those masked instances onto random background images (e.g. from any large image dataset); add distractor objects and other augmentations; and finally train a model on the resulting dataset. It helps that many grocery items are rigid and always look the same (boxes, bottles, etc.), so this would work best for those. For non-rigid items with a lot of appearance variety, like fruit and veg, you'd need far more data, and you'd also have to account for packaging changing over time.</p>
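<p>A minimal NumPy-only sketch of the paste step described above (the function names and the simple alpha compositing are my own illustration, not any particular system's code; real cut-and-paste pipelines typically also blend the instance boundary to avoid obvious pasting artifacts):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def paste_instance(background, instance, mask, top, left):
    """Alpha-composite a masked product crop onto a background image.

    background: (H, W, 3) uint8 array
    instance:   (h, w, 3) uint8 crop of the product
    mask:       (h, w) float array in [0, 1], where 1 = product pixel
    Returns the composited image and the pasted box (top, left, h, w),
    which doubles as the detection label for this synthetic example.
    """
    out = background.copy()
    h, w = mask.shape
    region = out[top:top + h, left:left + w].astype(np.float32)
    alpha = mask[..., None]  # broadcast the mask over the color channels
    blended = alpha * instance.astype(np.float32) + (1.0 - alpha) * region
    out[top:top + h, left:left + w] = blended.astype(np.uint8)
    return out, (top, left, h, w)

def random_paste(background, instance, mask):
    """Pick a random valid location and paste, yielding one (image, box) training pair."""
    H, W, _ = background.shape
    h, w = mask.shape
    top = int(rng.integers(0, H - h + 1))
    left = int(rng.integers(0, W - w + 1))
    return paste_instance(background, instance, mask, top, left)
```

<p>Run over many backgrounds, scales, and rotations per product, this generates an arbitrarily large labeled detection dataset from a handful of real captures.</p>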
]]></description><pubDate>Wed, 03 Apr 2024 03:17:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=39913277</link><dc:creator>wiricon</dc:creator><comments>https://news.ycombinator.com/item?id=39913277</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39913277</guid></item><item><title><![CDATA[New comment by wiricon in "Good old-fashioned AI remains viable in spite of the rise of LLMs"]]></title><description><![CDATA[
<p>Can you link to any methods for deriving reliable uncertainty estimates? Sounds useful.</p>
]]></description><pubDate>Sun, 03 Dec 2023 05:45:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=38505134</link><dc:creator>wiricon</dc:creator><comments>https://news.ycombinator.com/item?id=38505134</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38505134</guid></item><item><title><![CDATA[New comment by wiricon in "The AI Misinformation Epidemic"]]></title><description><![CDATA[
<p>Care to share links to those labs' websites or relevant papers? I'm a deep learning researcher, I've also noticed the gap you mention between the DL community and what everyone else in AI is doing, and I think it might be worthwhile to try to bridge it.</p>
]]></description><pubDate>Tue, 28 Mar 2017 12:08:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=13975859</link><dc:creator>wiricon</dc:creator><comments>https://news.ycombinator.com/item?id=13975859</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=13975859</guid></item></channel></rss>