<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: xoofoog</title><link>https://news.ycombinator.com/user?id=xoofoog</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 16:15:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=xoofoog" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by xoofoog in "OpenAI’s o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors"]]></title><description><![CDATA[
<p>I would love to replace my doctors with AI. Today. Please. I have had Long Covid for over a year now, which is a shitty, shitty condition. It’s complicated and not super well understood. But you know who understands it way better than any doctor I’ve ever seen? Every AI I’ve talked to about it. Because there is tons of research going on, and the AI is (with minor prompting) fully up to date on all of it.<p>I take treatment ideas to real doctors.  They are skeptical, don’t have the time to read the actual research, and refuse to act. Or they give me trite advice that has been proven actively harmful, like “you just need to hit the gym.”  Umm, my heart rate doubles when I stand up because of POTS. “Then use the rowing machine so you can stay reclined.”  If I did what my human doctors told me without doing my own research, I would be way sicker than I am.<p>I don’t need empathy. I don’t need bedside manner. Or intuition. Or a warm hug. I need somebody who will read all the published research, reason carefully about what’s going on in my body, and develop a treatment plan. At this, AI beats human doctors today by a long shot.</p>
]]></description><pubDate>Sun, 03 May 2026 22:27:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=48002260</link><dc:creator>xoofoog</dc:creator><comments>https://news.ycombinator.com/item?id=48002260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48002260</guid></item><item><title><![CDATA[New comment by xoofoog in "Judge finalizes order for Greenpeace to pay $345M in ND oil pipeline case"]]></title><description><![CDATA[
<p>But restricting supply raises prices and naturally encourages sustainable energy. That kind of change is self-reinforcing. Government incentives disappear with every change of administration.</p>
]]></description><pubDate>Mon, 02 Mar 2026 16:14:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47219906</link><dc:creator>xoofoog</dc:creator><comments>https://news.ycombinator.com/item?id=47219906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47219906</guid></item><item><title><![CDATA[New comment by xoofoog in "Show HN: Hyperparam: OSS Tools for Exploring Datasets Locally in the Browser"]]></title><description><![CDATA[
<p>Wow - that's super clever.  How do you get away with loading part of the file?  Which part do you load?</p>
]]></description><pubDate>Thu, 01 May 2025 16:29:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=43859920</link><dc:creator>xoofoog</dc:creator><comments>https://news.ycombinator.com/item?id=43859920</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43859920</guid></item><item><title><![CDATA[New comment by xoofoog in "Show HN: r1_vlm – Open-Source Framework for Visual Reasoning with GRPO"]]></title><description><![CDATA[
<p>What do you mean LLMs are bad at images?  GPT or Claude can read text perfectly, and describe what's in a picture in a lot of detail.  I feel like replacing OCR is one of the few things you can actually trust them for.</p>
]]></description><pubDate>Sat, 08 Mar 2025 16:53:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43301552</link><dc:creator>xoofoog</dc:creator><comments>https://news.ycombinator.com/item?id=43301552</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43301552</guid></item><item><title><![CDATA[New comment by xoofoog in "The Google incentive mismatch: Problems with promotion-oriented cultures"]]></title><description><![CDATA[
<p>Okay, Googler, how about some data to back up that assertion?  Let's have a googley argument where we're respectful in showing that we're smarter than the person we're putting down.<p>Here's my data.  Google values consensus - fact.  This "wisdom of the crowds" philosophy was core to founding Google - Larry & Sergey's brilliant insight that the collective votes of hyperlinks were a stronger signal than things like H1 tags and HTML titles for picking good search results.<p>But consensus means, in a literal sense, that everybody needs to agree on the right thing to do.  The problem is that in reality people don't always agree.  So what happens when the group needs to reach consensus but people disagree?  Since Google doesn't have a respectful way to disagree, the holders of divergent opinions must be minimized - either pushed out of the group or proven to be not smart enough for their opinions to be valid.<p>God, I hate myself while I'm writing this.  I left for a reason.</p>
]]></description><pubDate>Wed, 04 May 2022 17:00:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=31263064</link><dc:creator>xoofoog</dc:creator><comments>https://news.ycombinator.com/item?id=31263064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31263064</guid></item><item><title><![CDATA[New comment by xoofoog in "The Google incentive mismatch: Problems with promotion-oriented cultures"]]></title><description><![CDATA[
<p>The sexist part is not the "hard problems" but the competitiveness.  I don't think sexism is intrinsic to solving hard problems - certainly gender-inclusive companies work on hard problems successfully.  But in my experience those companies tend to be more collaborative.<p>Google needs to celebrate heroes.  Like Jeff Dean.  Or Sanjay Ghemawat.  Those are Great Men.  They are "Living Gods" because they solved Really Hard Problems.  Getting into that class of people requires being a "lead," which necessarily means the other people around you aren't leading.  So you need to prove why you're worthy of being the "lead," which means proving the others around you aren't.  This is toxic masculinity.  Not solving hard problems.</p>
]]></description><pubDate>Wed, 04 May 2022 16:52:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=31262957</link><dc:creator>xoofoog</dc:creator><comments>https://news.ycombinator.com/item?id=31262957</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31262957</guid></item><item><title><![CDATA[New comment by xoofoog in "The Google incentive mismatch: Problems with promotion-oriented cultures"]]></title><description><![CDATA[
<p>Former Googler here.  This person has correctly identified that a key reason why Google sucks is that people very often...<p>> choose between doing what’s best for users or what’s best for their career<p>But the root cause isn't that people want to get promoted.  It's that Google promotes people for the wrong reasons.  Put very simply, the problem is that Google promotes people for "solving hard problems," not for solving USEFUL problems.<p>Imagine if people got promoted for fixing bugs instead of building a new product (to be abandoned)!  Or if maintaining an existing system were somehow on par with building a new system (which is just a bigger, more complicated version of something perfectly good).  The Googler would say, "Well, those useful problems are too easy to merit a promotion.  Anybody can solve easy problems - we're Google, and we're too smart to work on those easy problems."  Grow up.<p>Y'all value the wrong things.  That's why your culture is broken.</p>
]]></description><pubDate>Wed, 04 May 2022 16:11:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=31262428</link><dc:creator>xoofoog</dc:creator><comments>https://news.ycombinator.com/item?id=31262428</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31262428</guid></item></channel></rss>