<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: a_wild_dandan</title><link>https://news.ycombinator.com/user?id=a_wild_dandan</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 05 Apr 2026 21:12:08 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=a_wild_dandan" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by a_wild_dandan in "Tailscale Peer Relays is now generally available"]]></title><description><![CDATA[
<p>You’re not stupid. That’s terrible UX. The button is completely disconnected from its modal, and is placed in a bizarre/nonstandard location.</p>
]]></description><pubDate>Wed, 18 Feb 2026 18:54:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47064728</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=47064728</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47064728</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Qwen3-Coder-Next"]]></title><description><![CDATA[
<p>Speaking of tricks, does anyone here know how many angels can dance on the head of a pin?</p>
]]></description><pubDate>Tue, 03 Feb 2026 21:19:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46877443</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=46877443</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46877443</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "How China built its ‘Manhattan Project’ to rival the West in AI chips"]]></title><description><![CDATA[
<p>Taiwan’s geopolitical position is vastly more complex than the fantasy where invasion would follow merely from fab parity.</p>
]]></description><pubDate>Fri, 19 Dec 2025 04:08:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46322174</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=46322174</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46322174</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "GPT-5.2"]]></title><description><![CDATA[
<p>> Unlike the previous GPT-5.1 model, GPT-5.2 has new features for managing what the model "knows" and "remembers to improve accuracy.<p>Dumb nit, but why not put your own press release through your model to prevent basic things like missing quote marks? Reminds me of that time OpenAI released wildly inaccurate copy/pasted bar charts.</p>
]]></description><pubDate>Thu, 11 Dec 2025 19:24:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46235954</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=46235954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46235954</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "If you're going to vibe code, why not do it in C?"]]></title><description><![CDATA[
<p>Businesses do whatever’s cheap. AI labs will continue making their models smarter, more persuasive. Maybe the SWE profession will thrive/transform/get massacred. We don’t know.</p>
]]></description><pubDate>Tue, 09 Dec 2025 18:59:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46209032</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=46209032</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46209032</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?"]]></title><description><![CDATA[
<p>No. I like being able to ignore them. I can’t do that if people chop off their disclaimers to avoid comment removal.</p>
]]></description><pubDate>Tue, 09 Dec 2025 16:28:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46206888</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=46206888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46206888</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Pebble Index 01 – External memory for your brain"]]></title><description><![CDATA[
<p>Someone will make a killing on a rechargeable version of this. The ergonomics are a good idea.</p>
]]></description><pubDate>Tue, 09 Dec 2025 15:56:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46206347</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=46206347</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46206347</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Zebra-Llama – Towards efficient hybrid models"]]></title><description><![CDATA[
<p>If the claims in the abstract are true, then this is legitimately revolutionary. I don’t believe it. There are probably some major constraints/caveats that keep these results from generalizing. I’ll read through the paper carefully this time instead of skimming it, and come back with thoughts after I’ve digested it.</p>
]]></description><pubDate>Sat, 06 Dec 2025 22:45:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46177279</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=46177279</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46177279</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "I was right about dishwasher pods and now I can prove it [video]"]]></title><description><![CDATA[
<p>His specific thesis is that pods fundamentally clean worse than powder because they're inherently single-stage releases of detergent in machines designed for two-stage releases. Despite this, he <i>still</i> explicitly says that pods have their uses. So I'm unclear on how his goal is "proving that everyone is wrong." Did we watch different videos?</p>
]]></description><pubDate>Wed, 05 Nov 2025 23:36:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45829541</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=45829541</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45829541</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Meet the real screen addicts: the elderly"]]></title><description><![CDATA[
<p>How does having management strategies over an alleged addiction imply that it isn’t an addiction?</p>
]]></description><pubDate>Sat, 25 Oct 2025 10:49:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45702788</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=45702788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45702788</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Gemini 2.5 Computer Use model"]]></title><description><![CDATA[
<p>Intelligence is whatever an LLM can’t do yet. Fluid intelligence is the capacity to quickly move goal posts.</p>
]]></description><pubDate>Tue, 07 Oct 2025 23:33:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45510292</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=45510292</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45510292</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Estimating AI energy use"]]></title><description><![CDATA[
<p>I would bet that it's far lower now. Inference is expensive, but we've made extraordinary efficiency gains through techniques like distillation. That said, GPT-5 is a reasoning model, and those are notorious for high token burn. So who knows, it could be a wash. But selective pressures to optimize for scale/growth/revenue/independence from MSFT/etc. make me think that OpenAI is chasing those watt-hours pretty doggedly. So 0.34 is probably high...<p>...but then Sora came out.</p>
]]></description><pubDate>Mon, 06 Oct 2025 02:05:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45486970</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=45486970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45486970</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "NIST's DeepSeek "evaluation" is a hit piece"]]></title><description><![CDATA[
<p>This might be a dumb question but like...why does it matter? Are <i>other</i> companies reporting training run costs including amortized equipment/labor/research/etc expenditures? If so, then I get it. DeepSeek is inviting an apples-and-oranges comparison. If <i>not</i>, then these gotcha articles feel like pointless "well ackshually" criticisms. Akin to complaining about the cost of a fishing trip because the captain didn't include the price of their boat.</p>
]]></description><pubDate>Sun, 05 Oct 2025 20:56:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45485179</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=45485179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45485179</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Fp8 runs ~100 tflops faster when the kernel name has "cutlass" in it"]]></title><description><![CDATA[
<p>Thank you for explaining. I was <i>so</i> confused at how AMD was improving Quake performance with duck-like monikers.</p>
]]></description><pubDate>Fri, 03 Oct 2025 08:39:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45460541</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=45460541</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45460541</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Ford CEO on his ‘epiphany’ after talking to factory workers in 2023"]]></title><description><![CDATA[
<p>You're right! China is presently <i>terrified</i> of involution. They're dealing with wage deflation and immense debt right now. Beijing is telling its companies to scale back subsidies and stop price wars. The flood of cheap batteries, solar panels, etc is about to change.</p>
]]></description><pubDate>Thu, 02 Oct 2025 16:05:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45451514</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=45451514</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45451514</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Cursor 1.7"]]></title><description><![CDATA[
<p>Is it possible to run Cursor entirely with local models? My Mac can comfortably run relatively massive models. I would experiment so much more with AI in my codebases knowing that I won't slam into a brick wall due to quotas, connection issues, etc.</p>
]]></description><pubDate>Wed, 01 Oct 2025 18:25:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45441285</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=45441285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45441285</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "AMD claims Arm ISA doesn't offer efficiency advantage over x86"]]></title><description><![CDATA[
<p>That's absolutely wild. I've been loving using the 96GB of (V)RAM in my MacBook + Apple's mlx framework to run quantized AI reasoning models like glm-4.5-air. Running models with hundreds of billions of parameters (at ~14 tok/s) on my damn laptop feels like magic.</p>
]]></description><pubDate>Tue, 09 Sep 2025 16:47:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=45184702</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=45184702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45184702</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Claude Sonnet will ship in Xcode"]]></title><description><![CDATA[
<p>> What am I doing wrong?<p>Providing woefully inadequate descriptions to others (Claude & us) and still expecting useful responses?</p>
]]></description><pubDate>Fri, 29 Aug 2025 05:43:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45060639</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=45060639</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45060639</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Open models by OpenAI"]]></title><description><![CDATA[
<p>GLM-4.5-air produces tokens far faster than I can read on my MacBook. That's plenty fast enough for me, but YMMV.</p>
]]></description><pubDate>Tue, 05 Aug 2025 18:10:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=44801926</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=44801926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44801926</guid></item><item><title><![CDATA[New comment by a_wild_dandan in "Open models by OpenAI"]]></title><description><![CDATA[
<p>Oh absolutely, AI labs certainly talk their books, including any safety angles. The controversy/outrage extended far beyond those incentivized companies too. Many people had good faith worries about Llama. Open-weight models are now <i>vastly</i> more powerful than Llama-1, yet the sky hasn't fallen. It's just fascinating to me how apocalyptic people are.<p>I just feel lucky to be around in what's likely the most important decade in human history. Shit odds on that, so I'm basically a lotto winner. Wild times.</p>
]]></description><pubDate>Tue, 05 Aug 2025 18:07:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44801880</link><dc:creator>a_wild_dandan</dc:creator><comments>https://news.ycombinator.com/item?id=44801880</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44801880</guid></item></channel></rss>