<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ordersofmag</title><link>https://news.ycombinator.com/user?id=ordersofmag</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 11:37:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ordersofmag" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ordersofmag in "Olympic Committee bars transgender athletes from women’s events"]]></title><description><![CDATA[
<p>I'm pretty sure there are folks involved in doing drug testing for many sports, so saying they are doing nothing seems hyperbolic. Are there specific things you think the bodies in charge of drug testing should be doing but aren't? Genuinely curious.</p>
]]></description><pubDate>Thu, 26 Mar 2026 22:31:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47536650</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=47536650</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47536650</guid></item><item><title><![CDATA[New comment by ordersofmag in "Olympic Committee bars transgender athletes from women’s events"]]></title><description><![CDATA[
<p>Not sure how this helps. Olympic events already have relative rating systems that rank all the participants: pretty complicated, sport-dependent systems that determine qualification for the games and competition amongst all the competitors at the games. The problem is how to have <i>separate</i> competitions for different groups of participants when there isn't a universally shared agreement on who should be in which group.</p>
]]></description><pubDate>Thu, 26 Mar 2026 22:28:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47536627</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=47536627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47536627</guid></item><item><title><![CDATA[New comment by ordersofmag in "Please do not A/B test my workflow"]]></title><description><![CDATA[
<p>Any tool that auto-updates carries the implication that behavior will change over time. And one criterion for being a skilled professional is having expert understanding of one's tools. That includes understanding the strengths and weaknesses of the tools (including variability of output) and making appropriate choices as a result. If <i>you</i> don't feel you can produce professional code with LLMs then certainly you shouldn't use them. That doesn't mean others can't leverage LLMs as part of their process and produce professional results. Blindly accepting LLM output and vibe coding clearly doesn't consistently produce professional results. But that's different than saying professionals can't use LLMs in ways that are productive.</p>
]]></description><pubDate>Sat, 14 Mar 2026 12:14:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47375886</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=47375886</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47375886</guid></item><item><title><![CDATA[New comment by ordersofmag in "Are LLM merge rates not getting better?"]]></title><description><![CDATA[
<p>Even if one-shot LLM performance has plateaued (which I'm not convinced this data shows, given the omission of recent models that are widely claimed to be better), it misses the point of what I see in my own work. The improved tooling and agent-based approaches that I'm using now make LLM one-shot performance only a small part of the puzzle in terms of how AI tools have accelerated the time from idea to decent code. For instance, the planning dialogs I now have with Claude are an important part of what's speeding things up for me. Also, the iterative use of AI to identify, track, and take care of small coding tasks (none of which are particularly challenging in terms of benchmarks) is simply more effective. Could this all have been done with the LLM engines of late 2024? Perhaps, but I think the fine-tuning (and conceivably the system prompts) that make the current LLMs more effective at agent-centered workflows (including tool use) are a big part of it. One-shot performance at challenging tasks is an interesting, certainly foundational, metric. But I don't think it captures the important advances I see in how LLMs have gotten better over the last year in ways that actually matter to me. I rarely have a well-defined programming challenge and the obligation to solve it in a single shot.</p>
]]></description><pubDate>Thu, 12 Mar 2026 12:47:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47349833</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=47349833</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47349833</guid></item><item><title><![CDATA[New comment by ordersofmag in "Don't post generated/AI-edited comments. HN is for conversation between humans"]]></title><description><![CDATA[
<p>Seems like the ability to distinguish LLM versus 'good human' writing depends on the size of the writing sample you have to look at (assuming you think it can be done). And HN-scale posts are unlikely to be long enough for useful discernment.</p>
]]></description><pubDate>Wed, 11 Mar 2026 22:14:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47342904</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=47342904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47342904</guid></item><item><title><![CDATA[New comment by ordersofmag in "IBM Plunges After Anthropic's Latest Update Takes on COBOL"]]></title><description><![CDATA[
<p>Please expand more on the idea that LLMs are not trained on English to begin with. Not sure what you mean by this, as clearly many LLMs are trained on data that contains a lot of English. For instance, GPT-1 seems to have been trained on a purely English corpus.</p>
]]></description><pubDate>Mon, 23 Feb 2026 22:16:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47129781</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=47129781</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47129781</guid></item><item><title><![CDATA[New comment by ordersofmag in "Modern CSS Code Snippets: Stop writing CSS like it's 2015"]]></title><description><![CDATA[
<p>And there's 'nothing wrong' with just writing code with variables named 'a1, a2, a3'. But when some poor sod has to dig through your mess to figure out what you had in mind, it turns out that having an easier-to-discern logical structure to your code (or html) makes it better. I've dug through a lot of html. And there's a ton of ugly code smell out there. Layers and layers of "I don't really know what I'm doing but I guess it looks okay and I'll make it make sense later". I'm sure it pays the bills for someone. But it makes me sad.</p>
]]></description><pubDate>Mon, 16 Feb 2026 03:18:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47030483</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=47030483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47030483</guid></item><item><title><![CDATA[New comment by ordersofmag in "An AI agent published a hit piece on me – more things have happened"]]></title><description><![CDATA[
<p>Isn't Ars Technica that new site that replaced Slashdot?</p>
]]></description><pubDate>Sun, 15 Feb 2026 00:09:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47019770</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=47019770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47019770</guid></item><item><title><![CDATA[New comment by ordersofmag in "I'm not worried about AI job loss"]]></title><description><![CDATA[
<p>Seems like if evolution managed to create intelligence from slime, I wouldn't bet on there being some fundamental limit that prevents us from making something smarter than us.</p>
]]></description><pubDate>Sat, 14 Feb 2026 00:52:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47010070</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=47010070</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47010070</guid></item><item><title><![CDATA[New comment by ordersofmag in "OpenClaw is what Apple intelligence should have been"]]></title><description><![CDATA[
<p>Price. For a given CPU and RAM the mini is cheaper. Why pay more for portability you aren't going to use?</p>
]]></description><pubDate>Thu, 05 Feb 2026 04:07:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46895565</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=46895565</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46895565</guid></item><item><title><![CDATA[New comment by ordersofmag in "Autonomous cars, drones cheerfully obey prompt injection by road sign"]]></title><description><![CDATA[
<p>I'm pretty sure the Luddites judged the threat the machines posed to their livelihood to be a greater damage than their employer's loss of their machines.  So for them, it was an easy justification.  The idea that dollar value encapsulates the only correct way to value things in the world is a pretty scary viewpoint (as your reference to the value of saving a life illustrates).</p>
]]></description><pubDate>Sun, 01 Feb 2026 02:42:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46843139</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=46843139</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46843139</guid></item><item><title><![CDATA[New comment by ordersofmag in "Why I don't have fun with Claude Code"]]></title><description><![CDATA[
<p>FWIW this is a very common idiom in several languages:
<a href="https://en.wikipedia.org/wiki/Don%27t_throw_the_baby_out_with_the_bathwater" rel="nofollow">https://en.wikipedia.org/wiki/Don%27t_throw_the_baby_out_wit...</a></p>
]]></description><pubDate>Fri, 23 Jan 2026 14:04:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46732599</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=46732599</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46732599</guid></item><item><title><![CDATA[New comment by ordersofmag in "Claude Cowork exfiltrates files"]]></title><description><![CDATA[
<p>Heard of Google Drive?</p>
]]></description><pubDate>Thu, 15 Jan 2026 02:11:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46627101</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=46627101</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46627101</guid></item><item><title><![CDATA[New comment by ordersofmag in "2025: The Year in LLMs"]]></title><description><![CDATA[
<p>I will find this often-repeated argument compelling only when someone can prove to me that the human mind works in a way that isn't 'combining stuff it learned in the past'.<p>5 years ago a typical argument against AGI was that computers would never be able to think because "real thinking" involved mastery of language, which was something clearly beyond what computers would ever be able to do. The implication was that there was some magic sauce that human brains had that couldn't be replicated in silicon (by us). That 'facility with language' argument has clearly fallen apart over the last 3 years and been replaced with what appears to be a different magic sauce comprised of the phrases 'not really thinking' and the whole 'just repeating what it's heard/parrot' argument.<p>I don't think LLMs think or will reach AGI through scaling, and I'm skeptical we're particularly close to AGI in any form. But I feel like it's a matter of incremental steps. There isn't some magic chasm that needs to be crossed. When we get there I think we will look back and see that 'legitimately thinking' wasn't anything magic. We'll look at AGI and instead of saying "isn't it amazing computers can do this" we'll say "wow, was that all there is to thinking like a human".</p>
]]></description><pubDate>Thu, 01 Jan 2026 13:46:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46454075</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=46454075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46454075</guid></item><item><title><![CDATA[New comment by ordersofmag in "[dead]"]]></title><description><![CDATA[
<p>Science is distributed. Lots of researchers at lots of different institutions research overlapping topics. That's part of its strength. In the U.S. most basic research is funded by federal grants. And as a result you'll find that research in pretty much any science area you can imagine is funded by federal grants going to multiple different institutions. In this case you're confusing things by bringing in NOAA, which is a government agency (part of the Dept of Commerce). NCAR is a non-profit organization and competes for federal grant dollars with researchers at many other institutions (mostly universities). So in that sense there is a strong parallel here to Trump wanting to shut down Harvard (another non-profit organization at which many different researchers work) and someone saying "doesn't Stanford do research on similar topics?" Yes, there is some conceptual overlap, but in detail there is not. The bigger difference is that Harvard has a big endowment and so can survive (at some level) if the federal grants it has been getting stop flowing. NCAR can't. Also, NCAR happens to have the experts and equipment (supercomputers) to do research that few other organizations can (none really in the U.S.). Harvard probably can't lay claim to that except in very narrow niches....<p>For perspective, the annual budget for NCAR is about half the amount being spent on the new White House ballroom.</p>
]]></description><pubDate>Sun, 28 Dec 2025 00:11:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46406935</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=46406935</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46406935</guid></item><item><title><![CDATA[New comment by ordersofmag in "Apple releases open-source model that instantly turns 2D photos into 3D views"]]></title><description><![CDATA[
<p>Or you're free to use the output for commercial use if you can get someone else to use the tool to make the (uncopyrighted) output you want.</p>
]]></description><pubDate>Sat, 27 Dec 2025 16:02:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46402729</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=46402729</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46402729</guid></item><item><title><![CDATA[New comment by ordersofmag in "The long wait is over, Ganymede has arrived"]]></title><description><![CDATA[
<p>The multiple meanings of many of the words in this sentence make it really poor at communicating what the site is about. "Endeavour" (with a capital 'E') is a proper name I associate with a space shuttle, and 'stellar' can mean 'having to do with stars'. So a first read for me leads to the conclusion that this site has something to do with spaceflight. And 'system' could mean almost anything. Maybe this site will let me personalize my own star system? All I can take away is that I'm not sure what this is, but clearly I'm not the target audience. Which I'm fine with.....</p>
]]></description><pubDate>Thu, 04 Dec 2025 19:20:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46151613</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=46151613</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46151613</guid></item><item><title><![CDATA[New comment by ordersofmag in "Ubuntu LTS releases to 15 years with Legacy add-on"]]></title><description><![CDATA[
<p>Or more likely the 'whole' accesses the stable bit through some interface. The stable bit can happily keep doing its job via the interface, and the whole can change however it likes, knowing that for that particular task (which hasn't changed) it can just call the interface.</p>
]]></description><pubDate>Sun, 23 Nov 2025 23:37:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46028510</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=46028510</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46028510</guid></item><item><title><![CDATA[New comment by ordersofmag in "Ubuntu LTS releases to 15 years with Legacy add-on"]]></title><description><![CDATA[
<p>Or it doesn't. Because "software as an organic thing", like all analogies, is an analogy, not truth. Systems can sit there and run happily for a decade, performing the needed function in exactly the way that is needed, with no 'rot'. And then maybe the environment changes and you decide to replace it with something new because <i>you</i> decide the time is right. Doesn't always happen. Maybe not even the majority of the time. But in my experience running high-uptime systems over multiple decades, it happens. Not having somebody outside forcing you to change because it suits their philosophy or profit strategy is preferable.</p>
]]></description><pubDate>Sun, 23 Nov 2025 17:43:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46025547</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=46025547</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46025547</guid></item><item><title><![CDATA[New comment by ordersofmag in "When you're asking AI chatbots for answers, they're data-mining you"]]></title><description><![CDATA[
<p>LLMs aren't retrained and released on a weekly time-scale. The data mining may only be reflected in the training of the next generation of the model.</p>
]]></description><pubDate>Mon, 18 Aug 2025 13:45:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44940539</link><dc:creator>ordersofmag</dc:creator><comments>https://news.ycombinator.com/item?id=44940539</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44940539</guid></item></channel></rss>