<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Art9681</title><link>https://news.ycombinator.com/user?id=Art9681</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 14 May 2026 20:27:36 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Art9681" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Art9681 in "The US is winning the AI race where it matters most: commercialization"]]></title><description><![CDATA[
<p>The US is winning the AI race in all matters.</p>
]]></description><pubDate>Thu, 14 May 2026 00:00:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=48129336</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=48129336</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48129336</guid></item><item><title><![CDATA[New comment by Art9681 in "US Government releases first batch of UAP documents and videos"]]></title><description><![CDATA[
<p>It wouldn't have the same appeal if they reported seeing "larger-than-average squid we didn't know existed". Every story was embellished in a world without pocket cameras. And the further you go back, the grander the fiction was. Tales of men splitting the sea and walking on water.<p>It is all highly entertaining.</p>
]]></description><pubDate>Sun, 10 May 2026 04:36:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=48081040</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=48081040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48081040</guid></item><item><title><![CDATA[New comment by Art9681 in "US Government releases first batch of UAP documents and videos"]]></title><description><![CDATA[
<p>This is the truth. If these "phenomena" were real, we wouldn't question it. None of these reports would be necessary. It would just be common knowledge.<p>It's the modern-day equivalent of Bigfoot or Nessie, and its relevance will wither away with the current generations.</p>
]]></description><pubDate>Sun, 10 May 2026 04:30:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=48081015</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=48081015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48081015</guid></item><item><title><![CDATA[New comment by Art9681 in "US Government releases first batch of UAP documents and videos"]]></title><description><![CDATA[
<p>Behold: the most comprehensive reports of high-tech equipment malfunction in the history of humanity. It is no surprise that the most incompetent administration in the history of the United States would release this obvious psyop to the public to distract from its own incompetence.<p>The house of cards is crumbling and they are desperate.<p>Don't worry. It will be over in 3 years and we will have plenty of entertainment watching the current heads of state plead the Fifth in their corruption trials.<p>Hang in there!</p>
]]></description><pubDate>Sun, 10 May 2026 04:24:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=48080995</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=48080995</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48080995</guid></item><item><title><![CDATA[New comment by Art9681 in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>Any rando can publish research nowadays. It means nothing. Just like "X country published N research papers last year". It is noise. In a world where every comment, research paper, or post on the internet had to carry the author's age, experience level, and country of origin, it would shatter the misplaced conviction we have in the information we receive.<p>This team is inexperienced and it shows.<p>The noise-to-signal ratio will get worse, even in "academia". Brace yourselves. The kids are growing up in this new world.</p>
]]></description><pubDate>Sun, 10 May 2026 04:06:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=48080903</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=48080903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48080903</guid></item><item><title><![CDATA[New comment by Art9681 in "LLMs corrupt your documents when you delegate"]]></title><description><![CDATA[
<p>Remind yourselves that most research papers are written by career students with no real world practical experience. That is all.</p>
]]></description><pubDate>Sun, 10 May 2026 03:58:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=48080877</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=48080877</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48080877</guid></item><item><title><![CDATA[New comment by Art9681 in "Appearing productive in the workplace"]]></title><description><![CDATA[
<p>Counterpoint: if humans shipped perfect products, they would no longer have jobs. The majority of time spent in an organization is fixing problems humans caused, for good reasons and bad excuses. We are not machines.<p>What we are building now with AI, collectively as a species, is a mirror that reflects the failures and successes we contributed to.<p>No engineer here has a perfect record. No senior or principal either. We make a ton of mistakes that are rarely written about.<p>This is an opportunity for those who assume they have mastered the craft to put up or shut up. Anyone can write a blog with or without AI.<p>Put your skills to work and implement the system that solves the problem you lament. Otherwise, get off my lawn.<p>It's another voice screaming into the void without offering a solution. The solution is not to build a faster horse. It is not to reminisce about the past. That ship has sailed.<p>Fix the problem. This is the 100th blog repeating the same thing we've read for two years. Nothing was accomplished here except wasting time on the obvious to pat yourself on the back.<p>A lot of time is being wasted writing blogs raising red flags.<p>That's the easy part.</p>
]]></description><pubDate>Thu, 07 May 2026 04:03:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=48045262</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=48045262</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48045262</guid></item><item><title><![CDATA[New comment by Art9681 in "Microsoft and OpenAI end their exclusive and revenue-sharing deal"]]></title><description><![CDATA[
<p>We can, because the reality is that America has led in AI since the beginning and has had the best frontier models. It's not as if some other country held the top spot for any period of time; no one in Europe or China has. I'd give it the benefit of the doubt if there were precedent. But the only logical position is that the lead is widening, and while most AIs will cross some threshold where they are good enough for most people, the actual frontier will remain firmly on American soil.</p>
]]></description><pubDate>Mon, 27 Apr 2026 19:54:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47926496</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47926496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47926496</guid></item><item><title><![CDATA[New comment by Art9681 in "OpenAI releases GPT-5.5 and GPT-5.5 Pro in the API"]]></title><description><![CDATA[
<p>A junior tinkering in their garage, in domains where they have little experience, executed a flawed test and decided to call it a benchmark. It's extremely common nowadays because words don't mean anything anymore. The forums that used to be filled with technical people doing real work are now filled with masses of vibe researchers doing this kind of stuff. This is what happens when anything crosses some popularity threshold.<p>HN is the last bastion of serious inquiry these days. But it's not immune, as OP's comment proves.</p>
]]></description><pubDate>Sat, 25 Apr 2026 13:54:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47901591</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47901591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47901591</guid></item><item><title><![CDATA[New comment by Art9681 in "Filing the corners off my MacBooks"]]></title><description><![CDATA[
<p>I have no dog in this fight but a friendly reminder that temperature and weather are not synonymous.</p>
]]></description><pubDate>Sat, 11 Apr 2026 01:22:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47726251</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47726251</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47726251</guid></item><item><title><![CDATA[New comment by Art9681 in "OpenAI's fall from grace as investors race to Anthropic"]]></title><description><![CDATA[
<p>Aside from the fabricated drama and the trend chasing, OpenAI still has the best overall model and API service. Anthropic is really good, no doubt. But gpt-5.4 is a better model than even Opus, even if it's a marginal advantage. I use both.</p>
]]></description><pubDate>Mon, 06 Apr 2026 03:55:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656812</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47656812</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656812</guid></item><item><title><![CDATA[New comment by Art9681 in "Codex pricing to align with API token usage, instead of per-message"]]></title><description><![CDATA[
<p>That is just idealism. Being "open" doesn't get you any advantage in the real world. You're not going to meaningfully compete in the new economy using "lesser" models. The economy does not care about principles or ethics. No one is going to build a long-term business that provides actual value on open models. They can try. They can hype. And they can swindle and grift and scalp some profit before they become irrelevant. But it will not last.<p>Why? Because whatever was built with an open model can be sneezed into existence by a frontier model run via first-party API with the best-practice configurations the providers publish in usage guides that no one seems to know exist.<p>The difference between the best frontier model (gpt-5.4-xhigh or opus 4.6) and the best open model is vast.<p>But that is only obvious when your use case is actually pushing the frontier.<p>If you're building a CRUD app, or the modern equivalent of a TODO app, even a lemon can produce that nowadays, so you will assume open has caught up to closed because your use case never required frontier intelligence.</p>
]]></description><pubDate>Mon, 06 Apr 2026 03:27:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656656</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47656656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656656</guid></item><item><title><![CDATA[New comment by Art9681 in "Caveman: Why use many token when few token do trick"]]></title><description><![CDATA[
<p>This was an experiment conducted during gpt-3.5 era, and again during the gpt-4 era.<p>There is a reason it is not a common/popular technique.</p>
]]></description><pubDate>Mon, 06 Apr 2026 00:32:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47655492</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47655492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47655492</guid></item><item><title><![CDATA[New comment by Art9681 in "Qwen3.6-Plus: Towards real world agents"]]></title><description><![CDATA[
<p>How convenient of them to compare themselves to the last-generation Opus and GPT models to make their model look better than it really is.</p>
]]></description><pubDate>Thu, 02 Apr 2026 14:54:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47615314</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47615314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47615314</guid></item><item><title><![CDATA[New comment by Art9681 in "Miasma: A tool to trap AI web scrapers in an endless poison pit"]]></title><description><![CDATA[
<p>Can't we simply parse and remove any style="display: none;", aria-hidden="true", and tabindex="1" attributes before the text is processed and get around this trick? What am I missing?</p>
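A rough sketch of the filtering the comment describes (hypothetical, stdlib html.parser only; a real scraper would also need to handle external stylesheets, CSS classes, and void tags like br):

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collect text, skipping subtrees marked hidden via the
    style/aria-hidden attributes mentioned above."""

    def __init__(self):
        super().__init__()
        self.hidden_depth = 0  # nesting depth inside hidden elements
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        hidden = (
            "display: none" in attrs.get("style", "")
            or attrs.get("aria-hidden") == "true"
        )
        # Once inside a hidden element, count every nested start tag
        # so the matching end tags unwind the depth correctly.
        if self.hidden_depth or hidden:
            self.hidden_depth += 1

    def handle_endtag(self, tag):
        if self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        if not self.hidden_depth and data.strip():
            self.chunks.append(data.strip())

parser = VisibleTextExtractor()
parser.feed('<div>keep this<span style="display: none;">poison</span> and this</div>')
print(" ".join(parser.chunks))  # -> keep this and this
```

This only strips what the markup declares hidden; content hidden by stylesheet rules or scripts would slip through, which may be part of what such traps rely on.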
]]></description><pubDate>Sun, 29 Mar 2026 15:18:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47563898</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47563898</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47563898</guid></item><item><title><![CDATA[New comment by Art9681 in "Local Stack Archived their GitHub repo and requires an account to run"]]></title><description><![CDATA[
<p>That solution could be recreated by a skilled, AI-boosted senior platform engineer in a few days, with parity achieved in a few weeks. Nothing of value was lost.</p>
]]></description><pubDate>Tue, 24 Mar 2026 03:26:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47498293</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47498293</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47498293</guid></item><item><title><![CDATA[New comment by Art9681 in "LLMs learn what programmers create, not how programmers work"]]></title><description><![CDATA[
<p>Is "how programmers work" a useful and provable metric? No? Then it belongs in philosophy discussions. How you work and how I work are different. Your work may have ended up in the LLM training data and mine did not, or vice versa.<p>Can you objectively analyze how VSCode adapts to your way of working without our interference?<p>Did you test your theory with the actual frontier LLMs (which Kimi K2.5 is not, BTW)?</p>
]]></description><pubDate>Tue, 24 Mar 2026 03:22:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47498274</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47498274</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47498274</guid></item><item><title><![CDATA[New comment by Art9681 in "I'm 60 years old. Claude Code killed a passion"]]></title><description><![CDATA[
<p>I enjoy the journey too. The journey is building systems, not coding. Coding was always the most tedious and least interesting part of it. Thinking about the system, thinking about its implementation details, iterating and making it better and better. Nothing has changed with AI. My ambition grew with the technology. Now I don't waste time on simple systems. I can get to work doing what I've always thought would be impossible, or take years. I can fail faster than ever and pivot sooner.<p>It's the best thing to happen to systems engineering.</p>
]]></description><pubDate>Sun, 15 Mar 2026 13:17:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47387116</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47387116</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47387116</guid></item><item><title><![CDATA[New comment by Art9681 in "Claude is an Electron App because we've lost native"]]></title><description><![CDATA[
<p>Should've used Go.</p>
]]></description><pubDate>Wed, 04 Mar 2026 01:53:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47241986</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47241986</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47241986</guid></item><item><title><![CDATA[New comment by Art9681 in "MacBook Pro with M5 Pro and M5 Max"]]></title><description><![CDATA[
<p>It's going to be faster no matter what. My M3 Max prints tokens faster than I can read for the new MoE models. It's the prompt processing that kills it once the context grows beyond a threshold, which is easy to hit in modern agentic loops.</p>
]]></description><pubDate>Wed, 04 Mar 2026 01:48:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47241950</link><dc:creator>Art9681</dc:creator><comments>https://news.ycombinator.com/item?id=47241950</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47241950</guid></item></channel></rss>