<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: CuriouslyC</title><link>https://news.ycombinator.com/user?id=CuriouslyC</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 11:18:02 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=CuriouslyC" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by CuriouslyC in "Codex for almost everything"]]></title><description><![CDATA[
<p>They were running a 2x rate limit promo last month.</p>
]]></description><pubDate>Thu, 16 Apr 2026 23:16:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47800753</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47800753</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47800753</guid></item><item><title><![CDATA[New comment by CuriouslyC in "Codex for almost everything"]]></title><description><![CDATA[
<p>To be fair, GPT 5.4 is mostly a better model than Opus 4.6 in terms of quality of work. The tradeoff is that it's less autonomous and takes longer to complete equivalent tasks.</p>
]]></description><pubDate>Thu, 16 Apr 2026 23:15:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47800744</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47800744</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47800744</guid></item><item><title><![CDATA[New comment by CuriouslyC in "The beginning of scarcity in AI"]]></title><description><![CDATA[
<p>The dynamics vastly favor China. Part of the reason the US sprinting toward "ASI" isn't totally boneheaded is that the US and its industry need a hail-mary play to "win" the game; if they play it safe, they lose for sure.</p>
]]></description><pubDate>Thu, 16 Apr 2026 22:17:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47800237</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47800237</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47800237</guid></item><item><title><![CDATA[New comment by CuriouslyC in "The Closing of the Frontier"]]></title><description><![CDATA[
<p>The irony is that we've just shifted the complexity. Anyone can make something now, but since everyone is making things, you now need to compete on reach/distribution more aggressively. The new "capital" is social media juice and pre-AI rep. Same problem, different skin.</p>
]]></description><pubDate>Sun, 12 Apr 2026 19:43:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47743608</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47743608</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47743608</guid></item><item><title><![CDATA[New comment by CuriouslyC in "Has Mythos just broken the deal that kept the internet safe?"]]></title><description><![CDATA[
<p>I've been mystified for a while by how sticky Anthropic's marketing is. It's really surprising given how poorly they run community relations compared with OAI, and how they're only now starting to be transparent.</p>
]]></description><pubDate>Fri, 10 Apr 2026 23:57:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47725495</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47725495</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47725495</guid></item><item><title><![CDATA[New comment by CuriouslyC in "I still prefer MCP over skills"]]></title><description><![CDATA[
<p>Just to poke some holes in this in a friendly way:<p>* What algorithm does tool_search use?<p>* Can tool_search search subcommands only?<p>* What's your argument that a harness with a bash wrapper hacked into the MCP to handle composition is a better idea than just using a CLI?<p>* Shell + CLI gives you basically infinite workflow possibilities via composition. Given the prior point, perhaps you could get a lot of that with hacked-in MCP composition, but given the training data, I'll take an agent's ability to write bash scripts over its ability to compose MCPs by far.</p>
]]></description><pubDate>Fri, 10 Apr 2026 18:14:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47721740</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47721740</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47721740</guid></item><item><title><![CDATA[New comment by CuriouslyC in "I still prefer MCP over skills"]]></title><description><![CDATA[
<p>Cloudflare wrote a blog post about this exact case. The cloud providers and their CLIs are the canonical example, so 100% not a strawman.</p>
]]></description><pubDate>Fri, 10 Apr 2026 14:33:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47718781</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47718781</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47718781</guid></item><item><title><![CDATA[New comment by CuriouslyC in "I still prefer MCP over skills"]]></title><description><![CDATA[
<p>If you have an API with thousands of endpoints, that MCP description is going to totally rot your context and make your model dumb, and there's no mechanism for progressive disclosure of parts of the tool's abilities, like there is for CLIs, where you can do something like:<p>tool --help<p>tool subcommand1 --help<p>tool subcommand2 --help<p>man tool | grep "thing I care about"<p>As for stateful behavior, say you have the Google Docs or email MCP. You want to search org-wide for docs or emails that match some filter, make the results a data set, then do analysis. To do this with MCP, the model has to write the files manually after reading however many KB of input from the MCP. With a CLI it's just "tool >> starting_data_set.csv"</p>
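<p>A minimal sketch of the pattern above, using git as a stand-in for the hypothetical "tool" (any subcommand-style CLI works the same way; the file name starting_data_set.txt is illustrative):</p>

```shell
# Progressive disclosure: pull in only the help text you need.
git --help | head -n 5            # top-level overview, a few lines of context
git help -a | grep -i "stash"     # targeted lookup instead of loading everything

# Stateful composition via redirection -- no MCP wrapper required:
git help -a > starting_data_set.txt   # dump output to a file once
wc -l starting_data_set.txt           # downstream tools consume the file directly
```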
]]></description><pubDate>Fri, 10 Apr 2026 12:51:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47717377</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47717377</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47717377</guid></item><item><title><![CDATA[New comment by CuriouslyC in "I still prefer MCP over skills"]]></title><description><![CDATA[
<p>CLIs are technically better for a number of reasons.<p>If an enterprise already has internal tooling with authn/z, there's no reason to overlay on top of that.<p>MCP's main value is as a structured description of an agent-usable subset of an API surface with community traction, so you can expect it to exist and to be more relevant than the OpenAPI docs.</p>
]]></description><pubDate>Fri, 10 Apr 2026 12:48:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47717336</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47717336</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47717336</guid></item><item><title><![CDATA[New comment by CuriouslyC in "I still prefer MCP over skills"]]></title><description><![CDATA[
<p>MCP is less discoverable than a CLI. You can have detailed, progressive disclosure for a CLI via --help and subcommands.<p>MCPs need to be wrapped to be composed.<p>MCPs need to implement stateful behavior themselves; shell + CLI gives it to you for free.<p>MCP isn't great; its main value is that it's got uptake, it's structured, and it's "for agents." You can wrap/introspect MCP to do lots of neat things.</p>
]]></description><pubDate>Fri, 10 Apr 2026 12:33:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47717147</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47717147</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47717147</guid></item><item><title><![CDATA[New comment by CuriouslyC in "Maine is about to become the first state to ban major new data centers"]]></title><description><![CDATA[
<p>The deserts around El Paso are still quite a bit more alive than the ugliest desert I've ever seen (the stretch between Phoenix and San Diego gets that dubious honor).</p>
]]></description><pubDate>Thu, 09 Apr 2026 23:04:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711444</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47711444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711444</guid></item><item><title><![CDATA[New comment by CuriouslyC in "Maine is about to become the first state to ban major new data centers"]]></title><description><![CDATA[
<p>I've driven through all of Texas twice, and had to spend time in Austin and Houston for work, but never had to live there, so I'd like to think I'm informed without being biased.<p>Besides the heavily oak-covered hill country west of Austin, it's pretty much the ugliest landscape in the country. I will admit the west Texas desert is less ugly than the desert of southern Arizona/eastern California, but north/east Texas is the flattest, least interesting part of the Mississippi basin (Nebraska/Kansas/Oklahoma are similarly meh, but you don't have the insane humidity).</p>
]]></description><pubDate>Thu, 09 Apr 2026 22:42:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711231</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47711231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711231</guid></item><item><title><![CDATA[New comment by CuriouslyC in "Muse Spark: Scaling towards personal superintelligence"]]></title><description><![CDATA[
<p>You think it has nothing to do with it. Even they have only a loose understanding of how trying to treat Claude like a real being ultimately shapes the model's behavior.<p>For example, Claude has a "turn evil in response to reinforced reward hacking" behavior, which is a fairly uniquely Claude thing (as far as I've seen, anyhow), and very likely the result of that attempt to imbue personhood.</p>
]]></description><pubDate>Wed, 08 Apr 2026 22:46:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47697218</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47697218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47697218</guid></item><item><title><![CDATA[New comment by CuriouslyC in "Muse Spark: Scaling towards personal superintelligence"]]></title><description><![CDATA[
<p>Anthropic has mostly just been focused on coding/terminal work longer, and their pro-tier model is coding-focused, unlike the GPT and Gemini pro-tier models, which have been optimized for science.<p>Their whole "training the LLM to be a person" technique probably contributes to its pleasant conversational behavior, to its less annoying refusals (GPT 5.2+ got obnoxiously aligned), and a bit to its greater autonomy.<p>Overall they don't have any real moat, but they are more focused than their competition (and their marketing team is slaying).</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:55:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695453</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47695453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695453</guid></item><item><title><![CDATA[New comment by CuriouslyC in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>To be fair, starting with a toy model to get a first-order approximation and then building on it is kind of how theoretical science is done.</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:45:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695320</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47695320</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695320</guid></item><item><title><![CDATA[New comment by CuriouslyC in "Employers use your personal data to figure out the lowest salary you'll accept"]]></title><description><![CDATA[
<p>I foresee people shopping in masks, with phones off, using cash as a protest, and poor people serving as black-market designated shoppers.</p>
]]></description><pubDate>Mon, 06 Apr 2026 14:43:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47661594</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47661594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47661594</guid></item><item><title><![CDATA[New comment by CuriouslyC in "The threat is comfortable drift toward not understanding what you're doing"]]></title><description><![CDATA[
<p>Anthropic has shared that API inference has a ~60% margin. OpenAI's margin might be slightly lower since they price aggressively, but I would be surprised if it were much different.</p>
]]></description><pubDate>Sun, 05 Apr 2026 13:15:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649130</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47649130</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649130</guid></item><item><title><![CDATA[New comment by CuriouslyC in "The threat is comfortable drift toward not understanding what you're doing"]]></title><description><![CDATA[
<p>Ironically, apples are one of the fruits where tree ripening isn't a big deal for a lot of varietals. You should have used tomatoes as the example; the difference there is night and day pretty much across the board.<p>If humans can prove that bespoke human code brings value, it'll stick around. I expect the cases where this is true will just gradually erode over time.</p>
]]></description><pubDate>Sun, 05 Apr 2026 13:13:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649101</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47649101</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649101</guid></item><item><title><![CDATA[New comment by CuriouslyC in "The threat is comfortable drift toward not understanding what you're doing"]]></title><description><![CDATA[
<p>Fusion isn't a good example. Self-driving cars are a battle between regulation and nines of reliability; if we were willing to accept self-driving cars that crash as much as humans do, they'd be here already.<p>Whatever models suck at, we can pour money into making them do better. It's very cut and dried. The squirrely bit is how that contributes to "general intelligence" and whether the models are progressing toward overall autonomy because of our changes. That mostly matters for the AGI mouthbreathers, though; people doing actual work just care that the models have improved.</p>
]]></description><pubDate>Sun, 05 Apr 2026 13:07:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649037</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47649037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649037</guid></item><item><title><![CDATA[New comment by CuriouslyC in "The threat is comfortable drift toward not understanding what you're doing"]]></title><description><![CDATA[
<p>Just because Bob doesn't know, e.g., Rust syntax and library modules well doesn't mean that Bob can't learn an algorithm to solve a difficult problem. The AI might suggest classes of algorithms that could be applicable given the real-world constraints, and help Bob set up an experimental plan to test different algorithms for efficacy in the situation, but Bob's intuition is still in the driver's seat.<p>Of course, that assumes a Bob with drive and agency. He could just as easily tell the AI to fix it without trying to stay in the loop.</p>
]]></description><pubDate>Sun, 05 Apr 2026 13:03:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649003</link><dc:creator>CuriouslyC</dc:creator><comments>https://news.ycombinator.com/item?id=47649003</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649003</guid></item></channel></rss>