<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: __alexs</title><link>https://news.ycombinator.com/user?id=__alexs</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 18 Apr 2026 10:49:09 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=__alexs" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by __alexs in "This year’s insane timeline of hacks"]]></title><description><![CDATA[
<p>Anthropic's marketing team are terrifyingly good. I wonder if Opus came up with this plan?</p>
]]></description><pubDate>Mon, 13 Apr 2026 16:57:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47754882</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47754882</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47754882</guid></item><item><title><![CDATA[New comment by __alexs in "I still prefer MCP over skills"]]></title><description><![CDATA[
<p>It cannot do "anything" with the tools. Tools are very constrained: the agent must emit the tool call into its context, and it can only receive the tool's response directly back into its context.<p>Tools themselves also cannot be composed in any SOTA models. Composition is not a feature the tool schema supports, and models are not trained on it.<p>Models obviously understand the general concept of function composition, but we don't currently provide the environments in which this is actually possible outside of highly generic tools like Bash or sandboxed execution environments like <a href="https://agenttoolprotocol.com/" rel="nofollow">https://agenttoolprotocol.com/</a></p>
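<p>To make the constraint concrete, a minimal sketch, assuming a generic OpenAI-style tool-call payload modelled as Python dicts (the tool names and fields here are made up):<pre><code># A tool call as current schemas define it: arguments must be literal
# JSON values. No field can reference another call's result.
call_a = {"name": "get_weather", "arguments": {"city": "London"}}

# The tool's response lands straight back in the model's context as text...
response_a = '{"temp_c": 14}'

# ...so "composing" tools means the model re-emits the value by hand:
call_b = {"name": "convert_units",
          "arguments": {"value": 14, "from": "C", "to": "F"}}
# The schema has no equivalent of arguments={"value": call_a.result}.
</code></pre>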
]]></description><pubDate>Fri, 10 Apr 2026 13:48:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47718141</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47718141</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47718141</guid></item><item><title><![CDATA[New comment by __alexs in "I still prefer MCP over skills"]]></title><description><![CDATA[
<p>The agent <i>cannot</i> compose MCPs.<p>What it <i>can</i> do is call multiple MCPs, dumping tons of crap into the context, and then separately run some analysis on that data.<p>Composable MCPs would require some sort of external sandbox in which the agent can write small bits of code to transform and filter the results from one MCP to the next, as in the sketch below.</p>
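<p>A minimal sketch of what such a sandbox script might look like, assuming a hypothetical mcp_call helper (stubbed here with canned data so the snippet runs):<pre><code># Hypothetical script an agent would write and execute in a sandbox.
def mcp_call(server, tool, **args):
    # Stub standing in for a real MCP client.
    canned = {
        ("github", "list_issues"): [
            {"title": "bug A", "labels": ["p0"]},
            {"title": "idea B", "labels": ["p2"]},
        ],
        ("jira", "create_ticket"): {"ok": True},
    }
    return canned[(server, tool)]

# Compose: fetch from one MCP, filter locally, feed the next MCP.
issues = mcp_call("github", "list_issues")
urgent = [i["title"] for i in issues if "p0" in i["labels"]]
for title in urgent:
    mcp_call("jira", "create_ticket", title=title)

# Only this one-line summary re-enters the model's context,
# not the two raw result dumps.
print(f"filed {len(urgent)} urgent ticket(s)")
</code></pre>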
]]></description><pubDate>Fri, 10 Apr 2026 12:17:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47716951</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47716951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47716951</guid></item><item><title><![CDATA[New comment by __alexs in "Claude mixes up who said what and that's not OK"]]></title><description><![CDATA[
<p>This hasn't been the full story for years now. All SOTA models are strongly post-trained with reinforcement learning to improve performance on specific problems and interaction patterns.<p>The vast majority of this training data is generated synthetically.</p>
]]></description><pubDate>Thu, 09 Apr 2026 11:50:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47702442</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47702442</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47702442</guid></item><item><title><![CDATA[New comment by __alexs in "Claude mixes up who said what and that's not OK"]]></title><description><![CDATA[
<p>I think OpenAI and Anthropic probably have a lot of that lying around by now.</p>
]]></description><pubDate>Thu, 09 Apr 2026 11:28:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47702241</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47702241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47702241</guid></item><item><title><![CDATA[New comment by __alexs in "Claude mixes up who said what and that's not OK"]]></title><description><![CDATA[
<p>The models are already massively overtrained. Perhaps you could do something like initialise the two new token sets based on the shared data, then use existing chat logs to train it to understand the difference between input and output content? That's only a single extra phase.</p>
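<p>A minimal PyTorch sketch of that initialisation step, assuming a standalone embedding table (all sizes and names are made up for illustration):<pre><code>import torch

vocab, d_model = 50_000, 1_024
base = torch.randn(vocab, d_model)  # stand-in for pretrained embeddings

# Duplicate the vocabulary: one "input" copy, one "output" copy, both
# initialised from the shared pretrained vectors so nothing is lost.
coloured = torch.nn.Embedding.from_pretrained(
    torch.cat([base.clone(), base.clone()], dim=0), freeze=False)

# Token id t as input stays t; as output it becomes t + vocab. A short
# extra phase on existing chat logs then trains the two copies apart.
assert coloured.num_embeddings == 2 * vocab
</code></pre>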
]]></description><pubDate>Thu, 09 Apr 2026 10:23:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47701710</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47701710</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47701710</guid></item><item><title><![CDATA[New comment by __alexs in "Claude mixes up who said what"]]></title><description><![CDATA[
<p>Why are tokens not coloured? Would there just be too many params if we doubled the token count so the model could always tell input tokens from output tokens?</p>
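<p>A back-of-envelope answer, with made-up but plausible sizes, suggests not: only the embedding and unembedding tables grow, and they are a small slice of a large model.<pre><code># Illustrative sizes only, not any specific model's real config.
vocab, d_model = 128_000, 8_192
total_params = 70e9                  # assume a ~70B-parameter model

extra = 2 * vocab * d_model          # duplicated embedding + untied unembedding
print(f"extra params: {extra / 1e9:.2f}B "
      f"({100 * extra / total_params:.1f}% of total)")
# -> extra params: 2.10B (3.0% of total)
</code></pre>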
]]></description><pubDate>Thu, 09 Apr 2026 10:10:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47701577</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47701577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47701577</guid></item><item><title><![CDATA[New comment by __alexs in "The Future of Everything Is Lies, I Guess"]]></title><description><![CDATA[
<p>Solving arbitrary logical problems seems to be equivalent to solving the halting problem, so you are probably wise not to make that bet.</p>
]]></description><pubDate>Wed, 08 Apr 2026 16:51:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47692837</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47692837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47692837</guid></item><item><title><![CDATA[New comment by __alexs in "Every GPU That Mattered"]]></title><description><![CDATA[
<p>A lot of the GPUs in this list are basically just the previous GPU, but faster or with more RAM. I kind of thought it was going to focus on interesting new architecture innovations.</p>
]]></description><pubDate>Tue, 07 Apr 2026 09:07:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47672511</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47672511</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47672511</guid></item><item><title><![CDATA[New comment by __alexs in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>If you know even the basics of ethics, then such claims are clearly nonsense. There is no stable, context-independent ethical behaviour. This is a great example of the dangers of motivated reasoning.</p>
]]></description><pubDate>Tue, 07 Apr 2026 08:42:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47672322</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47672322</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47672322</guid></item><item><title><![CDATA[New comment by __alexs in "4D Doom"]]></title><description><![CDATA[
<p>There have been other 4D video games with actually good performance, though.</p>
]]></description><pubDate>Wed, 01 Apr 2026 10:27:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47599044</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47599044</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47599044</guid></item><item><title><![CDATA[New comment by __alexs in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>No it isn't? Are you an AI?</p>
]]></description><pubDate>Wed, 01 Apr 2026 10:24:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47599023</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47599023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47599023</guid></item><item><title><![CDATA[New comment by __alexs in "Claude Code Unpacked : A visual guide"]]></title><description><![CDATA[
<p>Many people seem to believe that Claude Code has some sort of secret sauce in the agent itself for some reason.<p>I have no idea why, because in my experience Claude Code and the same models inside of Cursor behave almost identically. I think all the secret sauce is in the RLHF.</p>
]]></description><pubDate>Wed, 01 Apr 2026 10:22:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47599013</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47599013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47599013</guid></item><item><title><![CDATA[New comment by __alexs in "OpenAI closes funding round at an $852B valuation"]]></title><description><![CDATA[
<p>I thought they needed $7 trillion or they'd be unable to keep training new models?</p>
]]></description><pubDate>Tue, 31 Mar 2026 21:48:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47593934</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47593934</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47593934</guid></item><item><title><![CDATA[New comment by __alexs in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>Looking forward to someone patching it so that it works with non-Anthropic models.</p>
]]></description><pubDate>Tue, 31 Mar 2026 13:56:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47587476</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47587476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47587476</guid></item><item><title><![CDATA[New comment by __alexs in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>Using some ML to derive a sentiment regex seems like a good idea, actually?</p>
]]></description><pubDate>Tue, 31 Mar 2026 13:42:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47587281</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47587281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47587281</guid></item><item><title><![CDATA[New comment by __alexs in "Spring Boot Done Right: Lessons from a 400-Module Codebase"]]></title><description><![CDATA[
<p>Not a strong indicator that enterprises have good taste though, is it?</p>
]]></description><pubDate>Mon, 30 Mar 2026 14:40:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47574981</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47574981</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47574981</guid></item><item><title><![CDATA[New comment by __alexs in "Miscellanea: The War in Iran"]]></title><description><![CDATA[
<p>The reset isn't the problem; entirely nerfing the Red team is the problem. The US took steps to fail to learn from the exercise before it had even finished.</p>
]]></description><pubDate>Wed, 25 Mar 2026 17:39:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47520659</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47520659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47520659</guid></item><item><title><![CDATA[New comment by __alexs in "GitHub appears to be struggling with measly three nines availability"]]></title><description><![CDATA[
<p>No, they were slow at shipping features before, and they are still slow afterwards.</p>
]]></description><pubDate>Mon, 23 Mar 2026 14:01:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47489691</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47489691</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47489691</guid></item><item><title><![CDATA[New comment by __alexs in "Walmart: ChatGPT checkout converted 3x worse than website"]]></title><description><![CDATA[
<p>It's not cynical, it's materialism.<p>Shoppers do not want to pay to shop. Retailers pay thousands to encourage you to shop with them. They are the economic buyers of this feature.</p>
]]></description><pubDate>Mon, 23 Mar 2026 13:07:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47488976</link><dc:creator>__alexs</dc:creator><comments>https://news.ycombinator.com/item?id=47488976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47488976</guid></item></channel></rss>