<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: HarHarVeryFunny</title><link>https://news.ycombinator.com/user?id=HarHarVeryFunny</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 11:22:11 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=HarHarVeryFunny" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by HarHarVeryFunny in "The M×N problem of tool calling and open-source models"]]></title><description><![CDATA[
<p>The bottom line is that MCP doesn't change anything in the way the model discovers and invokes tools, so MCP doesn't help with the lack of a standard tool-call syntax.<p>1) In basic non-MCP tool use, the client (e.g. an agent) registers (advertises) the tools it wants to make available to the model by sending an appropriate chunk of JSON to the model as part of every request (since the model is stateless), and if the model wants to use a tool it generates a corresponding tool-call chunk of JSON in the output.<p>2) For built-in tools like web_search, the tool is actually implemented server-side before the response is sent back to the client. The server sees the tool-invocation JSON in the response, calls the tool, and replaces the tool-call JSON with the tool output before sending the updated response back to the client.<p>3) For non-built-in tools, such as the edit tool provided by a coding agent, the tool-invocation JSON is not intercepted server-side, and is instead returned as-is to the client (agent) as part of the response. The client is now responsible for recognizing these tool invocations and replacing the invocation JSON with the tool output, just as the server would have done for built-in tools. The actual "tool call" can be implemented however the client likes - either internally to the client or by calling some external API.<p>4) MCP tools work exactly the same as other client-provided tools, aside from how the client learns about them, and implements them if the model chooses to use them. This all happens client-side, with the server/model unaware that these tools are any different from the others the client is offering. The same JSON tool-registration and JSON tool-call syntax is used.<p>What happens is that the client's configuration tells it which MCP servers to support, and as part of initialization the client calls each MCP server to ask what tools it is providing. 
The client then advertises/registers these "discovered" MCP tools to the model in the normal way. When the client receives a tool call in the model response and sees that it is an MCP-provided tool, it knows it has to make an MCP call to the MCP server to execute it.<p>TL;DR<p>o the client/agent talks the standard MCP protocol to the MCP servers<p>o the client/agent talks the model-specific tool-use protocol to the model</p>
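The flow in (1)-(4) can be sketched as a minimal client loop. All the names here (call_model, the reply dict shapes, the .call() method on an MCP client) are hypothetical, not any particular provider's or SDK's API - the point is just that locally implemented tools and MCP tools are dispatched identically, and the model never sees the difference:

```python
# Hypothetical sketch of a client/agent tool-call loop. The tool specs are
# resent with every request because the model is stateless.

def run_tool(name, args, local_tools, mcp_clients):
    """Dispatch a tool call. Local tools and MCP tools look identical
    to the model; only the client knows the difference."""
    if name in local_tools:
        return local_tools[name](**args)       # client-internal implementation
    return mcp_clients[name].call(name, args)  # forwarded to an MCP server

def agent_turn(call_model, messages, tool_specs, local_tools, mcp_clients):
    """One agent turn: send history + tool specs, execute any tool call the
    model emits, append the result, and loop until plain text comes back."""
    while True:
        reply = call_model(messages=messages, tools=tool_specs)
        if reply["type"] != "tool_call":
            return reply["text"]
        result = run_tool(reply["name"], reply["args"], local_tools, mcp_clients)
        messages.append({"role": "tool", "name": reply["name"], "content": result})
```

In this sketch the only MCP-specific code is the dispatch branch in run_tool - everything the model sees (tool_specs in, tool-call JSON out) is unchanged, which is the point of (4) above.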
]]></description><pubDate>Tue, 14 Apr 2026 21:11:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47771590</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47771590</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47771590</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "The M×N problem of tool calling and open-source models"]]></title><description><![CDATA[
<p>It's not that strange - the industry wants customer lock-in, not commodification.</p>
]]></description><pubDate>Tue, 14 Apr 2026 19:23:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47770204</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47770204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47770204</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "Write less code, be more responsible"]]></title><description><![CDATA[
<p>> He didn’t say that.<p>Actually he did, or something very close to it.<p>Obviously you can SOMETIMES add more developers to a project and successfully speed it up, but Brooks' point was that it can just as easily have the opposite effect and slow the project down.<p>The main reason Brooks gives for this is the extra overhead you've just added to the project in terms of communications, management, etc. In fact, increasing team size always makes the team less efficient - it adds more overhead - and the question is whether each new person adds enough value to offset or overcome this.<p>Most experienced developers realize this intuitively - it's always faster to have the smallest team of the best people possible.<p>Of course some projects are just so huge that a large team is unavoidable, but don't expect a linear speedup from adding more people. A 20-person team will not be twice as fast as a 10-person team. This is the major point of the book, and the reason for the title "The Mythical Man-Month". The myth is that men and months can be traded off, such that a "100 man-month" project that would take 10 men 10 months could therefore be accomplished in 1 month with a team of 100. 
The team of 100 may in fact take more than 10 months, since you have just turned a smallish, efficient team into a chaotic mess.<p>Adding an AI "team member" is of course a bit different from adding a human team member, but maybe not that different, and the reason is basically the same - there are negatives as well as positives to adding that new member, and it is only a net win if the positives outweigh the negatives (extra layers of specifications/guardrails, interaction, babysitting and correction - knowing when context rot has set in and it's time to abort and reset, etc).<p>With AI you are typically interactively "vibe coding", even if in responsible fashion with specifications and guardrails, so the "new guy" isn't working in parallel with you, but is rather taking up all your time, and now his/its prodigious code output needs reviewing by someone, unless you choose to omit that step.</p>
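Brooks' overhead argument can be made concrete with a back-of-envelope calculation: the number of pairwise communication channels in a team of n people grows as n(n-1)/2, so doubling headcount roughly quadruples the coordination overhead:

```python
# Pairwise communication channels in a team of n people: n*(n-1)/2.
# Doubling the team roughly quadruples coordination overhead, which is
# why 2x the people never delivers 2x the speed.
def channels(n):
    return n * (n - 1) // 2
```

For example, channels(10) is 45, channels(20) is 190 (more than 4x, not 2x), and channels(100) is 4950 - a team ten times larger carries over a hundred times the potential communication paths.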
]]></description><pubDate>Tue, 14 Apr 2026 18:19:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47769255</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47769255</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47769255</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "Taking on CUDA with ROCm: 'One Step After Another'"]]></title><description><![CDATA[
<p>Triton, while a compiler, generates code at a lower level than CUDA or ROCm.<p>The machine code that actually runs on NVIDIA and AMD GPUs is SASS and AMDGCN respectively, and in each case there is also an intermediate level of representation:<p>CUDA -> PTX -> SASS<p>ROCm/HIP -> LLVM-IR -> AMDGCN<p>The Triton compiler isn't generating CUDA or ROCm code - it generates its own generic MLIR intermediate representation, which then gets converted into PTX or LLVM-IR, with vendor-specific tools doing the final step.<p>If you are interested in efficiency but want to write high-level code, then you might use PyTorch's torch.compile, which generates Triton kernels, etc.<p>If you really want to squeeze the highest performance out of an NVIDIA GPU then you would write in PTX assembler, not CUDA, and for AMD in GCN assembler.</p>
]]></description><pubDate>Tue, 14 Apr 2026 15:39:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47767106</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47767106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47767106</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "The M×N problem of tool calling and open-source models"]]></title><description><![CDATA[
<p>There already exist products like LiteLLM that adapt tool calling to different providers. FWIW, incompatibility isn't just an open-source problem - OpenAI and Anthropic also use different syntax for tool registration and invocation.<p>I would guess that the lack of standardization of which tools different agents provide is as much of a problem as the differences in syntax, since the ideal case would be for a model to be trained end-to-end for use with a specific agent and set of tools, as I believe Anthropic do. Any agent interacting with a model that wasn't specifically trained to work with that agent/toolset is going to be at a disadvantage.</p>
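To illustrate the syntax difference, here is the kind of adaptation a shim like LiteLLM performs: the same tool described in simplified OpenAI-style and Anthropic-style registration JSON. The shapes below are abbreviated from the public docs and this converter is an illustrative sketch, not LiteLLM's actual code:

```python
# Sketch: same tool, two registration syntaxes. OpenAI nests the spec under a
# "function" key and calls the JSON Schema "parameters"; Anthropic uses a flat
# dict with "input_schema". The schema itself is identical JSON Schema.

def openai_to_anthropic_tool(tool):
    """Convert a (simplified) OpenAI-style tool spec to Anthropic style."""
    fn = tool["function"]
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        "input_schema": fn["parameters"],  # same JSON Schema, different key
    }
```

The registration half is the easy part - both sides carry the same JSON Schema - but the invocation syntax the model emits, and how results are fed back, also differ per provider, which is what the adapter layer has to paper over on every round trip.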
]]></description><pubDate>Tue, 14 Apr 2026 13:42:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47765563</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47765563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47765563</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "Stanford report highlights growing disconnect between AI insiders and everyone"]]></title><description><![CDATA[
<p>OK, but your post reads as if you think that AI being the cause of layoffs can't be true if AI is "worthless" (less capable than they are assuming), which is false.<p>CEOs are laying people off because of AI because they think it will save them money, but they are doing so based on misinformation, largely due to their own insistence that everyone use AI and report how much they are using it - they are just hearing what they asked to hear (just like Mao hearing of impossible levels of rice production during the "Great Leap Forward"). I'm not making this up - I've seen it first hand.<p>You can see the proof of this - companies laying off because of what they mistakenly believe AI can do - in companies like Salesforce, forced into an embarrassing U-turn, hiring people back when reality set in. At least Salesforce was quick to correct - most big companies are not so nimble, or so ready to admit their own mistakes.<p>We seem to have reached mania-like levels of rice-production reporting, with companies like Meta now taking AI token usage as a proxy for productivity and/or a measure of something positive, and apparently having a huge leaderboard displaying who is using the most (i.e. spending the most money!). The only guaranteed outcome of this is that they will indeed see massive use of tokens, and a massive AI bill, and then in a year or so will likely be left scratching their heads wondering why nothing much appears to have changed.</p>
]]></description><pubDate>Tue, 14 Apr 2026 12:22:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47764684</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47764684</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47764684</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "Stanford report highlights growing disconnect between AI insiders and everyone"]]></title><description><![CDATA[
<p>You're apparently assuming that AI-related layoffs are rational - that those making the decisions have good information about what their own organizations are achieving with AI.<p>I think this is far from the truth. In many companies AI has become a religion, not a new technology to be evaluated and judged. Employees are told to use AI, and to report how much they are using it, and all understand the consequences of giving the wrong answer. The CEO hears the tales of rampant AI use and productivity that he demanded to hear, pats himself on the back, and initiates another layoff. Meanwhile, in the trenches, little if anything has actually changed.</p>
]]></description><pubDate>Mon, 13 Apr 2026 23:56:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47759509</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47759509</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47759509</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "Stanford report highlights growing disconnect between AI insiders and everyone"]]></title><description><![CDATA[
<p>"You think that AI will take your job, disrupt society, and has a 25% chance of being an EXISTENTIAL threat?! Who told you that?!"</p>
]]></description><pubDate>Mon, 13 Apr 2026 23:38:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47759369</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47759369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47759369</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "The peril of laziness lost"]]></title><description><![CDATA[
<p>> But what happens when new requirements come in for just one of the things?<p>I guess it could happen, but that depends on your mental model when coding - if you're just pattern-matching similar chunks of code (which are not being used in a semantically identical way) then all bets are off, although that seems a very alien idea of how someone might code.<p>OTOH, if you have a higher-level mental model of what you are doing, then it's not a matter of "this looks like common code" but rather "I need to do the exact same operation here" (same inputs/outputs/semantics). Maybe I'm expressing it poorly, but I can't recall ever having to fork a function because requirements at two call sites diverged.</p>
]]></description><pubDate>Mon, 13 Apr 2026 17:22:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47755199</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47755199</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47755199</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "The peril of laziness lost"]]></title><description><![CDATA[
<p>The trouble with rewarding token usage is the same as rewarding LOC written/generated - if that's what you are asking for then that is what you will get. Asking the AI to "scan the entire codebase for vulnerabilities" would certainly be a good way to climb the leaderboard!</p>
]]></description><pubDate>Mon, 13 Apr 2026 00:40:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47746152</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47746152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47746152</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "The peril of laziness lost"]]></title><description><![CDATA[
<p>Meta apparently now has a "leaderboard" for who is using the most AI - consuming the most tokens. Must make Anthropic happy, since Meta is using Claude, and accounts for some significant percentage (10%? 20%?) of their total volume.</p>
]]></description><pubDate>Mon, 13 Apr 2026 00:13:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47745983</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47745983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47745983</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "Taking on CUDA with ROCm: 'One Step After Another'"]]></title><description><![CDATA[
<p>If you don't want/need to program at lowest level possible, then Pytorch seems the obvious option for AMD support, or maybe Mojo. The Triton compiler would be another option for kernel writing.</p>
]]></description><pubDate>Mon, 13 Apr 2026 00:07:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47745943</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47745943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47745943</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "How Beyond Meat sank from a $14B plant-based protein powerhouse to a penny stock"]]></title><description><![CDATA[
<p>Vegetarian and vegan menu options are extremely common here in the US too, but I'd say not so much these meat-substitute products at fast food places. One of the big chains (Burger King? McDonalds?) had a Beyond burger when it first came out, but otherwise you need to avoid the big chains, where you may find a veggie burger on the menu, just called that - a veggie patty of some nature, not pretending to be meat. You can buy Quorn etc products in all the supermarkets.</p>
]]></description><pubDate>Sun, 12 Apr 2026 23:57:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47745865</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47745865</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47745865</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "The peril of laziness lost"]]></title><description><![CDATA[
<p>Personally, even for a prototype, I'd introduce a function as soon as I saw (or anticipated) that I needed to do the same thing twice - mainly so that if I want to change it later there is one place to change, not many. The same goes for production code of course, but when prototyping the code structure may be quite fluid, and you want to keep changes easy, not have to remember to update multiple copies of the same code.<p>I'm really talking about manually written code, but the same applies to AI-written code. Having a single place to update when something needs changing is always going to be less error-prone.<p>The major concession I make to modularity when developing a prototype is often to put everything into a single source file, to make it fast to iteratively refactor etc, rather than split it up into modules.</p>
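A toy example of the "one place to change" point (all names here are made up for illustration). The duplicated version repeats a formatting rule; the factored version puts it in one function, so a later change can't miss a copy:

```python
# Duplicated: the "[NAME] event" formatting rule is written out twice,
# so changing it later means remembering to update both copies.
def start_message_dup(task):
    return "[" + task.upper() + "] started"

def end_message_dup(task):
    return "[" + task.upper() + "] finished"

# Factored: the rule lives in exactly one function; change it once.
def tagged(task, event):
    return "[" + task.upper() + "] " + event

def start_message(task):
    return tagged(task, "started")

def end_message(task):
    return tagged(task, "finished")
```

Whether the code was typed by hand or generated by an AI, only the second version gives you a single edit point when the format inevitably needs to change.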
]]></description><pubDate>Sun, 12 Apr 2026 23:49:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47745791</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47745791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47745791</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "The peril of laziness lost"]]></title><description><![CDATA[
<p>Writing it twice makes sense if time permits, or the opportunity presents itself. The first time may be somewhat exploratory (maybe a throw-away prototype); the second time you better understand the problem and can do a better job.<p>A third time, with a new abstraction, is where you need to be careful. Fred Brooks ("The Mythical Man-Month") calls this the "second-system effect", where the confidence of having done something once (for real, not just a prototype) can lead to an over-engineered and unnecessarily complex "version 2", as you are tempted to "make it better" by adding layers of abstraction and bells and whistles.</p>
]]></description><pubDate>Sun, 12 Apr 2026 22:26:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47745201</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47745201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47745201</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "How Beyond Meat sank from a $14B plant-based protein powerhouse to a penny stock"]]></title><description><![CDATA[
<p>I guess I just don't get it. Obviously there's a decent-sized market for vegetarian convenience food, but the meat-based branding, and the attempts to copy the texture/flavor of meat products, would seem a turn-off for that market. Good flavors and mouth-feel (not tofu!) are important, but why explicitly try to copy meat unless meat eaters are the market you are targeting?<p>It'd make more sense to me to have different products/brands/advertising for different market segments. For the meat eaters the marketing would be "healthy/cheap, tastes just like beef/chicken" (which seems to be what Beyond Meat is going for), and for the vegetarians "delicious flavors, plant-based, high in protein" (not "fake beef").</p>
]]></description><pubDate>Sun, 12 Apr 2026 20:42:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47744245</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47744245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47744245</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "How Beyond Meat sank from a $14B plant-based protein powerhouse to a penny stock"]]></title><description><![CDATA[
<p>Are competitors doing well? It's really a bit of a weird product category - not really appealing to vegetarians or meat eaters. Who are they marketing it to?</p>
]]></description><pubDate>Sun, 12 Apr 2026 19:19:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47743344</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47743344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47743344</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "AI Will Be Met with Violence, and Nothing Good Will Come of It"]]></title><description><![CDATA[
<p>America has only the shallowest appearance of a democracy in which voters control who is elected.<p>The electoral college system, coupled with its winner-takes-all implementation in most states, means that voting is a sham for 80% of the population. The other 20% live in a swing state, where their vote can at least potentially affect the outcome of an election, but even there "your vote" will literally be cast opposite to what you put on the ballot unless you end up being part of the winning majority.</p>
]]></description><pubDate>Sun, 12 Apr 2026 16:47:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47741791</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47741791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47741791</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "AI Will Be Met with Violence, and Nothing Good Will Come of It"]]></title><description><![CDATA[
<p>America routinely ranks fairly low on the "happiest countries" rankings. Currently #24 behind most of Europe, with the Scandinavian countries typically at the top.<p><a href="https://worldpopulationreview.com/country-rankings/happiest-countries-in-the-world" rel="nofollow">https://worldpopulationreview.com/country-rankings/happiest-...</a><p>Clearly other countries are doing something to keep their citizens happy that the US is not copying.<p>Given that US politics and policy is driven by lobbyists and tribal infighting, would you really expect anything different?</p>
]]></description><pubDate>Sun, 12 Apr 2026 16:35:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47741658</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47741658</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47741658</guid></item><item><title><![CDATA[New comment by HarHarVeryFunny in "Small models also found the vulnerabilities that Mythos found"]]></title><description><![CDATA[
<p>Most of the comments here seem to be responding to the issue of finding vulnerabilities rather than exploiting them, but the Anthropic claim is that the Mythos advance is being able to actually develop exploits, whereas Opus 4.6 had been able to find vulnerabilities but was poor at developing exploits for them.<p>It's also noteworthy that Anthropic attributes Mythos' improvement to advances in "coding, reasoning and autonomy", and the autonomy part seems especially important, since they go on to say that developing exploits included adding debug code to projects, running them under a debugger, etc.<p>When comparing the capabilities of Mythos to previous-generation and/or smaller models, it would therefore be useful to distinguish between identifying potential vulnerabilities and actually trying to build exploits for them in agentic fashion. Finding the "needle in a haystack" (a potential vulnerability) is one aspect; the other is an agentic exploit-writing harness being handed the needle and asked to try to exploit it.<p>I wonder how much effort Anthropic put into building the harnesses and environments for Mythos to run, modify and debug code? For example, was Mythos set up to be able to build and run a modified BSD in some virtual environment, or did it just take suspect functions and test those in isolation?<p>It'd be interesting to put the capabilities of Opus 4.6, Mythos, and other models into perspective by comparing them to traditional non-AI static-analysis security scanning tools. 
Anthropic mention that the open-source projects they scanned came from the OSS-Fuzz corpus, but as far as I can see they don't say what other tools have, or have not, been used to scan these projects.<p>It'd also be interesting to know to what extent Mythos was explicitly RL-trained to develop exploits (especially since it sounds as if Anthropic have the dataset and environment needed to do so), as opposed to this just being a natural consequence of the model being better. If so, that might be a large part of why they are not releasing it - you can't really position yourself as strong on security if you deliberately develop and release a hacking tool!</p>
]]></description><pubDate>Sun, 12 Apr 2026 13:02:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47739133</link><dc:creator>HarHarVeryFunny</dc:creator><comments>https://news.ycombinator.com/item?id=47739133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47739133</guid></item></channel></rss>