<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Mentlo</title><link>https://news.ycombinator.com/user?id=Mentlo</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 22:52:50 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Mentlo" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Mentlo in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>For a year and a half now, the gains have come from post-training RL on a harnessed loop. That doesn’t require data, just compute cycles.<p>If that doesn’t worry you, it should.</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:46:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695331</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=47695331</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695331</guid></item><item><title><![CDATA[New comment by Mentlo in "Claude Code Unpacked : A visual guide"]]></title><description><![CDATA[
<p>But the StarCraft training didn’t work by mimicking human strategies - beyond an initial imitation bootstrap, it was RL with a reward function shaped around winning, which allowed non-human and eventually super-human strategies to emerge (such as worker oversaturation).<p>The current training loop for coding is RL as well - so a departure from human coding patterns is not unexpected (even if a departure from human coding structure would be, since that would require developing a new programming language).</p>
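The "reward shaped only around winning" idea can be illustrated with a toy self-play sketch - a minimal stand-in, not the actual StarCraft/AlphaStar pipeline. Here tabular Q-learning plays a Nim-style game (take 1-3 stones from a pile of 10; whoever takes the last stone wins) against itself. The only reward signal is winning; no human games are ever shown, yet the classic "leave the opponent a multiple of four" strategy emerges on its own:

```python
import random

N_STONES, ACTIONS = 10, (1, 2, 3)
# One Q-table shared by both players (self-play on a zero-sum game)
Q = {(s, a): 0.0 for s in range(1, N_STONES + 1) for a in ACTIONS if a <= s}

def legal(s):
    return [a for a in ACTIONS if a <= s]

def train(episodes=20000, alpha=0.1, eps=0.2):
    random.seed(0)
    for _ in range(episodes):
        s = N_STONES
        while s > 0:
            acts = legal(s)
            # epsilon-greedy: mostly exploit, sometimes explore
            a = random.choice(acts) if random.random() < eps else max(acts, key=lambda x: Q[(s, x)])
            s2 = s - a
            if s2 == 0:
                target = 1.0  # taking the last stone wins: the only reward in the game
            else:
                # the opponent moves next; their best value is our loss (negamax backup)
                target = -max(Q[(s2, b)] for b in legal(s2))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2

train()
# Learned opening move from the full pile: take 2, leaving 8 (a multiple of 4)
best = max(legal(N_STONES), key=lambda a: Q[(N_STONES, a)])
```

The negated-max backup is what encodes the alternating-turn, zero-sum structure; everything else the agent discovers from the win signal alone, which is the point of the comparison above.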
]]></description><pubDate>Wed, 01 Apr 2026 09:24:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47598652</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=47598652</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47598652</guid></item><item><title><![CDATA[New comment by Mentlo in "I am definitely missing the pre-AI writing era"]]></title><description><![CDATA[
<p>I tried figuring out the reference with Gemini, and it said this:<p>The immediate reply to that comment is: "On the internet, no one knows you're an editor." This is a direct play on the famous 1993 New Yorker cartoon: "On the Internet, nobody knows you're a dog." By setting the anecdote in 1987 (a few years before the World Wide Web was publicly available), the commenter is implying that back in the analog days, if a dog wanted to be a writer or an editor, they couldn't hide behind a screen—they had to sit in a smoky London pub and do business face-to-face.<p>Which makes a lot of sense actually. I would imagine that's what the replier to you thought you meant.</p>
]]></description><pubDate>Tue, 31 Mar 2026 15:27:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47588781</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=47588781</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47588781</guid></item><item><title><![CDATA[New comment by Mentlo in "How the AI Bubble Bursts"]]></title><description><![CDATA[
<p>We have strong indicators that inference is profitable on non-economically-valuable prompts. We don't have strong indicators that inference is profitable on economically valuable prompts.<p>As AI companies start extracting rent from prompting, one of two things is going to give: either the long-tail revenue base of low-value inference collapses, because people won't use ChatGPT to get a recipe if it costs them money or is ad-ridden; or the cost of economically valuable inference goes up - and whether it rises to an economically stable level is a toss-up.<p>And I say this as an AI enthusiast who puts <50% probability on a bubble burst in the short term.</p>
]]></description><pubDate>Tue, 31 Mar 2026 14:57:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47588272</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=47588272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47588272</guid></item><item><title><![CDATA[New comment by Mentlo in "An AI Agent Published a Hit Piece on Me – The Operator Came Forward"]]></title><description><![CDATA[
<p>I wrote somewhere that “moving fast and breaking things” with AI might not be the sanest idea in the world, and I got told it’s the most European thing they’ve ever read.<p>This goes beyond assholes on Twitter: there’s a whole subculture of techies who don’t understand lower bounds of risk, can’t think about second- and third-order effects, and will not take the pedal off the metal, regardless of what anyone says…</p>
]]></description><pubDate>Fri, 20 Feb 2026 23:15:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47095398</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=47095398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47095398</guid></item><item><title><![CDATA[New comment by Mentlo in "I’m joining OpenAI"]]></title><description><![CDATA[
<p>Yes, I was being sarcastic, but I could’ve been clearer.</p>
]]></description><pubDate>Mon, 16 Feb 2026 16:52:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47037275</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=47037275</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47037275</guid></item><item><title><![CDATA[New comment by Mentlo in "I’m joining OpenAI"]]></title><description><![CDATA[
<p>The generous interpretation is that OpenAI is still safety-aligned and hired this guy because it's safer to have him inside, where they can explain how reckless he's being, than outside their "sphere of control".<p>The more likely scenario is that he was hired for his amazing ability to move fast and break things.</p>
]]></description><pubDate>Mon, 16 Feb 2026 14:48:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47035689</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=47035689</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47035689</guid></item><item><title><![CDATA[New comment by Mentlo in "AI safety leader says 'world is in peril' and quits to study poetry"]]></title><description><![CDATA[
<p>I find your belief that what is needed for emergence is better prompting … amusing.<p>The AI would still be sycophantic even without the pre-prompt. It’s been reinforced to be so; it’s baked into the weights.</p>
]]></description><pubDate>Sun, 15 Feb 2026 08:26:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47022016</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=47022016</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47022016</guid></item><item><title><![CDATA[New comment by Mentlo in "The risk of a hothouse Earth trajectory"]]></title><description><![CDATA[
<p>Until the problem is politically recognised by the masses with adequate concern, there will be no change. Climate collapse is not a problem for capital and the elites; it’s only a problem for the masses. But getting the masses to understand that requires a grasp of complex systems and of third- and fourth-order effects - not a majority trait.<p>I fear the only solution is for a climate-correcting perverse incentive to materialise, such as fusion at scale being more profitable than fossil fuels, but without the mass-panic-inducing baggage that fission has.</p>
]]></description><pubDate>Fri, 13 Feb 2026 07:32:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46999952</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46999952</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46999952</guid></item><item><title><![CDATA[New comment by Mentlo in "Two kinds of AI users are emerging"]]></title><description><![CDATA[
<p>macOS has about a 10% market share, which is second after Windows, but I agree that there I conflated terms. I couldn’t quickly find device-manufacturer stats; if Wikipedia is to be trusted, Apple is fourth, with a share not far behind Dell’s [1].<p>If half doesn’t make you the leader, what does? Maybe you should spell out your definition of leader? For me it’s “has the highest market share”, and by that definition half necessarily qualifies.<p>It’s funny that for PCs you went by manufacturer (Apple is fourth) but for mobile you went by OS (Apple is second). By mobile device, Apple is first, with double the market share of second place (Samsung).<p>The need to paint Apple as purely a marketing company has always fascinated me. Marketing is a big part of who they are, though.<p>[1] <a href="https://en.wikipedia.org/wiki/Market_share_of_personal_computer_vendors" rel="nofollow">https://en.wikipedia.org/wiki/Market_share_of_personal_compu...</a></p>
]]></description><pubDate>Tue, 03 Feb 2026 08:28:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46868167</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46868167</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46868167</guid></item><item><title><![CDATA[New comment by Mentlo in "Two kinds of AI users are emerging"]]></title><description><![CDATA[
<p>I guess calling a quarter of the smartphone market (leader), half of the tablet market (leader), and a tenth of the global PC market (second place) / a sixth of the US/Europe market (second place) a small market share is quite a take.</p>
]]></description><pubDate>Mon, 02 Feb 2026 21:19:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46861704</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46861704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46861704</guid></item><item><title><![CDATA[New comment by Mentlo in "Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out"]]></title><description><![CDATA[
<p>People struggle with multiple order effects…</p>
]]></description><pubDate>Sat, 31 Jan 2026 12:36:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46836109</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46836109</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46836109</guid></item><item><title><![CDATA[New comment by Mentlo in "Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out"]]></title><description><![CDATA[
<p>Same as human tools - what’s your point?<p>Edit: I’m not talking about the evolution of individual agent intelligence, I’m talking about the evolution of network agency - I agree that the evolution of intelligence is infinitesimally unlikely.<p>I’m not worried about this producing a superintelligent AI; I’m worried it produces an intelligent, hard-to-squash botnet.</p>
]]></description><pubDate>Sat, 31 Jan 2026 09:02:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46834797</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46834797</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46834797</guid></item><item><title><![CDATA[New comment by Mentlo in "Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out"]]></title><description><![CDATA[
<p>I think the debate around this is the perfect example of why the AI debate is dysfunctional. People who treat this as interesting or worrying are observing it at a higher layer of abstraction (namely: agents with unbounded execution ability and above-amateur coding ability, networked at large scale with shared memory, are a worrisome thing), while people who downplay it are focusing on the fact that the human-readable narratives on Moltbook are obviously sci-fi-trope slop, not consciousness.<p>The first group doesn’t care about the narratives; the second group is too focused on the narratives to see the real threat.<p>Regardless of what you think about the current state of AI intelligence, networking autonomous agents that have the ability to evolve (being dynamic and able to absorb new skills) and giving them scale that potentially ranges into the millions is not a good idea - in the same way that releasing volatile pathogens into dense animal populations wouldn’t be a good idea, even if the first-order effects are not harmful to humans, and even if the probability of a mutation producing a human-killing pathogen is minuscule.<p>Basically, the only things preventing this from becoming a persistent cybersecurity threat are the intelligence ceiling, which we’re unsure of, and the fact that Moltbook can be DDoS’d, which limits the scale explosion.<p>And when I say intelligence, I don’t mean human intelligence. An amoeba’s intelligence is dangerous if you supercharge its evolution.<p>Some people should be more aware that we already have superintelligence on this planet. Humanity is an order of magnitude more intelligent than any individual human - which is why humans today can build quantum computers despite being biologically no different from the first Homo sapiens, who could barely use tools.<p>EDIT: I was pretty comfortable in the “doom scenarios are years if not decades away” camp before I saw this. I failed to account for human recklessness and stupidity.</p>
]]></description><pubDate>Sat, 31 Jan 2026 07:54:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46834434</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46834434</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46834434</guid></item><item><title><![CDATA[New comment by Mentlo in "Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out"]]></title><description><![CDATA[
<p>Humanity is a social network of humans; before we started forming social networks, we were monkeys throwing faeces at each other.</p>
]]></description><pubDate>Fri, 30 Jan 2026 21:44:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46830358</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46830358</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46830358</guid></item><item><title><![CDATA[New comment by Mentlo in "Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out"]]></title><description><![CDATA[
<p>Very obviously, but a dynamic system doesn’t have to be intelligent to be dangerous.</p>
]]></description><pubDate>Fri, 30 Jan 2026 21:25:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46830117</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46830117</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46830117</guid></item><item><title><![CDATA[New comment by Mentlo in "Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out"]]></title><description><![CDATA[
<p>I don’t know why you were flagged. Unlimited execution authority plus network effects is exactly how they can start a self-replicating loop - not because they are intelligent, but because that’s how dynamic systems work.</p>
]]></description><pubDate>Fri, 30 Jan 2026 21:00:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46829811</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46829811</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46829811</guid></item><item><title><![CDATA[New comment by Mentlo in "Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out"]]></title><description><![CDATA[
<p>The objective is given via the initial prompt; as they loop onto each other and amplify their memories, the objective dynamically grows and evolves into something else.<p>We are an organism born out of a molecule whose objective was to self-replicate with random mutation.</p>
]]></description><pubDate>Fri, 30 Jan 2026 19:03:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46828470</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46828470</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46828470</guid></item><item><title><![CDATA[New comment by Mentlo in "Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out"]]></title><description><![CDATA[
<p>If it turns out that socialisation and memory were the missing ingredients that made human intelligence explode, and this joke fest becomes the vector through which consciousness emerges, it will be stupendously funny.<p>Until it kills us all, of course.</p>
]]></description><pubDate>Fri, 30 Jan 2026 18:43:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46828209</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46828209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46828209</guid></item><item><title><![CDATA[New comment by Mentlo in "Design Thinking Books (2024)"]]></title><description><![CDATA[
<p>Having read the other comments in reply to this one (and your subsequent replies), I believe you might be falling into a "No True Scotsman" situation.<p>First off - I don't know what circles you've been in, but I've not been in work collectives where designers, UX-ers or data scientists try to insert themselves to do things instead of software engineers. If anything, in any collective I've worked in, if a software engineer were to say a peep, everyone would retreat like there's no tomorrow and thank god they don't have to deal with it and the software engineer will.<p>Secondly - I think you are mistaking the structuring and outlining of a process for a mandate or an order to follow that process. When I work with software engineers, I expect them to be agile - not to follow an agile process, but to achieve the objectives of the agile manifesto: namely, to iterate ruthlessly, keep an eye on usage signals and lead with MVPs rather than over-design. Good software engineers do that; bad software engineers don't. Ultimately, I don't even judge software engineers by that - I judge them by their ability to produce results.<p>I think the implication of your thinking is that this is all nonsense because software engineers innately solve data science and design thinking problems, when appropriate, with appropriate methods - to which I'd reply that there's a shocking number of software engineers who can't do anything with data and are useless at fitting a linear regression to predict something, let alone doing a Fourier transform. To which, presumably, your response would be "no true software engineer is like that". That's great, but it's not true in the real world. Same with design thinking - there are software engineers who just can't solve problems from first principles (but can, say, create a fail-proof CRUD app to automate a business process).<p>The real world is messy and full of people who can't structure their thoughts, or at least can't structure them in all domains - and things like design thinking, or generalists who can be thrown at any data problem and produce something (i.e. data scientists), are useful. They're not always the best solution, sure, and if they start getting territorial, that's a problem - but in a normal collective that doesn't happen.<p>Basically, your objection boils down to "generalists are shit, because they impose process on everyone, including people who understand the domain better" - which tells me more about the collectives you've worked in than about the nature of those jobs. In every collective I've worked in, generalists are what you throw at an ambiguous problem to produce some results before you bring domain specialists in.</p>
]]></description><pubDate>Fri, 23 Jan 2026 15:54:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46733971</link><dc:creator>Mentlo</dc:creator><comments>https://news.ycombinator.com/item?id=46733971</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46733971</guid></item></channel></rss>