<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: RIMR</title><link>https://news.ycombinator.com/user?id=RIMR</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 10 Apr 2026 04:25:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=RIMR" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by RIMR in "Why LLM-Generated Passwords Are Dangerously Insecure"]]></title><description><![CDATA[
<p>No, but if those VCs let their AI agents purchase things on their behalf, you could maybe trick those agents into thinking your cloud service was the better option.</p>
]]></description><pubDate>Sat, 04 Apr 2026 19:59:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642767</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47642767</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642767</guid></item><item><title><![CDATA[New comment by RIMR in "Why LLM-Generated Passwords Are Dangerously Insecure"]]></title><description><![CDATA[
<p>I urge you to actually read the article, because it doesn't say anything about the risks of the LLM knowing your password (e.g., stored in server-side logs); it talks about LLMs generating predictable passwords because they are deterministic pattern-following machines.<p>While the loss of secrecy between you and the LLM provider is a legitimate risk, the point of the article was that you should only use vetted RNGs to generate passwords, because LLMs will frequently generate identical secure-looking passwords when asked to do so repeatedly, meaning that all a bad actor has to do is collect the most frequent ones and go hunting.<p>The loss of secrecy between you and the LLM only poses a risk if the LLM logs are compromised, exposing your generated passwords. The harvesting of commonly generated passwords poses a much broader attack surface for anyone who uses this method, because any attacker with access to publicly available LLMs can start mining commonly generated passwords and trying them today, without having to compromise anything first.</p>
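<p>For reference, a minimal sketch of the vetted-RNG approach, using Python's standard secrets module (the alphabet and length here are just illustrative choices, not something the article prescribes):</p><pre><code># Draw a password from the OS CSPRNG instead of an LLM.
# secrets uses os.urandom under the hood, so repeated calls
# are independent and not reproducible by an attacker.
import secrets
import string

def generate_password(length=20):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
</code></pre>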
]]></description><pubDate>Sat, 04 Apr 2026 19:55:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642722</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47642722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642722</guid></item><item><title><![CDATA[New comment by RIMR in "Why LLM-Generated Passwords Are Dangerously Insecure"]]></title><description><![CDATA[
<p>I mean, people are still rotating &lt;month&gt;&lt;year&gt; passwords because they refuse to remember anything. I only know this because I am in a customer-facing position, and these customers rarely care about revealing their passwords when they need help...</p>
]]></description><pubDate>Sat, 04 Apr 2026 19:45:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642615</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47642615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642615</guid></item><item><title><![CDATA[New comment by RIMR in "Why LLM-Generated Passwords Are Dangerously Insecure"]]></title><description><![CDATA[
<p>That is interesting data. Just from looking at those graphs, it looks like AIs consistently avoid the number 69, likely because of safeguards to prevent them from being offensive. Otherwise their training would probably tell them that it was a really nice number.</p>
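<p>If anyone wants to reproduce this kind of data themselves, the counting side is trivial; a rough sketch, where ask_llm is a placeholder for whatever model API you query (stubbed here so the snippet runs as-is):</p><pre><code># Tally how often a model repeats the same "random" answer.
from collections import Counter

def ask_llm(prompt):
    # Placeholder: swap in a real API call. The canned value
    # below just keeps this snippet self-contained.
    return "42"

samples = [ask_llm("Pick a random number from 1 to 100") for _ in range(1000)]
for value, count in Counter(samples).most_common(5):
    print(value, count)
</code></pre>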
]]></description><pubDate>Sat, 04 Apr 2026 19:43:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642598</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47642598</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642598</guid></item><item><title><![CDATA[New comment by RIMR in "Show HN: sllm – Split a GPU node with other developers, unlimited tokens"]]></title><description><![CDATA[
<p>I read the FAQ, and I can't imagine this is going to work the way you want it to. It fundamentally doesn't make sense as a business model.<p>I can sign up for a cohort today, but there's not even a hint of how long it will take the cohort to fill up. The most subscribed cohort is only at 42% (and dropping), so maybe days to weeks? That's a long time to wait if you have a use case to satisfy.<p>And then the cohort expires, and I have to sign up for another one and play the waiting game again? Nobody wants that level of unreliability.<p>Also, don't say "15-25 tok/s". That reads as a range, but your FAQ says it is actually a maximum. It makes no sense to state a maximum as a range, and you state no minimum, so I can only assume that it is 0 tok/s. If all users in the cohort use it simultaneously, the best they're getting is something like 1.5 tok/s (probably less), which is abysmal.<p>You mention "optimization", but I have no idea what that means. It certainly doesn't mean imposing token limits, because your FAQ says that won't happen. If more than 25 users are using the cohort simultaneously, it is a physical impossibility to improve performance to the levels you advertise without sacrificing something else: switching to a smaller model, which would essentially be fraud, or adding more GPUs, which will bankrupt you at these margins. With 465 users per cohort, a large chunk of whom will be running tools like OpenClaw, nobody will ever see the performance you are offering.<p>The issue here is that you are trying to offer affordable AI GPU nodes without operating at a loss. The entire AI industry is operating at a loss right now because of how expensive this all is. This strategy won't work right now unless you start courting VCs to invest tens to hundreds of millions of dollars so you can get this off the ground by operating at a loss until you hopefully turn a profit someday. But by that point, developers will probably be able to run these models at home without your help.</p>
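<p>To show my work on that 1.5 tok/s figure, here is the back-of-envelope arithmetic; the aggregate throughput is my own assumption (batched inference serves concurrent users faster than a single stream, but nowhere near linearly):</p><pre><code># Per-user throughput if an aggregate token budget is split
# evenly across simultaneously active users. The aggregate
# figure is an assumption for illustration only.
aggregate_tok_per_s = 700  # assumed total node throughput with batching
for active_users in (25, 100, 465):
    per_user = aggregate_tok_per_s / active_users
    print(f"{active_users} active users: {per_user:.2f} tok/s each")
</code></pre>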
]]></description><pubDate>Sat, 04 Apr 2026 19:01:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642143</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47642143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642143</guid></item><item><title><![CDATA[New comment by RIMR in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p>Anthropic isn't "fighting" OpenClaw. They just want OpenClaw users to switch to API pricing so that their service doesn't become a black hole for investor money. Operating at a loss can be strategic, but they had to carefully consider the ratio of casual users to power users to keep that loss steady and sustainable.<p>Power users always cost these services more than they pay, and OpenClaw turns every user into a power user. A recalculation was rational.</p>
]]></description><pubDate>Sat, 04 Apr 2026 15:57:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47640170</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47640170</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47640170</guid></item><item><title><![CDATA[New comment by RIMR in "Delve removed from Y Combinator"]]></title><description><![CDATA[
<p>>it's difficult for me to understand the outrage.<p>It's pretty simple. Compliance is legally important, and faking compliance exposes companies to extraordinary legal liability. Being lied to about your compliance warrants outrage.<p>>SOC2 is basically fake<p>This isn't true, but if it were, it would justify outrage in its own right.</p>
]]></description><pubDate>Sat, 04 Apr 2026 15:56:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47640169</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47640169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47640169</guid></item><item><title><![CDATA[New comment by RIMR in "Author of "Careless People" banned from saying anything negative about Meta"]]></title><description><![CDATA[
<p>YC is run by these kinds of careless people, so in a literal sense it probably isn't allowed, but we should say it anyway:<p>FUCK META</p>
]]></description><pubDate>Sat, 04 Apr 2026 15:52:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47640122</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47640122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47640122</guid></item><item><title><![CDATA[New comment by RIMR in "Author of "Careless People" banned from saying anything negative about Meta"]]></title><description><![CDATA[
<p>It should not be legal to enforce this kind of thing 9 years after a person leaves your company. I get that it currently is legal, but have some principles. Just because this is legal doesn't mean it isn't morally reprehensible, and its legality should be challenged.</p>
]]></description><pubDate>Sat, 04 Apr 2026 15:50:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47640098</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47640098</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47640098</guid></item><item><title><![CDATA[New comment by RIMR in "Author of "Careless People" banned from saying anything negative about Meta"]]></title><description><![CDATA[
<p>I mean, a reasonable non-disparagement clause for your current employees makes sense. You don't want your employees actively undermining the company in public. If they don't believe in what you're doing, they should be able to quit and say whatever they want. It should end immediately when your employment ends. It should be illegal to make it compulsory in severance packages, as many companies do.<p>And there need to be serious regulations about how these agreements can be used, and those regulations should protect whistleblowers at all costs. Like a public figure suing for libel/slander/defamation, the burden of proving statements false should rest entirely with the company.</p>
]]></description><pubDate>Sat, 04 Apr 2026 15:47:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47640067</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47640067</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47640067</guid></item><item><title><![CDATA[New comment by RIMR in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p>You are still misunderstanding.<p>If you max out your token limits, you are costing Anthropic more than you are paying them. They only expect a small percentage of their users to do this, but OpenClaw changed the dynamic.<p>Anthropic knows that they will lose more users by lowering limits than they will by blocking OpenClaw, because OpenClaw users will overwhelmingly switch to API pricing, while chatbot users will leave for competitors with higher limits.<p>They are a business. They hope to become profitable. This was the correct move.</p>
]]></description><pubDate>Sat, 04 Apr 2026 09:59:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47637618</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47637618</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47637618</guid></item><item><title><![CDATA[New comment by RIMR in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p>I didn't even realize you could connect a standard subscription to OpenClaw in the first place. It seems like you would run into limits rather quickly, which would degrade the experience quite badly.<p>Anthropic's current business model is to sell access to their tools to subscribers at a loss. Users maxing out their $200/month plan can realistically cost Anthropic $500-600 in actual compute costs.<p>Anthropic is okay with this right now because they want to amass as many users as they can, hoping that GPUs will increase in power and efficiency and that their LLMs will become more efficient as well. If that comes to fruition, they can eventually profit off their current pricing, or with modest price increases.<p>But letting OpenClaw wake up every 30 minutes and start sending requests is a surefire way to max out your weekly limits, and that certainly isn't something Anthropic planned for.</p>
]]></description><pubDate>Sat, 04 Apr 2026 09:56:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47637600</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47637600</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47637600</guid></item><item><title><![CDATA[New comment by RIMR in "OpenClaw privilege escalation vulnerability"]]></title><description><![CDATA[
<p>I give it monumental tasks. For example, I will write massive markdown files describing all the features I want to see in an application, and I will use a standard AI chatbot to check my work and consider additional details. Finally, when I have everything written down, I upload it to OpenClaw and tell the agent to make it happen.<p>Sometimes it toils away for 2+ hours, spawning Claude Code instances, checking its work, testing the code, even using browser automation to make sure everything works the way it is supposed to if it's writing a webapp.<p>In the end, it consumes like $10-20 worth of tokens and spits out a functional application with everything I asked for.<p>Claude Code can do this on its own, to an extent, but there's something about getting OpenClaw to iterate through multiple sessions and test everything to make sure it works the way I described that I really like. It completely offloads the process to the AI and keeps me mostly out of the loop.<p>Is the code any good? Probably not. Am I at risk of being exploited by malware? Probably. But I have automated quite a lot of things with the software that OpenClaw builds for me, and I am careful to review the libraries it imports before running the code on any machine with access to anything I actually care about.<p>Personally, I think anyone using OpenClaw for the "it reads your emails" use case is crazy, because prompt injection is real, and you're basically inviting anyone who knows your email address to take a stab at pwning you, with full access to your personal life. I keep my instances on a VPS, behind a restrictive security group, and only accessible via Tailscale, where it has zero access to anything on my tailnet. I only recently gave it its own email account (not mine!), but even then I am skeptical of doing so, and I take steps to prevent it from taking action on any email it receives (e.g., disabling the Heartbeat), because who knows what it'll end up doing. I mostly like that it can email me if I ask it to.</p>
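<p>For the "review the libraries it imports" step, this is roughly how I'd mechanically list every import in a generated file before running it (assuming the generated code is Python; pass the file path as an argument):</p><pre><code># List every module imported by a generated Python file so
# it can be eyeballed before the code is ever executed.
import ast
import sys

def list_imports(path):
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module)
    return sorted(modules)

for module in list_imports(sys.argv[1]):
    print(module)
</code></pre>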
]]></description><pubDate>Sat, 04 Apr 2026 09:48:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47637551</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47637551</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47637551</guid></item><item><title><![CDATA[New comment by RIMR in "OpenClaw privilege escalation vulnerability"]]></title><description><![CDATA[
<p>Just a heads up that everyone can still see the comment you made on your profile, because it wasn't removed by moderator action. It was downvoted to oblivion because it was an attack on another user for using AI.<p>That user said that they use OpenClaw to scrape city meetings for context so that they can more efficiently participate in local politics. You then attacked them, accusing them of "leaving AI slop comments on public city meetings", which isn't what they said they were doing at all.<p>I see absolutely no problem in using AI to summarize large quantities of information (such as a collection of city meeting notes). Summarization is one of the places that AI really shines right now, and if it helps people wrap their heads around what is happening in their communities, good!<p>I understand a healthy skepticism of AI. Everyone should have some degree of that. But maybe avoid the urge to publicly shame people for their use of AI, especially on a site like this, where that won't be received well. Or, if you're going to offer criticism, show some tact.</p>
]]></description><pubDate>Sat, 04 Apr 2026 09:35:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47637461</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47637461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47637461</guid></item><item><title><![CDATA[New comment by RIMR in "OpenClaw privilege escalation vulnerability"]]></title><description><![CDATA[
<p>Can you point to any reputable reports or specific commits that suggest that these companies are trying to plant malware in OpenClaw?<p>Or did you just see "China" and decide it must be malicious?<p>(This is a rhetorical question; I already know it's the latter.)</p>
]]></description><pubDate>Sat, 04 Apr 2026 09:25:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47637402</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47637402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47637402</guid></item><item><title><![CDATA[New comment by RIMR in "LinkedIn is searching your browser extensions"]]></title><description><![CDATA[
<p>I'm confused: you call this "misleading" and then quote the claim, but say it's "what [you'd] expect to find in modern browser fingerprinting code".<p>So which is it? Misleading, or exactly what you expected to find? It cannot be both.<p>It sounds more like you object to the negative framing of Microsoft hoovering up as much data as possible for profit, even though this is objectively a crime in the jurisdictions they are being sued in.</p>
]]></description><pubDate>Fri, 03 Apr 2026 11:22:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47625405</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47625405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47625405</guid></item><item><title><![CDATA[New comment by RIMR in "Car Seats as Contraception"]]></title><description><![CDATA[
<p>It is also the number at which your reproduction exceeds merely replacing yourself and your partner. Leaving the world with more people in it is very important to some parents.</p>
]]></description><pubDate>Mon, 30 Mar 2026 21:41:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47580060</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47580060</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47580060</guid></item><item><title><![CDATA[New comment by RIMR in "Cyber.mil serving file downloads using TLS certificate which expired 3 days ago"]]></title><description><![CDATA[
<p>I bet some guy with a ton of badges on his suit is asking that exact question in some Pentagon boardroom right now.</p>
]]></description><pubDate>Mon, 23 Mar 2026 20:34:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47494722</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47494722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47494722</guid></item><item><title><![CDATA[New comment by RIMR in "Cyber.mil serving file downloads using TLS certificate which expired 3 days ago"]]></title><description><![CDATA[
<p>Oh wow, they really are telling people to bypass the cert warning! It's a shame that the average layperson won't understand how breathtakingly stupid this is, because more people need to be paying attention to the staggering incompetence of the US military under this administration.</p>
]]></description><pubDate>Mon, 23 Mar 2026 20:31:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47494697</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47494697</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47494697</guid></item><item><title><![CDATA[New comment by RIMR in "Cyber.mil serving file downloads using TLS certificate which expired 3 days ago"]]></title><description><![CDATA[
<p>Look, when I forget to renew the cert on my Jellyfin server, like 4 people suffer.<p>When the DoD forgets to renew the cert for their cybersecurity download website AND can't figure out what a TLS cert even is (calling it a "TSSL Certification"), it's an indicator that our military has absolutely zero understanding of the most basic cybersecurity concepts.<p>If you can't tell the difference between a hobbyist forgetting to renew their Let's Encrypt cert and a trillion-dollar military not even knowing what a certificate is, maybe you should work for our military, because they can't tell the difference either.</p>
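<p>The maddening part is that staying ahead of expiry takes almost nothing; here is a sketch of a check you could cron, using only Python's standard library (the host is a placeholder, and it has to run before the cert actually expires, since a failed handshake raises instead):</p><pre><code># Print when a host's TLS certificate expires so a cron job
# can warn well before renewal is overdue.
import socket
import ssl
from datetime import datetime, timezone

def cert_expiry(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    return datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)

expiry = cert_expiry("example.com")
print("expires:", expiry, "| time left:", expiry - datetime.now(timezone.utc))
</code></pre>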
]]></description><pubDate>Mon, 23 Mar 2026 20:27:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47494649</link><dc:creator>RIMR</dc:creator><comments>https://news.ycombinator.com/item?id=47494649</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47494649</guid></item></channel></rss>