<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: vanillameow</title><link>https://news.ycombinator.com/user?id=vanillameow</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 08:57:37 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=vanillameow" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by vanillameow in "Show HN: Relay – The open-source Claude Cowork for OpenClaw"]]></title><description><![CDATA[
<p>1. In reality most people simply do not do this, and frankly it's exhausting to be expected to always assume goodwill in a setting that is full of pure vanity.<p>2. There's a difference between technical documentation, which AI can be quite decent at, and product marketing. A README is usually about 20/80, maybe 50/50 for large FOSS projects. You can have the AI write the sections on how to install the thing for all I care, but as soon as AI is telling me <i>why</i> I should use it, you've lost me. Signals a complete lack of interest in your own product.</p>
]]></description><pubDate>Thu, 26 Mar 2026 11:01:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47528962</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47528962</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47528962</guid></item><item><title><![CDATA[New comment by vanillameow in "Show HN: Relay – The open-source Claude Cowork for OpenClaw"]]></title><description><![CDATA[
<p>Genuine question - your README is full of em-dashes, emojis, feature squares and ASCII diagrams - none of which are present in your pre-AI era projects.<p>Why do you expect a potential userbase to care to read something you didn't even care to write?<p>Seems a bit disrespectful to me.</p>
]]></description><pubDate>Thu, 26 Mar 2026 10:39:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47528790</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47528790</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47528790</guid></item><item><title><![CDATA[New comment by vanillameow in "Show HN: Email.md – Markdown to responsive, email-safe HTML"]]></title><description><![CDATA[
<p>If you need to author a lot of emails with an LLM, you should be rethinking your business strategy tbh</p>
]]></description><pubDate>Wed, 25 Mar 2026 08:02:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47514587</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47514587</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47514587</guid></item><item><title><![CDATA[New comment by vanillameow in "Show HN: Cq – Stack Overflow for AI coding agents"]]></title><description><![CDATA[
<p>I use mainly Opus 4.6.<p>I did the same thing and created a skill for summarizing a troubleshooting conversation. It works decently, as long as my own input in the troubleshooting is minimal, i.e. dangerously-skip-permissions. As soon as I need to take manual steps, or especially if the conversation is in Desktop/Web, it will very quickly degrade and just assume steps I've taken (e.g. if it gave me two options to fix something, and I come back saying it's fixed, it will in the summary just kind of randomly decide on a solution). It also generally doesn't consider the previous state of the system (e.g. what was already installed/configured/set up) when writing such a summary, which maybe makes it reusable for me, somewhat, but certainly not for others.<p>Now you could say, "these are all things you can prompt away", and, I mean, to an extent, probably. But once you're talking about taking something like this online, you're not working with the top 1% proompters. The average claude session is not the diligent little worker bee you'd want it to be. These models are still, at their core, chaos goblins. I think Moltbook showed that quite clearly.<p>I think having your model consider someone else's "fix" to your problem as a primary source is bad. Period. Maybe it won't be bad in 3 generations, when models can distinguish noise and nonsense from useful information, but they really can't right now.</p>
]]></description><pubDate>Tue, 24 Mar 2026 12:40:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47501769</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47501769</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47501769</guid></item><item><title><![CDATA[New comment by vanillameow in "Show HN: Cq – Stack Overflow for AI coding agents"]]></title><description><![CDATA[
<p>I'm surprised to see this getting so much positive reception. In my experience AI is still really bad with documenting the exact steps it took, much more so when those are dependent on its environment, and once there's a human in the loop at any point you can completely throw the idea out the window. The AI will just hallucinate intermediate steps that you may or may not have taken unless you spell out in exact detail every step you took.<p>People in general seem super obsessed with AI context, bordering on psychosis. Even setting aside obvious examples like Gas Town or OpenClaw or that tweet I saw the other day of someone putting their agents in scrum meetings (lol?), this is exactly the kind of vague LLM "half-truth" documentation that will cascade into errors down the line. In my experience, AI works best when the ONLY thing it has access to is GROUND TRUTH HUMAN VERIFIED documentation (and a bunch of shell tools obviously).<p>Nevertheless it'll be interesting to see how this turns out, prompt injection vectors and all. Hope this doesn't have an admin API key in the frontend like Moltbook.</p>
]]></description><pubDate>Tue, 24 Mar 2026 07:45:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47499661</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47499661</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47499661</guid></item><item><title><![CDATA[New comment by vanillameow in "Push events into a running session with channels"]]></title><description><![CDATA[
<p>I am not sure how I feel about all these hype-driven tools, honestly, especially considering how janky they are, probably because they were rushed out with Claude Code.<p>It reminds me that I don't really like Anthropic as a company, I just like Claude as a model a lot. It just feels more capable and personable than the others. I wonder if/when OpenAI et al. will be able to replicate it.<p>For now, I basically have no choice but to use the walled garden, but I do hope Anthropic is not completely compromising their core mission of actually making the model better rather than following these public bandwagons.<p>Then again, most of these probably take them like a day to develop through a junior dev talking to Claude Opus 5 or some shit lol (and to be fair, it shows). I don't know.</p>
]]></description><pubDate>Fri, 20 Mar 2026 07:53:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47451720</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47451720</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47451720</guid></item><item><title><![CDATA[New comment by vanillameow in "What 81,000 people want from AI"]]></title><description><![CDATA[
<p>Not incorrect, but it honestly borders on grifting a lot of the time imo. At least it's a spectrum. If you are supercharging your existing technical and domain knowledge, and actually caring about the security of your customers while doing so, fair play. That is real entrepreneurship.<p>Then there's people who are "well intentioned", I guess, but lack the technical knowledge. A friend of a friend with no technical background is selling companies websites that he writes with Claude. They look shiny, everyone's happy in the short run, but I don't doubt issues will come up down the line that someone will have to be responsible for. I'd personally feel like I was ripping people off doing this, but I think Dunning-Kruger also prevents you from knowing any better if you are the type of person doing this.<p>Then there's the whole B2B SaaS gang that are basically just producing vaporware and telling other people how to produce more vaporware. This is really no different from crypto, NFTs etc. before it. Just people trying to hustle others.<p>And then there's the whole clawdbot gang, probably burning more in tokens every day than normal people use in a month so they can sort 18 e-mails.<p>So yeah, I mean, you're right, there certainly is a subset of people who are using this ethically (as ethically as you can use LLMs, but that's another story) to make some money on the side. Certainly not the majority though, I'd say.</p>
]]></description><pubDate>Thu, 19 Mar 2026 14:27:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47440129</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47440129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47440129</guid></item><item><title><![CDATA[New comment by vanillameow in "What 81,000 people want from AI"]]></title><description><![CDATA[
<p>> Also, services based SAAS especially B2B will not die, because a tyre shop won't have the time to write/debug/host it's own solution and will not want to depend to a single contractor who can disappear for a vacation.<p>True in the current state of LLMs, possibly not true forever if someone finds the magic bullet that turns the dream of one-shotting (reliable) software, which companies like Anthropic and Perplexity currently peddle, into reality. Seems far-fetched ATM, but the gains since GPT-2 have been very real.<p>We're quite a ways away from this though, even with Opus 4.6 and the like. And even further from it being part of Claude Code rather than some proprietary $1000/mo. closed-source solution.<p>As you say though, _if_ such a technology were to exist, it's Anthropic that holds all the cards, not random entrepreneur #25721 who is asking the Anthropic API the same thing that the actual customer could just be asking directly. At that point you're an undesirable middleman, not a business.</p>
]]></description><pubDate>Thu, 19 Mar 2026 11:18:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47437537</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47437537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47437537</guid></item><item><title><![CDATA[New comment by vanillameow in "Translate Garry Tan's LinkedIn-speak to plain English"]]></title><description><![CDATA[
<p>...what is your actual point? I'm pretty sure none of the shit I read on LinkedIn is making "philosophical ideas accessible to the masses", it's churned out 20x regurgitated self-promotional material.<p>Is this a bot post?</p>
]]></description><pubDate>Thu, 19 Mar 2026 09:12:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47436712</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47436712</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47436712</guid></item><item><title><![CDATA[New comment by vanillameow in "Translate Garry Tan's LinkedIn-speak to plain English"]]></title><description><![CDATA[
<p>So I've seen. It's just the LinkedIn one is what they advertised. Speaks to the fact that it's probably some slopcoded thing, which I'd usually get mildly upset about but who can muster the effort in this economy. I think the point still stands though.</p>
]]></description><pubDate>Thu, 19 Mar 2026 09:09:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47436692</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47436692</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47436692</guid></item><item><title><![CDATA[New comment by vanillameow in "Translate Garry Tan's LinkedIn-speak to plain English"]]></title><description><![CDATA[
<p>This LinkedIn translator was a genius move by Kagi, honestly. A lot of people are incredibly tired not only of AI(-adjacent) writing, but overall of people with a stick up their ass thinking they're this generation's Aristotle.<p>Having their slop written out in plain English really shows you how vain it all is.</p>
]]></description><pubDate>Thu, 19 Mar 2026 09:03:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47436657</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47436657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47436657</guid></item><item><title><![CDATA[New comment by vanillameow in "What 81,000 people want from AI"]]></title><description><![CDATA[
<p>I can't help but feel a little bit of ... pity for a lot of the people who call themselves "entrepreneurs" in this survey?<p>"I live hand to mouth, zero savings. If I use AI smarter, it may help me craft solutions to that cycle."<p>"Relaxing while my AI gets the work done, builds the wealth. It’s a shadow of me, just a very, very long one."<p>etc. I do believe AI currently accelerates businesses, especially in software dev. We work with a contractor who uses Claude Code to reach an incredible development pace for the size of their team, but also, when we sit down with them in meetings, they understand what's being created, they are able to argue their architectural choices, and they know how to propose business value.<p>You can't just buy a Claude subscription and have it magically solve your problems. The thing is, as soon as Claude can do this without a business-savvy human in the loop, then
a) everyone can do it, so you won't actually have any value to propose, and
b) you can bet your ass the AI companies will not, out of the goodness of their hearts, keep giving that ability away for $20.<p>In summary, AI, if used to accelerate a business, _CAN_ be good. Buying it as a magic bullet to bring you out of poverty is probably a worse choice than just buying a lottery ticket.</p>
]]></description><pubDate>Thu, 19 Mar 2026 07:59:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47436241</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47436241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47436241</guid></item><item><title><![CDATA[New comment by vanillameow in "How we hacked McKinsey's AI platform"]]></title><description><![CDATA[
<p>I've picked up reading again over the last year or so! Maybe, if anything, that is <i>why</i> I feel so angry. Writing and reading are how we communicate thoughts and ideas between people, humans, at scale. A grand fantasy novel evokes a thirst for adventure, a romance evokes a yearning for true love.<p>What makes me angry is using the feelings we associate with this process to disingenuously <i>pretend</i> that there is a human who wants to tell me something, just for it to be generated drivel.<p>Don't get me wrong, I don't mind reading AI content, but it should read like this: "Our AI agent 'hacked' (found unexposed API endpoints) x or y company, we asked it to summarize and here's what it said:" - now I know I am about to read generated content, and I can decide myself if I want to engage with it or not. Do you ever notice how nobody who uses AI writing does this? If using AI to produce creative media, including art, music, videos, and writing, is so innocuous, why do all the "AI creatives" so desperately want to hide it from you? Because they don't want you to know that it's generated. Their literal goal is to pretend to have a deeper understanding, a better outlook, on a given topic than they actually have. I think it is sad for them to feel the need to do this, and sad for me to have to spend my limited lifespan discerning it. That is why I am angry.<p>Anyway, there's no need to "closely parse each sentence construction" at all to identify that this post is fully AI generated. It's about as clear as they come. If you have trouble identifying that, well, in the short term you're probably at a disadvantage. In the long term, if AI does ever become able to fully mimic human expression, it won't matter anyway, I guess.<p>ps: FWIW, I agree with you that of all places, some random AI company with an AI generated website reporting on their AI pentesting with AI is the least surprising thing - the entire company is slop, and it's very easy to see that.
My initial post was directed more at the dozens of posts I've read from personal blogs in recent weeks where I had to carefully decide if someone's writing, published under their own name, actually contains original thought or not.</p>
]]></description><pubDate>Thu, 12 Mar 2026 07:50:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47347709</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47347709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47347709</guid></item><item><title><![CDATA[New comment by vanillameow in "How we hacked McKinsey's AI platform"]]></title><description><![CDATA[
<p>Can we stop softening the blow? This isn't "drafted with at least major AI help", it's just straight up AI slop writing. Let's call a spade a spade. I have yet to meet anyone claiming they "write with AI help but thoughts are my own" who had anything interesting to say. I don't particularly agree with a lot of Simon Willison's posts, but his proofreading prompt should pretty much be the line on what constitutes acceptable AI use for writing.<p><a href="https://simonwillison.net/guides/agentic-engineering-patterns/prompts/#proofreader" rel="nofollow">https://simonwillison.net/guides/agentic-engineering-pattern...</a><p>Grammar check, typo check, it calls you out on factual mistakes and missing links, and that's it. I've used this prompt once or twice for my own blog posts and it does just what you expect. You just don't end up with writing like this post by having AI "assistance" - you end up with this type of post by asking Claude, probably the same Claude that found the vulnerability to begin with, to make the whole ass blog post. No human thought went into this. If it did, I strongly urge the authors to change their writing style asap.<p>"So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream."<p>Give me a fucking break</p>
]]></description><pubDate>Wed, 11 Mar 2026 15:09:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47336637</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47336637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47336637</guid></item><item><title><![CDATA[New comment by vanillameow in "How we hacked McKinsey's AI platform"]]></title><description><![CDATA[
<p>Tiring. The internet in 2026 is LLMs reporting on LLMs pen-testing LLM-generated software.</p>
]]></description><pubDate>Wed, 11 Mar 2026 14:12:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47335821</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47335821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47335821</guid></item><item><title><![CDATA[New comment by vanillameow in "Meta acquires Moltbook"]]></title><description><![CDATA[
<p>> If the primary appeal of your VR universe is that your avatar can be an anthropomorphic banana, an anime girl, a furry, a giant penis with legs - that's never going to become a 300-million-user platform.<p>I mean, the inherent appeal of VR is self-expression; being who you want to be, seeing the worlds you want to see. You won't get 300 million users with corporate slop either. That maybe works once VR headsets become an interface suitable for white-collar work, if they ever do, which they currently very much aren't, and then it wouldn't be the next Facebook - it'd be the next Microsoft Teams. Which is not really in line with Meta's other offerings, though they certainly wouldn't say no to it, I guess. But I think a 500-user survey is all it would take to get a very clear signal that current VR is NOT about to replace Teams.</p>
]]></description><pubDate>Wed, 11 Mar 2026 11:15:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47334130</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47334130</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47334130</guid></item><item><title><![CDATA[New comment by vanillameow in "RFC 454545 – Human Em Dash Standard"]]></title><description><![CDATA[
<p>>Is there realistically any way to CAPTCHA anymore?<p>Talking to someone in real life.<p>Of course that's taking the piss, but realistically that is sort of the answer: having access to a side-channel through which you can identify that a person is in fact a person, coupled with the trust that they won't try to slight you by making you waste your attention. Honestly, with how much LLM drivel is being generated and the sort-of renaissance of the personal website, blog, and RSS, I wouldn't be surprised if some kind of consensus-trust-based network for human authors were established.</p>
]]></description><pubDate>Wed, 11 Mar 2026 09:46:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47333544</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47333544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47333544</guid></item><item><title><![CDATA[New comment by vanillameow in "The death of social media is the renaissance of RSS (2025)"]]></title><description><![CDATA[
<p>Considering the topic of this article I'm giving you the benefit of the doubt, but to be honest - if you're not writing your articles with LLMs, you should strongly consider changing your writing style. I peeked at some of your other articles, like the one about half your readers being bots, and it reads straight out of ChatGPT. Given your framing in this article, I trust you know that's not a good thing.</p>
]]></description><pubDate>Mon, 09 Mar 2026 13:59:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47309147</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47309147</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47309147</guid></item><item><title><![CDATA[New comment by vanillameow in "Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies’"]]></title><description><![CDATA[
<p>Regardless, I think if you are looking at it purely from a ruthless business standpoint, then standing up to the DoD was an incredibly ill-advised move. It's basically free financial and technological backing at the cost of ethics. Additionally, basically everyone with functioning eyeballs knows that the current US administration is incredibly vindictive, reckless and short-tempered.<p>I would agree that under a tamer administration, you might do something like this as a publicity stunt. In the Trump administration, and while the AI arms race is still in full force, it feels like there has to be at least somewhat genuine sentiment behind it, otherwise it just doesn't really make sense. Like, what do they accomplish from this? You'll get some users who will view you more favourably for it, but it probably won't make up for the lost revenue, and no matter how many people like you, if you are first to AGI in this industry, you win. The prior sentiment basically won't matter at that point.<p>In the most critical interpretation, I guess you could say that if the bubble pops it might be more of a matter of sentiment. I don't know; in my mind the math just doesn't work for it to be a business move.</p>
]]></description><pubDate>Thu, 05 Mar 2026 09:19:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47259475</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47259475</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47259475</guid></item><item><title><![CDATA[New comment by vanillameow in "Chaos and Dystopian news for the dead internet survivors"]]></title><description><![CDATA[
<p>This is a project where I actually kind of like the idea, but the implementation looks incredibly soulless.</p>
]]></description><pubDate>Thu, 05 Mar 2026 08:09:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47258985</link><dc:creator>vanillameow</dc:creator><comments>https://news.ycombinator.com/item?id=47258985</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47258985</guid></item></channel></rss>