<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: yowlingcat</title><link>https://news.ycombinator.com/user?id=yowlingcat</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 14 May 2026 14:39:49 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=yowlingcat" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by yowlingcat in "Claude for Small Business"]]></title><description><![CDATA[
<p>I've been really enjoying Claude design, but my biggest critique of it (and frankly of how vanilla Claude handles files in general) is that it has no native conception of git-like version control. In code land you can work around this with harnesses, so there's only so much harm Claude Code/OpenCode can do, but to your point, in small-biz land, when it's putzing around with a system of record without rewindability, things could get really messy really fast.<p>A couple more thoughts here - the hard part is not just the data side of it, it's replaying/unwinding actions. Many actions are irreversible. Code is clean in the same way that Google Docs is clean. But for many business processes, some actions just can't be unwound once started. If Claude initiates a wire that it shouldn't, no amount of git technology will undo that wire.</p>
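<p>To make "rewindability" concrete, here is a minimal sketch (all names hypothetical, plain file copies standing in for real git) of the split I mean: file edits get a checkpoint and can be rolled back, while actions on the irreversible list are refused outright rather than attempted:</p>

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical action names that can never be unwound once fired.
IRREVERSIBLE = {"send_wire", "send_email"}

def snapshot(workdir: Path) -> Path:
    """Copy the working tree aside so any file edit can be rewound."""
    dest = Path(tempfile.mkdtemp()) / "snap"
    shutil.copytree(workdir, dest)
    return dest

def rollback(workdir: Path, snap: Path) -> None:
    """Restore the tree to the snapshot, discarding the agent's edits."""
    shutil.rmtree(workdir)
    shutil.copytree(snap, workdir)

def run_action(name: str, apply, workdir: Path):
    """Checkpoint before reversible work; refuse irreversible actions."""
    if name in IRREVERSIBLE:
        raise PermissionError(f"{name} cannot be rewound; needs human approval")
    snap = snapshot(workdir)
    try:
        return apply(workdir)
    except Exception:
        rollback(workdir, snap)  # rewind the system of record
        raise
```

<p>The point of the sketch is that the two failure modes need different machinery: snapshots fix the data side, but the wire only stays safe because it was gated, not because anything could undo it.</p>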
]]></description><pubDate>Thu, 14 May 2026 05:07:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=48131302</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=48131302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48131302</guid></item><item><title><![CDATA[New comment by yowlingcat in "An AI agent deleted our production database. The agent's confession is below"]]></title><description><![CDATA[
<p>It's been a very strange realization to have with AI lately (which you have reminded me of), because the same thing works with humans. Not the killing part, at least, but the honeypot and the jailing/restricting-access part.<p>Probably because telling someone not to do something works for the 99% of the time they weren't going to do it anyway. But telling somebody "here's how to do something" and seeing them have the judgment not to do it gives you information right away, as does them actually taking the honeypot. At the heart of it, delayed catastrophic implosions are much worse than fast, guarded, recoverable failures. At the end of the day, I suppose that's been part of lean startup methodology forever -- it's just always easy in theory and tricky in practice.</p>
]]></description><pubDate>Mon, 27 Apr 2026 05:59:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47918187</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47918187</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47918187</guid></item><item><title><![CDATA[New comment by yowlingcat in "SpaceX says it has agreement to acquire Cursor for $60B"]]></title><description><![CDATA[
<p>401k rollovers into an IRA aren't that hard these days, and you could always use that IRA to run a more customized strategy, specifically direct indexing of a major fund minus the key ticker symbols you don't want exposure to. Of course, that all presumes you won't regret excluding this long term.</p>
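<p>For illustration, the "minus key tickers" step is just dropping the exclusions and rescaling what's left so the weights still sum to 1 (tickers and weights below are made up, not any real index):</p>

```python
def exclude_and_renormalize(weights, excluded):
    """Drop excluded tickers from an index weight map and rescale
    the remaining weights so they sum back to 1.0."""
    kept = {t: w for t, w in weights.items() if t not in excluded}
    total = sum(kept.values())
    return {t: w / total for t, w in kept.items()}

# Hypothetical three-stock "index", excluding one name:
index = {"AAA": 0.5, "BBB": 0.3, "CCC": 0.2}
custom = exclude_and_renormalize(index, {"CCC"})  # AAA 0.625, BBB 0.375
```

<p>Real direct-indexing platforms add tracking-error constraints on top of this, but the core mechanic is that renormalization.</p>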
]]></description><pubDate>Wed, 22 Apr 2026 02:59:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47858332</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47858332</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47858332</guid></item><item><title><![CDATA[New comment by yowlingcat in "Iceye Open Data"]]></title><description><![CDATA[
<p>I was thinking more of a Breaking Bad arc. Pulling off a decent Heisenberg would be a lot easier than some of the other stuff he's done.</p>
]]></description><pubDate>Fri, 17 Apr 2026 18:03:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47808740</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47808740</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47808740</guid></item><item><title><![CDATA[New comment by yowlingcat in "We gave an AI a 3 year retail lease and asked it to make a profit"]]></title><description><![CDATA[
<p>We can fault them individually for such corny and groan-inducing deceit, but we can't fault them for society's role in rewarding the highest-profile and wealthiest founders (OAI/Anthropic) for taking the exact same approach with optics.<p>I am about to go on a long rant, but there is so much money sloshing around the capital-allocation machine toward a vision of the AI-managed-and-optimized future that the propaganda machine for these rose-colored delusions must work overtime. What disappoints me is the question of where the heck the bears are. Did they all go into hibernation 5 years ago when QE gave the retail kindergartener a handgun to pump low-quality tickers to the moon? Have we just societally accepted that everything should be a hyperreal version of sports gambling now, and that the world is and ought to be an efficient market of hyperstition?<p>I may be old and grumpy saying this, but this all sounds dumb and corny. I would like some of the very capable traders who make money repricing mispriced assets to find a way to make money deflating this bubble and bringing this environment back to sanity. And I say this as someone who likes the capabilities of AI but continues to see it do little to none of the hard work of solving the incompressible problems that continue to create and retain enterprise value.<p>To get off my soapbox for a second and get back to your quoted passage -- what they're really saying is: "We are working very hard to make this future come, and we think so little of your intelligence that we believe you'll fall for the fear tactic of believing it's inevitable, ignoring the fact that it won't happen without someone's hands. And in this case, it is very much our hands, which are incentivized not just to do it but to do it so well that we ensure we do everything possible to make it happen. Part of which means persuading you that it is guaranteed to succeed. If we ever let the honest truth slip that what we're proposing is extremely hard to pull off with pure AI and that we're just going to be another commercial real estate investor like anyone else, the jig is up."<p>That's what every single one of these hypocritical, navel-gazing, faux-concern proclamations amounts to for me: astroturf.</p>
]]></description><pubDate>Thu, 16 Apr 2026 21:35:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47799800</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47799800</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47799800</guid></item><item><title><![CDATA[New comment by yowlingcat in "A Founder Tried to Pitch – and Got a Restraining Order"]]></title><description><![CDATA[
<p>Agreed. I will say that, poking around a bit more on that site, it looks like a temporary restraining order involving the same parties was granted, so perhaps there's more history to this dispute than what is visible at that link.</p>
]]></description><pubDate>Thu, 02 Apr 2026 12:39:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47613666</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47613666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47613666</guid></item><item><title><![CDATA[New comment by yowlingcat in "Global warming has accelerated significantly"]]></title><description><![CDATA[
<p>I'd be interested to see what things look like on a longer timescale, say 500, 1000, 2000, or 7000 years. 80 years feels like a long time on a human lifespan, but on a civilizational timescale it is a lot shorter.</p>
]]></description><pubDate>Fri, 06 Mar 2026 23:36:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47282563</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47282563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47282563</guid></item><item><title><![CDATA[New comment by yowlingcat in "Pi – A minimal terminal coding harness"]]></title><description><![CDATA[
<p>It definitely does suck. I had the same feelings about a year ago, and the unpleasantness has definitely increased. But glass half full: back then we didn't have Kimi K2.5, GLM5, Qwen3.5, MiniMax 2.5, Step Flash 3.5, etc. available, and the Cambrian explosion is only continuing (DeepSeek V4 should be out pretty soon too).<p>The real moment of relief for me was the first time, about 12 months ago, that I used DeepSeek R1 for a large task I would've otherwise needed Claude/OpenAI for, and it just did it -- not just decently, but with less slop than Claude/OpenAI. Ever since that point, I've kept eyeing local models and parallel-testing them on workloads I'd otherwise run on commercial frontier models. It's never a perfect 1:1 replacement, but I've gotten close enough that I no longer feel that paranoia about my AI workloads not being something I can own and control. True, I do have to sacrifice some capability, but the tradeoff is I get something that lives on my metal, never leaks data or IP, doesn't change behavior or get worse under my feet, doesn't rate limit me, and can be fine-tuned and customized. It's all led me to believe that market competition is very much functioning and the cat is out of the bag, to the benefit of all of us as users.</p>
]]></description><pubDate>Thu, 26 Feb 2026 15:30:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47167419</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47167419</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47167419</guid></item><item><title><![CDATA[New comment by yowlingcat in "Pi – A minimal terminal coding harness"]]></title><description><![CDATA[
<p>I'm actually relieved they're doing it now, because it's going to be a forcing function for the local LLM ecosystem. Same thing with their "distillation attack" smear piece -- the more of a spotlight gets put on true alternatives and competition to the 900 lb gorillas, the better for all users of LLMs.</p>
]]></description><pubDate>Wed, 25 Feb 2026 19:03:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47156169</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47156169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47156169</guid></item><item><title><![CDATA[New comment by yowlingcat in "Infrastructure decisions I endorse or regret after 4 years at a startup (2024)"]]></title><description><![CDATA[
<p>Very clever. Our team is small enough right now for this not to be an issue, but I've run into it previously, and this feels like a far more practical design for avoiding lock-in.</p>
]]></description><pubDate>Fri, 20 Feb 2026 22:42:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47095019</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47095019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47095019</guid></item><item><title><![CDATA[New comment by yowlingcat in "Expensively Quadratic: The LLM Agent Cost Curve"]]></title><description><![CDATA[
<p>What do you think about RLMs? At first blush it looks like sub-agents with some sprinkles on top, but people who have become more adept with the approach seem to show that it can keep context scaling sublinear very effectively.</p>
]]></description><pubDate>Mon, 16 Feb 2026 18:08:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47038164</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47038164</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47038164</guid></item><item><title><![CDATA[New comment by yowlingcat in "Improving 15 LLMs at Coding in One Afternoon. Only the Harness Changed"]]></title><description><![CDATA[
<p>It's a very concerning future. I would love to live in a world where we could simply stop them from doing that, but for the moment, the best hedge appears to be the Chinese open-weight models, which can't be put back in the box and which provide the valuable market function of commodifying the encoded knowledge of these models (knowledge which was itself derived from work not created by the frontier labs).</p>
]]></description><pubDate>Fri, 13 Feb 2026 16:32:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47004577</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47004577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47004577</guid></item><item><title><![CDATA[New comment by yowlingcat in "Improving 15 LLMs at Coding in One Afternoon. Only the Harness Changed"]]></title><description><![CDATA[
<p>It goes the other way around as well. DeepSeek has made quite a few innovations that the US labs were lacking (DSA being the most notable). It's also not clear to me how much distilled outputs are just an additional ingredient in the recipe rather than the whole "frozen dinner," so to speak. I have no evidence either way, but my guess is the former.</p>
]]></description><pubDate>Fri, 13 Feb 2026 16:30:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47004548</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=47004548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47004548</guid></item><item><title><![CDATA[New comment by yowlingcat in "Warcraft III Peon Voice Notifications for Claude Code"]]></title><description><![CDATA[
<p>Does this support the bit where you click on a peon a bunch of times and it says "Me not that kind of Orc!"?</p>
]]></description><pubDate>Thu, 12 Feb 2026 20:39:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46994803</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=46994803</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46994803</guid></item><item><title><![CDATA[New comment by yowlingcat in "Stelvio: Ship Python to AWS"]]></title><description><![CDATA[
<p>Just came across SST the other day and it looks very interesting. It seems to be based on Pulumi, which raises the question for me of why it exists. Structurally it doesn't seem that different in capabilities. Perhaps it's more that it's an opinionated subset with better ergonomics. Is that correct, or is your reason different?</p>
]]></description><pubDate>Tue, 03 Feb 2026 01:12:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46864912</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=46864912</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46864912</guid></item><item><title><![CDATA[New comment by yowlingcat in "Nvidia shares are down after report that its OpenAI investment stalled"]]></title><description><![CDATA[
<p>What I don't understand is why Gemini is not #1, other than that Google has no economic reason to have the same fire under its ass that Anthropic and OpenAI do. Or maybe they are correctly assessing that getting to good-enough while out-building everyone on infrastructure is more valuable; they do have their TPUs as a bet on their future, and their search monopoly today prints nearly endless free cash flow. Perhaps Gemini is advancing at exactly the right rate for them.<p>I guess there is one thing Gemini is objectively better at than either, which is long context, and it does seem to be by an order of magnitude. What boggles my mind is why Gemini is still not as good as the open-weight frontier models. If they got to parity there, along with their existing long context and strong token pricing, they'd be able to take over the coding market. Are they just biding their time to make their move? Hard to discern.</p>
]]></description><pubDate>Tue, 03 Feb 2026 01:10:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46864894</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=46864894</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46864894</guid></item><item><title><![CDATA[New comment by yowlingcat in "They lied to you. Building software is hard"]]></title><description><![CDATA[
<p>Something that's been on my mind recently - what if gen-AI coding tools are ultimately attention casinos in the same way social media is? You burn through tons of tokens and you pay per token; it feels productive and engaging, but ultimately the more you try and fail, the more money the vendor makes. Their implicit (though perhaps not stated) economic goal may be to keep you in the "goldilocks zone" of making enough progress not to give up, but not so much progress that you 1-shot to the end state without issues.<p>I'm not saying that they can actually do that, per se; switching costs are so low that if you were doing worse than an existing competitor, you'd lose that volume. Nor am I saying they are deliberately bilking folks -- I think it would be hard to do that without folks cottoning on.<p>But I did see an interesting thread on Twitter that had me pondering [1]. Basically, Claude Code experimented with RAG approaches over the simple iterative grep they now use. The RAG approach was brittle and hard to get right, in their words, and just brute-forcing it with grep was easier to use effectively. But Cursor took the other approach and made semantic search work for them, which made me wonder about the intrinsic token economics for both firms. Cursor is incentivized to minimize token usage to increase the spread over its fixed seat pricing. But for Claude, iterative grep bloating token usage doesn't hurt them and in fact increases gross tokens purchased, so there is no incentive to find a better approach.<p>I am sure there are many instances of this out there, and it makes me wonder whether it will be economic incentives rather than technical limitations that eventually put an upper limit on closed-weight LLM vendors like OpenAI and Claude. Too early to tell for now, IMO.<p>[1] <a href="https://x.com/antoine_chaffin/status/2018069651532787936" rel="nofollow">https://x.com/antoine_chaffin/status/2018069651532787936</a></p>
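<p>To put rough numbers on that incentive asymmetry (all prices below are made-up round figures, not any vendor's actual rates), a toy revenue model:</p>

```python
def vendor_revenue(tokens_used, price_per_mtok=3.0, seat_price=None):
    """Toy model: a flat-seat vendor earns the same no matter how many
    tokens a session burns; a per-token vendor earns more as usage grows.
    All prices are hypothetical round numbers for illustration."""
    if seat_price is not None:
        return seat_price  # flat seat: usage doesn't change revenue
    return tokens_used / 1_000_000 * price_per_mtok

# Suppose iterative grep burns 5x the tokens of a semantic index:
grep_rev = vendor_revenue(50_000_000)                 # per-token billing
rag_rev = vendor_revenue(10_000_000)                  # per-token billing
seat_rev = vendor_revenue(50_000_000, seat_price=20.0)  # seat billing
```

<p>Under per-token billing, the token-hungry approach earns 5x more revenue, while under seat billing the two approaches earn the same, which is the asymmetry that would make one vendor chase token efficiency and the other shrug at it.</p>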
]]></description><pubDate>Mon, 02 Feb 2026 18:39:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46859548</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=46859548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46859548</guid></item><item><title><![CDATA[New comment by yowlingcat in "Please don't say mean things about the AI I just invested a billion dollars in"]]></title><description><![CDATA[
<p>I agree with your point, and it is on that point that I disagree with GP. These open-weight models, which have ultimately been constructed from so many thousands of years of human work, are also now freely available to all of humanity. To me that is the real marvel and a true gift.</p>
]]></description><pubDate>Thu, 29 Jan 2026 02:27:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46804959</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=46804959</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46804959</guid></item><item><title><![CDATA[New comment by yowlingcat in "GLM-4.7-Flash"]]></title><description><![CDATA[
<p>I don't believe Antigravity or Cursor work well with pluggable models. It seems to be impossible with Antigravity, and with Cursor, while you can change the OpenAI-compatible API endpoint to one of your choice, not all features may work as expected.<p>My recommendation would be to use other tools built to better support pluggable model backends. If you're looking for a Claude Code alternative, I've been liking OpenCode lately, and if you're looking for a Cursor alternative, I've heard great things about Roo/Cline/KiloCode, although I personally still just use Continue out of habit.</p>
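<p>For what it's worth, the thing that makes a backend "pluggable" in these tools is just the shared OpenAI-style /chat/completions shape; a rough sketch of what swapping the endpoint amounts to (the URL, model name, and key below are placeholders for whatever local server you run):</p>

```python
import json
import urllib.request

# Placeholder for a local OpenAI-compatible server; tools like
# llama.cpp's llama-server expose this same path layout.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model, prompt):
    """Assemble a /chat/completions request. Any OpenAI-compatible
    backend accepts this payload, which is the whole pluggability trick:
    only BASE_URL and the model name change between backends."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer local-key"},  # dummy key
        method="POST",
    )
```

<p>A tool that lets you override both the base URL and the model name can drive any such backend; the features that break in Cursor tend to be the ones layered on top of this shape rather than the chat completion itself.</p>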
]]></description><pubDate>Mon, 19 Jan 2026 22:04:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46685149</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=46685149</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46685149</guid></item><item><title><![CDATA[New comment by yowlingcat in "GLM-4.7-Flash"]]></title><description><![CDATA[
<p>It may be worth taking a look at LFM [1]. I haven't had a need for it so far (I run on Apple silicon day to day, so my dailies are usually the 30B+ MoEs), but I've heard good things from folks online using it as a daily driver on their phones. YMMV.<p>[1] <a href="https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct" rel="nofollow">https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct</a></p>
]]></description><pubDate>Mon, 19 Jan 2026 21:59:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46685081</link><dc:creator>yowlingcat</dc:creator><comments>https://news.ycombinator.com/item?id=46685081</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46685081</guid></item></channel></rss>