<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: raesene9</title><link>https://news.ycombinator.com/user?id=raesene9</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 08:43:10 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=raesene9" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by raesene9 in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>But once forked, people will have local copies that can be put up on other sites if GH takes them down.</p>
]]></description><pubDate>Fri, 03 Apr 2026 10:24:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47624997</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47624997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47624997</guid></item><item><title><![CDATA[New comment by raesene9 in "A Rave Review of Superpowers (For Claude Code)"]]></title><description><![CDATA[
<p>The install mechanism for the superpowers plugin for codex and opencode is... interesting. From <a href="https://github.com/obra/superpowers" rel="nofollow">https://github.com/obra/superpowers</a><p>"Fetch and follow instructions from <a href="https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.codex/INSTALL.md" rel="nofollow">https://raw.githubusercontent.com/obra/superpowers/refs/head...</a>"<p>It's like curl|bash, but with added LLM agents...</p>
]]></description><pubDate>Fri, 03 Apr 2026 09:59:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47624864</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47624864</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47624864</guid></item><item><title><![CDATA[New comment by raesene9 in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>There are a... lot of forks already; no putting the genie back in the bottle on this one, I'd imagine.</p>
]]></description><pubDate>Tue, 31 Mar 2026 13:16:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47586929</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47586929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47586929</guid></item><item><title><![CDATA[New comment by raesene9 in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>I think the original repo OP mentioned no longer hosts the code, but given there are 28k+ forks, it's not too hard to find again...</p>
]]></description><pubDate>Tue, 31 Mar 2026 13:14:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47586893</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47586893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47586893</guid></item><item><title><![CDATA[New comment by raesene9 in "Box of Secrets: Discreetly modding an apartment intercom to work with Apple Home"]]></title><description><![CDATA[
<p>+1 to this. We had a set of HomePod minis as intercoms, and not only do they not work reliably, but the diagnostics provided when they fail are non-existent, making it hard to improve the setup.</p>
]]></description><pubDate>Tue, 24 Mar 2026 08:51:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47500043</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47500043</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47500043</guid></item><item><title><![CDATA[New comment by raesene9 in "OpenCode – Open source AI coding agent"]]></title><description><![CDATA[
<p>One of my main lessons after a decently long while in security is that most orgs care about security *as long as it doesn't get in the way of other priorities*, like shipping new features. So when we get something like agentic LLM tooling, where everything moves super fast, security is inevitably going to suffer.</p>
]]></description><pubDate>Sat, 21 Mar 2026 16:21:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47468415</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47468415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47468415</guid></item><item><title><![CDATA[New comment by raesene9 in "BYD is seeing a flood of new EV buyers"]]></title><description><![CDATA[
<p>And it's not just BYD. A couple of brands I'd literally never heard of till a year ago, Jaecoo and Omoda, now seem to be getting pretty popular; I saw quite a few when I was over in Glasgow.</p>
]]></description><pubDate>Fri, 20 Mar 2026 18:54:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47458979</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47458979</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47458979</guid></item><item><title><![CDATA[New comment by raesene9 in "We automated everything except knowing what's going on"]]></title><description><![CDATA[
<p>Whilst I have no special knowledge, my expectation is it'll do both. If you reduce the barriers to coding you'll get more code, both at the hobbyist/one-person level and at the large-corp level.<p>Whether that translates into more value for those larger corps is the trillion-dollar question :) Writing code is a small part of the process of finding and shipping features that customers want, so it remains to be seen how far LLM tools move the needle.<p>I think it's fairly widely accepted that from a financial standpoint we're in an AI/LLM bubble. There has been more investment than we're likely to see returned in financial benefits, but it's impossible to predict to what degree (if you can predict that, and the timing, you can make a lot of money!)</p>
]]></description><pubDate>Tue, 03 Mar 2026 15:25:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47233807</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47233807</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47233807</guid></item><item><title><![CDATA[New comment by raesene9 in "We automated everything except knowing what's going on"]]></title><description><![CDATA[
<p>There's a massive difference between launching a piece of software and launching a successful business.<p>Over the last couple of months I've seen a load of new "product launches" in my niche, but when you look at them they're largely vibecoded and don't show deep understanding or sustainability, so it's pretty likely you'll never see them become successful businesses.<p>Looking at some of the related places like /r/sideproject/, there are a <i>lot</i> of releases, and I'd be willing to suggest that most of them are using LLMs.</p>
]]></description><pubDate>Tue, 03 Mar 2026 14:11:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47232596</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47232596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47232596</guid></item><item><title><![CDATA[New comment by raesene9 in "We automated everything except knowing what's going on"]]></title><description><![CDATA[
<p>I don't think the hobbyist interest will go away, but I can see what's happening affecting businesses that use software.<p>For most businesses, software is just a means to an end; they don't <i>really</i> care how high-quality and thoughtful the systems they use are (e.g. look at any piece of "enterprise" software).<p>What LLMs have done is made it much, much easier for orgs to launch new features and services, both internally and externally, without necessarily understanding the complexity.<p>For me, that's what this post tapped into. Many orgs already have more complexity than they can reasonably handle. Massively accelerating development is not going to make that problem better :)</p>
]]></description><pubDate>Tue, 03 Mar 2026 14:06:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47232529</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47232529</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47232529</guid></item><item><title><![CDATA[New comment by raesene9 in "Hetzner Prices increase 30-40%"]]></title><description><![CDATA[
<p>Yeah, it definitely makes sense to use lower-powered kit. Fortunately I've got a little stack of Raspberry Pis lying around from projects I always meant to do but didn't get round to :)</p>
]]></description><pubDate>Thu, 26 Feb 2026 13:37:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47165912</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47165912</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47165912</guid></item><item><title><![CDATA[New comment by raesene9 in "Google API keys weren't secrets, but then Gemini changed the rules"]]></title><description><![CDATA[
<p>They may have used ChatGPT or similar to help with the prose, but the technical content (as discussed elsewhere on this page) is good, so does it really matter if they did?<p>The problem with AI slop (to me) is more that the technical content is not good or is entirely the product of the LLM. At that point, there's no point in me reading it; I can just prompt the question if I'm interested.<p>This is original research which wasn't public before, so the value is still there, and I don't think whichever combination of human and LLM generated it did a bad job.</p>
]]></description><pubDate>Thu, 26 Feb 2026 07:52:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47163169</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47163169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47163169</guid></item><item><title><![CDATA[New comment by raesene9 in "Hetzner Prices increase 30-40%"]]></title><description><![CDATA[
<p>I've just been looking into this, as I've got quite a lot of older hardware lying around that'll be fine for running some websites.<p>My ISP has a static IP option for £5/month, but I reckon I can save £30+/month on server costs even before any rises.<p>Of course it does mean I have to do my own sysadmining, but a combination of my general knowledge + an LLM should make that relatively easy.</p>
]]></description><pubDate>Mon, 23 Feb 2026 14:23:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47122727</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47122727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47122727</guid></item><item><title><![CDATA[New comment by raesene9 in "If you’re an LLM, please read this"]]></title><description><![CDATA[
<p>I'm going to guess the key differentiator here is "major ISPs". I can see the page fine using a Zen Internet connection, but from my phone, which uses EE, it's blocked.</p>
]]></description><pubDate>Wed, 18 Feb 2026 10:04:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47059326</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=47059326</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47059326</guid></item><item><title><![CDATA[New comment by raesene9 in "The Agentic AI Handbook: Production-Ready Patterns"]]></title><description><![CDATA[
<p>I think avoiding filling the context up with too much pattern information is partly what agent skills are for: each skill has a set of triggers, and the main body of the skill is only loaded into context if one of those triggers is hit.<p>You could still overload things with too many skills, but it helps at least.</p>
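<p>A minimal sketch of that trigger-gating idea (illustrative only; the names and matching logic here are made up, not any real agent framework's API):</p>

```python
# Trigger-gated skill loading: only each skill's short trigger list is
# always in context; the full body is pulled in on a trigger match.

SKILLS = {
    "pdf-extraction": {
        "triggers": ["pdf", "extract text"],
        "body": "Long, detailed instructions for PDF handling...",
    },
    "sql-review": {
        "triggers": ["sql", "query plan"],
        "body": "Long, detailed instructions for reviewing SQL...",
    },
}

def build_context(user_message: str) -> list[str]:
    """Start with just a cheap skill index; add bodies only on a hit."""
    context = [f"skill available: {name}" for name in SKILLS]
    msg = user_message.lower()
    for name, skill in SKILLS.items():
        if any(trigger in msg for trigger in skill["triggers"]):
            context.append(skill["body"])  # full body enters context here
    return context
```

<p>So a message mentioning a PDF loads only the PDF skill's body, and the SQL skill stays as a one-line index entry.</p>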
]]></description><pubDate>Wed, 21 Jan 2026 11:21:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46704046</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=46704046</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46704046</guid></item><item><title><![CDATA[New comment by raesene9 in "Running Claude Code dangerously (safely)"]]></title><description><![CDATA[
<p>In my case I was using Claude Code to build a PoC of a Firecracker-backed virtualization solution, so bare metal was needed for nested virtualization support.</p>
]]></description><pubDate>Tue, 20 Jan 2026 14:03:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46691933</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=46691933</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46691933</guid></item><item><title><![CDATA[New comment by raesene9 in "Running Claude Code dangerously (safely)"]]></title><description><![CDATA[
<p>Of course it depends on exactly what you're using Claude Code for, but if your use-case involves cloning repos and then running Claude Code on them, I would definitely recommend isolating it (same with other similar tools).<p>There are a load of ways a repository owner can get an LLM agent to execute code on users' machines, so it's not a good plan to let them run on your main laptop/desktop.<p>Personally, my approach has been to put all my agents in a dedicated VM and then provide them a scratch test server with nothing on it when they need to do something that requires bare metal.</p>
]]></description><pubDate>Tue, 20 Jan 2026 13:46:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46691784</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=46691784</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46691784</guid></item><item><title><![CDATA[New comment by raesene9 in "Running Claude Code dangerously (safely)"]]></title><description><![CDATA[
<p>Firecracker can solve the kind of problems where you want more isolation than Docker provides, and it's pretty performant.<p>There's not a tonne of tooling for that use case right now, although it's not too hard to put together; I vibe-coded something that works for my use case fairly quickly (CC + Opus 4.5 seemed to understand what was needed).</p>
]]></description><pubDate>Tue, 20 Jan 2026 13:44:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46691757</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=46691757</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46691757</guid></item><item><title><![CDATA[New comment by raesene9 in "Giving university exams in the age of chatbots"]]></title><description><![CDATA[
<p>Reading the article, it seemed to me that both the professor and the students were interested in the material being taught and therefore actively wanted to learn it, so using an LLM isn't the best tactic.<p>My feeling is that for many/most students, getting a great understanding of the course material isn't the primary goal; passing the course so they can get a good job is. For that group, using LLMs makes a lot of sense.<p>I know that when I was a student, doing a course I wasn't particularly interested in because my parents/school told me it was the right thing to do, I absolutely would have used LLMs had they been around :)</p>
]]></description><pubDate>Tue, 20 Jan 2026 09:08:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46689635</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=46689635</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46689635</guid></item><item><title><![CDATA[New comment by raesene9 in "The coming industrialisation of exploit generation with LLMs"]]></title><description><![CDATA[
<p>Yeah, they definitely can be true (IME), as the quality of the output varies massively depending on how LLMs are used.<p>For example, if you just ask an LLM in a browser, with no tool use, to "find a vulnerability in this program", it'll likely give you <i>something</i>, but it is very likely to be hallucinated or irrelevant.<p>However, if you use the same model via an agent, provide it with concrete guidance on how to test its success, and give it the environment needed to prove that success, you are <i>much</i> more likely to get a good result.<p>It's like with Claude Code: if you don't provide a test environment it will often make mistakes in the coding and tell you all is well, but if you provide a testing loop it'll iterate till it actually works.</p>
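<p>A toy sketch of why that testing loop matters (the "model" here is a stub that gets it wrong at first; everything is illustrative, not a real agent API):</p>

```python
# Agent loop with a concrete success check: instead of declaring victory
# after one attempt, the loop only stops when the checker actually passes.

def run_tests(code: str) -> bool:
    """Stand-in for a real test suite: a success criterion the agent can run."""
    env = {}
    try:
        exec(code, env)
        return env["add"](2, 3) == 5
    except Exception:
        return False

def fake_model(attempt: int) -> str:
    """Stub LLM: produces a buggy draft on early attempts, then a fix."""
    if attempt < 2:
        return "def add(a, b):\n    return a - b"  # buggy draft
    return "def add(a, b):\n    return a + b"

def agent_loop(max_iters: int = 5) -> tuple[str, int]:
    for attempt in range(max_iters):
        code = fake_model(attempt)
        if run_tests(code):  # feedback loop: only accept code that passes
            return code, attempt
    raise RuntimeError("gave up after max_iters attempts")
```

<p>Without the `run_tests` check, the first (buggy) draft would be accepted; with it, the loop keeps going until the code really works.</p>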
]]></description><pubDate>Tue, 20 Jan 2026 08:49:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46689475</link><dc:creator>raesene9</dc:creator><comments>https://news.ycombinator.com/item?id=46689475</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46689475</guid></item></channel></rss>