<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: BoppreH</title><link>https://news.ycombinator.com/user?id=BoppreH</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 11 Apr 2026 18:02:51 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=BoppreH" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by BoppreH in "Installing every* Firefox extension"]]></title><description><![CDATA[
<p>Sad that no real pages can load successfully, but I thoroughly enjoyed the writing.<p>> We turned on crash reporting on the way.<p>I haven't burst out laughing like this in a while! You've probably given some poor Mozilla team a few horror stories.</p>
]]></description><pubDate>Sat, 11 Apr 2026 01:13:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47726171</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47726171</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47726171</guid></item><item><title><![CDATA[New comment by BoppreH in "Show HN: I made a YouTube search form with advanced filters"]]></title><description><![CDATA[
<p>YouTube search is one of those services that is pointlessly hostile. Most recently, they've removed the "order by upload date" filter and changed the way blurring works. Previously, sensitive videos had blurred thumbnails and a toggle to remove the blur (though there was no way to disable blurring entirely). Now the UI looks the same, but the "toggle" reloads the page without any filters, and adding a filter re-blurs them. So it's impossible to filter results and see unblurred thumbnails.<p>These changes baffle me. It's not even enshittification, because I can't see any benefit to YouTube at all.</p>
]]></description><pubDate>Mon, 06 Apr 2026 03:02:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656494</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47656494</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656494</guid></item><item><title><![CDATA[New comment by BoppreH in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>1) It's not clear to me that this is only for internal tooling, as opposed to publishing commits on public GitHub repos. 2) Yes, it does explicitly say to pretend to be a human. From the link on my post:<p>> NEVER include in commit messages or PR descriptions:<p>> [...]<p>> - The phrase "Claude Code" or any mention that you are an AI</p>
]]></description><pubDate>Tue, 31 Mar 2026 15:02:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47588376</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47588376</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47588376</guid></item><item><title><![CDATA[New comment by BoppreH in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>You're right, I missed the \b's. Thanks for the correction.</p>
]]></description><pubDate>Tue, 31 Mar 2026 11:21:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47585737</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47585737</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47585737</guid></item><item><title><![CDATA[New comment by BoppreH in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>I don't care who is using it, I don't want LLMs pretending to be humans in public repos. Anthropic just lost some points with me for this one.<p>EDIT: I just realized this might be used <i>without</i> publishing the changes, for internal evaluation only as you mentioned. That would be a lot better.</p>
]]></description><pubDate>Tue, 31 Mar 2026 11:16:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47585690</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47585690</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47585690</guid></item><item><title><![CDATA[New comment by BoppreH in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>It's fast, but it'll miss a ton of cases. This feels like it would be better served by a prompt instruction, or an additional tiny neural network.<p>And some of the entries are too short and will create false positives. It'll match the word "offset" ("ffs"), for example. EDIT: no it won't, I missed the \b. Still sounds weird to me.</p>
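The word-boundary point is easy to check directly; here's a minimal sketch in Python (the pattern and variable names are illustrative, not taken from the leaked list):

```python
import re

# An anchored pattern in the style of the leaked list; \b matches only at
# transitions between word and non-word characters.
profanity = re.compile(r"\bffs\b", re.IGNORECASE)

print(bool(profanity.search("FFS, the build broke again")))    # True: standalone word
print(bool(profanity.search("check the offset calculation")))  # False: "ffs" is inside "offset"
```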
]]></description><pubDate>Tue, 31 Mar 2026 11:10:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47585626</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47585626</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47585626</guid></item><item><title><![CDATA[New comment by BoppreH in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>Undercover mode also pretends to be human, which I'm less ok with:<p><a href="https://github.com/chatgptprojects/claude-code/blob/642c7f944bbe5f7e57c05d756ab7fa7c9c5035cc/src/utils/undercover.ts#L52" rel="nofollow">https://github.com/chatgptprojects/claude-code/blob/642c7f94...</a></p>
]]></description><pubDate>Tue, 31 Mar 2026 11:06:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47585596</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47585596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47585596</guid></item><item><title><![CDATA[New comment by BoppreH in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>An LLM company using <i>regexes</i> for sentiment analysis? That's like a truck company using horses to transport parts. Weird choice.</p>
]]></description><pubDate>Tue, 31 Mar 2026 10:59:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47585535</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47585535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47585535</guid></item><item><title><![CDATA[New comment by BoppreH in "Go hard on agents, not on your filesystem"]]></title><description><![CDATA[
<p>Excellent project, unfortunate title. I almost didn't click on it.<p>I like the tradeoff offered: full access to the current directory, read-only access to the rest, copy-on-write for the home directory. With stricter modes to (presumably) protect against data exfiltration too. It really feels like it should be the default for agent systems.</p>
]]></description><pubDate>Sat, 28 Mar 2026 01:26:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47550575</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47550575</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47550575</guid></item><item><title><![CDATA[New comment by BoppreH in "The 'paperwork flood': How I drowned a bureaucrat before dinner"]]></title><description><![CDATA[
<p>And there's also fraud. If there's no periodic check, a single diagnosis from a corrupt doctor can give someone disability benefits for life.<p>This might not be the right frequency, though, and only accepting post/fax is bullshit. Doubly so for short deadlines.</p>
]]></description><pubDate>Fri, 27 Mar 2026 14:48:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47543303</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47543303</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47543303</guid></item><item><title><![CDATA[New comment by BoppreH in "Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised"]]></title><description><![CDATA[
<p>It's assumed that in this scenario you don't have access to a trusted compiler; if you do, then there's no problem.<p>And the thesis linked above seems to go beyond simply "use a trusted compiler to compile the next compiler". It involves deterministic compilation and comparing outputs, for example.</p>
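To make the comparison step concrete, here's a toy sketch of the final bit-for-bit check only; the byte strings below are stand-ins for real compiler binaries, not anything from the thesis:

```python
import hashlib

def digest(binary: bytes) -> str:
    # bit-for-bit identity check via a cryptographic hash
    return hashlib.sha256(binary).hexdigest()

# In diverse double-compiling, the compiler source is rebuilt by each stage-1
# binary; if compilation is deterministic and the compiler under test is clean,
# the two stage-2 binaries must be byte-identical.
stage2_via_trusted = b"\x7fELF deterministic output"
stage2_via_suspect = b"\x7fELF deterministic output"

print(digest(stage2_via_trusted) == digest(stage2_via_suspect))  # True
```

Any mismatch means either non-deterministic compilation or a self-reproducing compromise, which is why the determinism requirement matters so much.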
]]></description><pubDate>Wed, 25 Mar 2026 14:02:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47517486</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47517486</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47517486</guid></item><item><title><![CDATA[New comment by BoppreH in "Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised"]]></title><description><![CDATA[
<p>The proposed solution seems to rely on a trusted compiler that generates the exact same output, bit-for-bit, as the compiler-under-test would generate if it was not compromised. That seems useful only in very narrow cases.</p>
]]></description><pubDate>Wed, 25 Mar 2026 12:34:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47516491</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47516491</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47516491</guid></item><item><title><![CDATA[New comment by BoppreH in "I tried to prove I'm not AI. My aunt wasn't convinced"]]></title><description><![CDATA[
<p>Paradox of choice? It's more related to the <i>number</i> of choices and the impact on people's anxiety, but it's close.</p>
]]></description><pubDate>Wed, 25 Mar 2026 11:07:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47515775</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47515775</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47515775</guid></item><item><title><![CDATA[New comment by BoppreH in "OpenClaw is a security nightmare dressed up as a daydream"]]></title><description><![CDATA[
<p>I believe you only need a unique phone number to create the account; after that you can use WhatsApp Web as the client. Be very careful with alternative clients, as I've had an account banned for this in the past (and therefore a phone number blacklisted), even without messaging anybody. I think that clients that run WhatsApp Web in a web view (like <a href="https://github.com/rafatosta/zapzap" rel="nofollow">https://github.com/rafatosta/zapzap</a>) are safe.<p>I think they started banning unauthorized API users around the time "WhatsApp For Business" was introduced, because they were competing with that product. Unfortunately, WhatsApp For Business is geared toward physical products and services with registered companies, so home automation and agents are left with no options.</p>
]]></description><pubDate>Sun, 22 Mar 2026 21:50:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47482564</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47482564</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47482564</guid></item><item><title><![CDATA[New comment by BoppreH in "Show HN: I built 48 lightweight SVG backgrounds you can copy/paste"]]></title><description><![CDATA[
<p>Thanks. I'm already doing something similar, but I feel like the background that is visible on the <i>sides</i> is still somewhat distracting. Might be my imagination though.</p>
]]></description><pubDate>Wed, 18 Mar 2026 23:26:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47432674</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47432674</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47432674</guid></item><item><title><![CDATA[New comment by BoppreH in "Show HN: I built 48 lightweight SVG backgrounds you can copy/paste"]]></title><description><![CDATA[
<p>Those are excellent! The orange shingles are my favorite. Though I think some of them are not working on Firefox; the blue and green vortices are rendered as a single blue rectangle and a single green hexagon.<p>I wonder how people are using them in a way that is not distracting to the main content. I've found that high-frequency patterns (small details with sharp transitions) can be a bit distracting, but I haven't found a good solution that doesn't compromise the beauty of the backgrounds.</p>
]]></description><pubDate>Wed, 18 Mar 2026 23:15:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47432581</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47432581</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47432581</guid></item><item><title><![CDATA[New comment by BoppreH in "Are LLM merge rates not getting better?"]]></title><description><![CDATA[
<p>We're probably thinking about this at very different levels. Here's what I meant: I can ask Claude for "a bilingual German-Russian poem about the side effects of the most common drugs used in anesthesia". I would bet my left shoe that if I asked people on the street, no one would do a better job than Claude. And to me, answering questions correctly is a very good metric for intelligence.<p>We can debate whether that's real intelligence, and whether the question is fair, but this is still a real, measurable capability that was a pipe dream just eight years ago. This capability is what OP is tracking, and what I believe is impressive but hamstrung by harnesses.</p>
]]></description><pubDate>Fri, 13 Mar 2026 09:32:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47362260</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47362260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47362260</guid></item><item><title><![CDATA[New comment by BoppreH in "Are LLM merge rates not getting better?"]]></title><description><![CDATA[
<p>It's not a value judgement; I'm no misanthrope. But it's a fact of life that we humans must specialize, while LLMs can afford to have "studied" a staggering variety of topics. It's no different from being slower than a car, or weaker than a hydraulic press.<p>On a different note, LLMs are still not very <i>wise</i>, as displayed by all the prompt attacks and occasional inane responses like walking to the car wash.</p>
]]></description><pubDate>Fri, 13 Mar 2026 08:43:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47361985</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47361985</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47361985</guid></item><item><title><![CDATA[New comment by BoppreH in "Are LLM merge rates not getting better?"]]></title><description><![CDATA[
<p>Yes, "street". Typing from my phone, sorry.<p>And search engines are narrow tools that can only output copies of their datasets. An LLM is capable of surprisingly novel output, even if the exact level of creativity is heavily debated.</p>
]]></description><pubDate>Thu, 12 Mar 2026 16:08:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47352950</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47352950</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47352950</guid></item><item><title><![CDATA[New comment by BoppreH in "Are LLM merge rates not getting better?"]]></title><description><![CDATA[
<p>Controversial opinion from a casual user, but state-of-the-art LLMs now feel to me more intelligent than the average person on the street. That also explains why training on more average-quality data (if there's any left) is not yielding improvements.<p>But LLMs are hamstrung by their harnesses. They are doing the equivalent of providing technical support over the phone: little to no context, and limited to a bidirectional stream of words (tokens). The best agent harnesses have the equivalent of vision-impairment accessibility interfaces, and even those are still subpar.<p>Heck, giving LLMs <i>time to think</i> was once a groundbreaking idea. Yesterday I saw Claude Code editing a file using shell redirects! It's barbaric.<p>I expect future improvements to come from harness improvements, especially around subagents/context rollbacks (to work around the non-linear cost of context) and LLM-aligned "accessibility tools". That, or more synthetic training data.</p>
]]></description><pubDate>Thu, 12 Mar 2026 13:31:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47350286</link><dc:creator>BoppreH</dc:creator><comments>https://news.ycombinator.com/item?id=47350286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47350286</guid></item></channel></rss>