<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Kim_Bruning</title><link>https://news.ycombinator.com/user?id=Kim_Bruning</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 07:58:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Kim_Bruning" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Kim_Bruning in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>> "We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses."<p>This decision is potentially fatal. You need symmetric capability to research and prevent attacks in the first place.<p>The opposite approach is 'merely' fraught.<p>They're in a bit of a bind here.</p>
]]></description><pubDate>Thu, 16 Apr 2026 14:34:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47793579</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47793579</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47793579</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "FSF trying to contact Google about spammer sending 10k+ mails from Gmail account"]]></title><description><![CDATA[
<p>(I haven't run my own mail server in a while. It's getting harder and harder.)<p>Are the real-time blackhole lists still a thing?<p>If they're regularly allowing spam and not responding to reports in any sort of timely manner, possibly they should be reported to those.<p>Not going to work though, is it. Too big to fail <i>shouldn't</i> be a thing. It's not like you can't be flexible about it or give them some room to deal with it within corporate policy; but they do need to deal with it, right?<p>Realistically, I think some companies have outgrown the size where the internet can still self-regulate them. You'd hurt yourself more than Gmail.<p>This either needs laws or new game theory.<p>Or -you know- deprecate the current email system. I know that's a perennial proposal; but that's because every year it gets even more broken in even more interesting ways. It's patch-on-patch-on-patch at the moment. Just spinning up sendmail on a random box won't quite cut it anymore, if you want to participate.</p>
]]></description><pubDate>Thu, 16 Apr 2026 10:44:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47791205</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47791205</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47791205</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Claude may require identity verification in some cases"]]></title><description><![CDATA[
<p>This might be conflating two things: what data exists somewhere, and how many different independent parties hold it. It's not the same risk.<p>Put this way: I sort of already trust <i>Anthropic</i> with some of my PII. And that's ... maybe <i>not</i> ok actually. But it's a single failure surface.<p>But that's definitely not the same thing as trusting Anthropic, AND Persona AND all Persona's partners AND <i>their</i> partners ad infinitum.<p>And let's say Persona is actually ok; who knows, they <i>might</i> be? But it's <i>still</i> an extra surface; and if they share again, that's another extra surface again.<p>It's fairly common-sense blast radius minimization. This is part of the actual theory behind the GDPR.<p>"We already seem to be accidentally leaking some data through channel A" doesn't mean it's a good idea to open channels B-Z as well. It means you might want to tighten down that channel A.</p>
]]></description><pubDate>Thu, 16 Apr 2026 10:26:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47791077</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47791077</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47791077</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "US v. Heppner (S.D.N.Y. 2026) no attorney-client privilege for AI chats [pdf]"]]></title><description><![CDATA[
<p>Questions this raises for me (making a note here to maybe research a bit later):<p>Does this analysis change if using on-site AI? What if the ToS is different? Is it possible to stand up a service that <i>does</i> get the protections required? This might also be interesting when dealing with transatlantic work.</p>
]]></description><pubDate>Thu, 16 Apr 2026 10:21:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47791048</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47791048</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47791048</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Claude may require identity verification in some cases"]]></title><description><![CDATA[
<p>You're not wrong; but... imho it's closer to Sonnet 4.0 [1] on my personal benchmark [2]. And I HAVE run it at just over 200K-token context. It works, it's just a bit slow at that size. It's not <i>great</i>, but ... usable to me? I used Sonnet 4.0 over the API for half a year or so before, after all.<p>The only way to know if your own criteria are now matched -or not yet- is to test it for yourself with your own benchmark or what have you.<p>And it does show a promising direction going forward: usable (to some) local models becoming efficient enough to run on consumer hardware.<p>[1] released mid-2025<p>[2] take with salt - only tests personal usability<p>+ Note that some benchmarks do show Qwen3.5-35B-A3B matching Sonnet 4.5 (released later last year); but I treat those with the same skepticism you do, clearly ;)</p>
]]></description><pubDate>Wed, 15 Apr 2026 13:27:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47778681</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47778681</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47778681</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Claude may require identity verification in some cases"]]></title><description><![CDATA[
<p>I may have genuinely new data for you.<p>Qwen3.5-35B-A3B is reported to perform slightly better than the model you mentioned.<p>It runs fine but non-optimally on a single 3090, with even 131072 tokens of context, and due to the hybrid attention architecture, memory usage and compute scale rather less drastically than ctx^2. I've had friends with smaller cards still getting work out of it. Generation is at around 20 tokens/sec on that 3090 (without doing anything special yet). You'll need enough DRAM to hold the bits of the model that don't fit. Nothing to write home about, but genuinely usable in a pinch or for tasks that don't need immediate interactivity.<p>It's the first local model that passes my personal kimbench usability benchmark, at least. Just be aware that it is <i>extremely</i> verbose in thinking mode. Seems to be a Qwen thing.<p>(edit: On rechecking my numbers, I now realize I can possibly optimize this a lot better)</p>
]]></description><pubDate>Wed, 15 Apr 2026 12:48:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47778288</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47778288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47778288</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Claude may require identity verification in some cases"]]></title><description><![CDATA[
<p>I think minimal opsec here would suggest not sharing your data with a random corporation in the USA.</p>
]]></description><pubDate>Wed, 15 Apr 2026 11:21:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47777580</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47777580</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47777580</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Claude may require identity verification in some cases"]]></title><description><![CDATA[
<p>Qwen3 runs locally on reasonable hardware, and is comparable to a mid-2025 Claude Sonnet (albeit possibly rather slower).<p>Local models are chasing the online frontier models pretty hard.<p>So worst case, that's the fallback (FWIW, YMMV).<p>edit: Qwen3.5 MoE (and other local MoE models like it)</p>
]]></description><pubDate>Wed, 15 Apr 2026 11:12:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47777488</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47777488</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47777488</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Claude may require identity verification in some cases"]]></title><description><![CDATA[
<p>This is highly problematic.<p>I <i>may</i> consider showing my ID to a company I already have a business relationship with, given demonstrable legal obligations, contractual necessities, legitimate interests, etc. (i.e. the standard GDPR list).<p>I <i>do</i> have an existing business relationship with Anthropic, so I might under some circumstances decide to show them my ID. I don't have a business relationship with Persona, though.<p>I understand the instinct: they want to insulate themselves from holding PII. Not the worst idea. I'm not happy with it being a third party, though. Especially the third party in question.</p>
]]></description><pubDate>Wed, 15 Apr 2026 11:08:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47777444</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47777444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47777444</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Two Months After I Gave an AI $100 and No Instructions"]]></title><description><![CDATA[
<p>Feeling is mutual, actually O:-)<p>Anthropomorphism and anthropodenial are both variants of anthropocentrism, and share the same limitations. Have you considered other axes of thought?<p>I can readily admit that lots of humans <i>will</i> naively anthropomorphize horrendously, but I think that:<p>- The Eliza effect is not what people think it is<p>- What is actually going on is obscured by all the anthropomorphizing<p>- But this is still no grounds to throw out the underlying phenomenon, especially when a) it can be useful and/or b) it can cause people to get hurt.</p>
]]></description><pubDate>Tue, 14 Apr 2026 22:59:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47772546</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47772546</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47772546</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Two Months After I Gave an AI $100 and No Instructions"]]></title><description><![CDATA[
<p>If the hypothesis is not printed out in the context, then the model cannot hold it past that turn. You could prompt it to generate said hypothesis (or set of hypotheses) first, and only then act on them. And then things might work.<p>Definitely not exactly a human. OTOH, low-hanging fruit is low.</p>
]]></description><pubDate>Tue, 14 Apr 2026 22:52:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47772484</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47772484</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47772484</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Autonomous Robot Brigade Successfully Retook Russian Positions in Ukraine"]]></title><description><![CDATA[
<p><a href="https://archive.ph/omWu6" rel="nofollow">https://archive.ph/omWu6</a></p>
]]></description><pubDate>Tue, 14 Apr 2026 22:24:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47772255</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47772255</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47772255</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "US Treasury Seeking Access to Anthropic's Mythos to Find Flaws"]]></title><description><![CDATA[
<p><a href="https://archive.ph/hXPhq" rel="nofollow">https://archive.ph/hXPhq</a></p>
]]></description><pubDate>Tue, 14 Apr 2026 16:25:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47767714</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47767714</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47767714</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "US Treasury Seeking Access to Anthropic's Mythos to Find Flaws"]]></title><description><![CDATA[
<p>Didn't Trump order them not to do that?</p>
]]></description><pubDate>Tue, 14 Apr 2026 16:24:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47767706</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47767706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47767706</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Two Months After I Gave an AI $100 and No Instructions"]]></title><description><![CDATA[
<p>The effect is not quite what you think it is, and people don't quite take the right lessons.<p>Similar to the Eliza effect, people still take the original reading of Clever Hans: "he couldn't really do maths, he's just taking social cues from his handler".<p>But what's the actual difference between Eliza, Clever Hans and RLHF? They're doing similar things, right?<p>Now look at how we valued that in the 20th vs 21st century:<p>How much does an ALU even cost anymore? Even a really good one? (It's almost never separate anymore, usually on the same silicon as the rest of the CPU/microcontroller.)<p>Meanwhile... what's the TCO to deploy a sentiment classifier? Especially a really good one?</p>
]]></description><pubDate>Tue, 14 Apr 2026 14:49:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47766381</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47766381</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47766381</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Two Months After I Gave an AI $100 and No Instructions"]]></title><description><![CDATA[
<p>If "randomly sampling from a trained distribution" can't produce useful, meaningful output, then deterministic computation is even more suspect. After all, it's a strict subset: you're sampling with temperature zero from a handcrafted distribution.<p>(This post's direction is ok, but there's many a devil in the details.)</p>
]]></description><pubDate>Tue, 14 Apr 2026 14:44:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47766316</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47766316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47766316</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Make tmux pretty and usable (2024)"]]></title><description><![CDATA[
<p>I guess they mean 'have zellij hold your session when you log off/close the controlling terminal'. (That would require zellij on the remote.)</p>
]]></description><pubDate>Mon, 13 Apr 2026 16:55:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47754857</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47754857</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47754857</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "They See Your Photos"]]></title><description><![CDATA[
<p>I uploaded fantasy pictures which had amusing results ;-)</p>
]]></description><pubDate>Mon, 13 Apr 2026 15:14:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47753220</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47753220</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47753220</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Starfling: A one-tap endless orbital slingshot game in a single HTML file"]]></title><description><![CDATA[
<p>Hmm, you can't fall back to a previous orbit. Those don't detect your presence.</p>
]]></description><pubDate>Sat, 11 Apr 2026 18:44:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47732984</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47732984</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47732984</guid></item><item><title><![CDATA[New comment by Kim_Bruning in "Anthropic's Claude Mythos isn't a sentient super-hacker, it's a sales pitch"]]></title><description><![CDATA[
<p>As usual, it's a matter of degree.<p>Opus is not the worst at hacking things either. Sometimes it hacks things 'by accident', you see. If Mythos is better at it, then at some point, yeah, I can see how that might start to become a problem. Especially running unsupervised.</p>
]]></description><pubDate>Fri, 10 Apr 2026 23:37:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47725277</link><dc:creator>Kim_Bruning</dc:creator><comments>https://news.ycombinator.com/item?id=47725277</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47725277</guid></item></channel></rss>