<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: voidhorse</title><link>https://news.ycombinator.com/user?id=voidhorse</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 02:12:48 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=voidhorse" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by voidhorse in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>Thanks, dang, I appreciate the slack. I let emotion get the better of me in the moment, and I'll refrain from that in the future.</p>
]]></description><pubDate>Sat, 11 Apr 2026 14:45:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47731107</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47731107</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47731107</guid></item><item><title><![CDATA[New comment by voidhorse in "A practical guide for setting up Zettelkasten method in Obsidian"]]></title><description><![CDATA[
<p>Zettelkasten is great <i>for researchers</i>. I actually don't think it's that valuable for practicing technologists. The general practice of taking notes and connecting ideas together is of course useful, but most technologists don't need such a sophisticated system.<p>Amid all the fanaticism that has grown around the Zettelkasten method over the past few years, people have forgotten or de-emphasized the fact that for Luhmann it was not a "second brain" to be referenced on demand; it was explicitly a system to support <i>writing</i>. It is tailored to help researchers <i>write papers</i>. It shines if you actually need a system in which to keep notions coherent and organized so that ideas are clear and citations precise when you need them during the writing process. If that's not you, the overhead probably isn't worth it. Just keep a notebook.</p>
]]></description><pubDate>Sat, 11 Apr 2026 14:38:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47731035</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47731035</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47731035</guid></item><item><title><![CDATA[New comment by voidhorse in "Filing the corners off my MacBooks"]]></title><description><![CDATA[
<p>> I file the sharp corners off my MacBooks. People like to freak out about this<p>The fact that any conscious human being has the time or energy to be "freaked out" about someone futzing around with their own devices is astounding to me.</p>
]]></description><pubDate>Sat, 11 Apr 2026 04:03:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47727264</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47727264</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47727264</guid></item><item><title><![CDATA[New comment by voidhorse in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>I've been looking at things through the same lens since 2023. At the same time, the depletion/hoarding bit isn't new. Companies were already doing this with consumer data; LLMs are just finally the factory moment—now that we have all the raw material, we finally have a means of automating production using it.<p>So, in some ways, I also view LLMs as a pivotal and important wake-up call. Companies were already taking the data and using it for a variety of other purposes—it was just way less evident to people when they weren't in direct competition with labor, since, under capital, labor is what we sell.<p>Either an entire new industry needs to form, or it's finally time to move beyond capitalism. Centralized capital ends up killing itself, because it effectively shuts down its own engine if it kills off consumers, who can only exist in the first place if the wage-labor structure holds.</p>
]]></description><pubDate>Thu, 09 Apr 2026 02:57:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47698818</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47698818</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47698818</guid></item><item><title><![CDATA[New comment by voidhorse in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>Thanks for taking the time for some sober analysis in the midst of reactionary chaos.<p>I can't wait until everyone stops falling for the "AGI ubermodel end of times" myth and we can actually have boring announcements that treat these things as what they actually are: tools. Tools for doing stuff, that's it.<p>Maybe I'm wrong; maybe stuffing a computer with enough language and binary patterns is indeed enough to achieve AGI, but then, so what? There's no point in being right about this. Buying into this ridiculous marketing will get us "AGI" in the form of machines, but only because human beings have gotten so stupid as to make critical reasoning an impossibility.</p>
]]></description><pubDate>Wed, 08 Apr 2026 03:32:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47684785</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47684785</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47684785</guid></item><item><title><![CDATA[New comment by voidhorse in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>> According to this document, 1 of the 18 Anthropic staff surveyed even said the model could completely replace an entry level researcher.
>
> So I'd say we've reached this milestone.<p>If 1 out of N=18 is our requirement for statistical significance on world-altering claims, then yeah, I think we can replace all the researchers.</p>
]]></description><pubDate>Wed, 08 Apr 2026 03:11:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47684578</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47684578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47684578</guid></item><item><title><![CDATA[New comment by voidhorse in "LLM Wiki – example of an "idea file""]]></title><description><![CDATA[
<p>This makes me feel like Karpathy is a tad behind the times. Many agent users I know already do precisely this as part of "agentic" development. If you use a harness, the harness is already empowered to do much of this under the hood, no fancy instruction file required. Just ask the agent to update some knowledge directory at the end of each convo, done. If you really need to automate it, write some scheduling tool that tells the agent to read past convos and summarize (a rough sketch of what that could look like is below). It really is that easy.</p>
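<p>For anyone who actually wants to wire this up, here is a minimal sketch in Python of what I mean, assuming a hypothetical CLI agent you can invoke with a prompt (the "my-agent" command, the transcript directory, and the notes directory are all placeholders for whatever your harness provides, not anything real). Point a cron entry or any other scheduler at it and you get the "read past convos and summarize" loop.<p><pre><code>#!/usr/bin/env python3
"""Nightly job: have an agent summarize recent conversations into a knowledge directory.

Assumes a hypothetical CLI agent invoked as: my-agent --prompt "..."
Adjust AGENT_CMD, CONVO_DIR, and KNOWLEDGE_DIR for whatever harness you actually use.
"""
import subprocess
import time
from pathlib import Path

CONVO_DIR = Path.home() / "agent-convos"     # where conversation transcripts land (assumption)
KNOWLEDGE_DIR = Path.home() / "agent-notes"  # the knowledge directory to keep updated
AGENT_CMD = "my-agent"                       # hypothetical agent CLI name
MAX_AGE_DAYS = 1                             # only look at the last day's conversations


def recent_transcripts():
    """Return transcript files modified within the last MAX_AGE_DAYS."""
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    return [p for p in sorted(CONVO_DIR.glob("*.txt")) if p.stat().st_mtime >= cutoff]


def main():
    KNOWLEDGE_DIR.mkdir(parents=True, exist_ok=True)
    transcripts = recent_transcripts()
    if not transcripts:
        return  # nothing new to summarize
    combined = "\n\n---\n\n".join(p.read_text() for p in transcripts)
    prompt = (
        "Read the following conversation transcripts and append the durable facts, "
        "decisions, and lessons learned to a dated markdown note in "
        f"{KNOWLEDGE_DIR}. Keep it short and deduplicate against existing notes.\n\n"
        f"{combined}"
    )
    # Hand the whole job to the agent; it decides what is worth keeping.
    subprocess.run([AGENT_CMD, "--prompt", prompt], check=True)


if __name__ == "__main__":
    main()</code></pre></p>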
]]></description><pubDate>Sun, 05 Apr 2026 04:31:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47646122</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47646122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47646122</guid></item><item><title><![CDATA[New comment by voidhorse in "Google Workspace CLI"]]></title><description><![CDATA[
<p>Totally. I was just remarking today how funny it is that it was apparently ok for humans to suffer from a dearth of documentation for <i>years</i>, but suddenly, once the machines need it, everyone is frantic to make their tools as usable and well-documented as possible.</p>
]]></description><pubDate>Thu, 05 Mar 2026 03:20:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47257112</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47257112</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47257112</guid></item><item><title><![CDATA[New comment by voidhorse in "Tech companies shouldn't be bullied into doing surveillance"]]></title><description><![CDATA[
<p>This. Most of them weren't exactly bullied.<p>Outside of having a military, several tech companies are probably more powerful than nation-states at this point, and I think some of them realize this. As long as a complete slip into barbarism is still not fully on the table, nations need the data that tech companies have more or less entirely captured and established a complete hegemony around. They also rely directly on their products. I guess the EU is starting to wake up to how problematic this is.</p>
]]></description><pubDate>Thu, 26 Feb 2026 13:56:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47166145</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47166145</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47166145</guid></item><item><title><![CDATA[New comment by voidhorse in "Writers and Their Day Jobs"]]></title><description><![CDATA[
<p>I actually think being a full-time writer is a more feasible profession today than it probably was a few hundred years ago. On the other hand, back in the 1800s random newspapers would pay for serialized stories. That doesn't really happen anymore (save a few surviving exceptions like the New Yorker), but now we have Substack and a ton of other avenues writers can use to keep afloat.</p>
]]></description><pubDate>Thu, 26 Feb 2026 13:52:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47166104</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47166104</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47166104</guid></item><item><title><![CDATA[New comment by voidhorse in "Show HN: Steerling-8B, a language model that can explain any token it generates"]]></title><description><![CDATA[
<p>It makes the black box slightly more transparent. Knowing more in this regard allows us to be more precise—you go from prompt-tweak witchcraft and divination to something closer to a science and a precise method.</p>
]]></description><pubDate>Tue, 24 Feb 2026 03:57:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47132679</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47132679</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47132679</guid></item><item><title><![CDATA[New comment by voidhorse in "Blood test boosts Alzheimer's diagnosis accuracy to 94.5%, clinical study shows"]]></title><description><![CDATA[
<p>There are more personal, practical reasons too.<p>Even though it cannot be reversed or eradicated (yet, let's hope), detection can allow individuals to adopt interventions that help either adjust their lives to better cope with its progression or mitigate some of the detrimental behavioral consequences. In addition, if you have family to care for, it may be an impetus to get certain things in order for them before the later stages of the disease, etc. It's horrible and bleak, but I could certainly see why one might want to know.<p>In the lucky case, it can also relieve anxiety. Even though false negatives may still be possible, receiving a <i>negative</i> result might give people who have anxiety about certain symptoms relief, since they can rule out (rightly or wrongly) a pretty severe disease.</p>
]]></description><pubDate>Tue, 24 Feb 2026 03:49:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47132640</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47132640</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47132640</guid></item><item><title><![CDATA[New comment by voidhorse in "“Car Wash” test with 53 models"]]></title><description><![CDATA[
<p>> not really a reasoning failure<p>And that's precisely why the term "reasoning" was a problematic choice.<p>Most people, when they use the word "reason", mean something akin to logical deduction, and they would call it a reasoning failure, being told, as they are, that "LLMs reason" rather than the more accurate picture you just painted of what actually happens (behavioral basins emerging from the training distribution).</p>
]]></description><pubDate>Tue, 24 Feb 2026 03:41:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47132584</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47132584</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47132584</guid></item><item><title><![CDATA[New comment by voidhorse in "“Car Wash” test with 53 models"]]></title><description><![CDATA[
<p>It's actually very understandable to me that humans would make this kind of error, and we all make errors of this sort all the time, often without even realizing it. If you had the metacognitive awareness to police every action and decision you've ever made with complete logical rigor, you'd be severely disappointed in yourself. One of the stupidest things we can do is overestimate our own intelligence. Reflect for only a second and you'll realize that, while a lot of dumb people exist, a lot of smart ones do too, and in many cases it's hard to choose a single measure of intelligence that would adequately account for the complete range of human goals and successful behavior in relation to those goals.</p>
]]></description><pubDate>Tue, 24 Feb 2026 03:37:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47132561</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47132561</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47132561</guid></item><item><title><![CDATA[New comment by voidhorse in "“Car Wash” test with 53 models"]]></title><description><![CDATA[
<p>You're not making a fair comparison.<p>"What's 2 + 2" is a completely abstract question from mathematics that human beings are thoroughly trained to associate mostly with tests of mastery and intelligence.<p>The car wash question is not such a question. It is framed as a question about a goal-oriented, practical behavior, and in this situation it would be bizarre for a person to ask you this (since a rational person having all the information in the prompt, knowing what cars are, which they own, and knowing what a car wash is, wouldn't ask anybody anything; they'd just drive their car to the car wash).<p>And as someone else noted, there are in fact situations in which it actually can be reasonable to ask for more context on what you mean by "2 + 2". You're just pointing out that human beings use a variety of social mores when interpreting messages, which is precisely why the car wash question would be silly/a trick were a human being to ask it without preceding the question with a statement like "we're going to take an exam to test your logical reasoning".<p>As with LLMs, interpretation is all about context. The people that find this question weird (reasonably) interpret it in a practical context, not in a "this is a logic puzzle" context, because human beings wash cars far more often than they subject themselves to logic puzzles.</p>
]]></description><pubDate>Tue, 24 Feb 2026 03:32:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47132523</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47132523</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47132523</guid></item><item><title><![CDATA[New comment by voidhorse in "“Car Wash” test with 53 models"]]></title><description><![CDATA[
<p>That's precisely what makes it a "trick question" or a "riddle". It's weird precisely <i>because</i> all the information is there. Most people who have functioning brains and complete information don't ask pointless questions (they would, obviously, just drive their car to the car wash)—there's no functional or practical reason for the communication, which is what gives it the status of a puzzle. The syntax, and the exploitation of our tendency to <i>assume</i> questions are asked <i>because</i> information is incomplete, trick us into bringing outside considerations to bear that don't matter.</p>
]]></description><pubDate>Tue, 24 Feb 2026 03:24:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47132476</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=47132476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47132476</guid></item><item><title><![CDATA[New comment by voidhorse in "The Legacy of Daniel Kahneman: A Personal View (2025)"]]></title><description><![CDATA[
<p>I don't know, is a logical deduction like the classic Aristotelian syllogism 99.9999% certain, or is it certain?</p>
]]></description><pubDate>Wed, 11 Feb 2026 14:22:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46975270</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=46975270</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46975270</guid></item><item><title><![CDATA[New comment by voidhorse in "Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs"]]></title><description><![CDATA[
<p>> organization learns to use a possibly/likely productivity improving tool<p>But that's precisely the problem with not backing it with actual measures of meaningful outcomes. The "use more" KPIs have no way of actually discerning whether or not the tool has increased productivity, or whether the immediate gains are worth possible new risks (outages).<p>You don't need to run cover for a C-suite class that has itself become both myopic and incredibly transparent about what it really cares about (cost cutting, removing dependencies on workers who might talk back, etc.).</p>
]]></description><pubDate>Tue, 10 Feb 2026 14:09:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46959923</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=46959923</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46959923</guid></item><item><title><![CDATA[New comment by voidhorse in "Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs"]]></title><description><![CDATA[
<p>I was thinking more about externalities, e.g. some company dumping chemical pollutants into a nearby water system, and not water companies themselves.</p>
]]></description><pubDate>Tue, 10 Feb 2026 14:05:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46959864</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=46959864</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46959864</guid></item><item><title><![CDATA[New comment by voidhorse in "Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs"]]></title><description><![CDATA[
<p>Sounds like every AI KPI I've seen. They are all just "use the solution more" and none actually measure any outcome remotely meaningful or beneficial to what the business is ostensibly doing or producing.<p>It's part of the reason that I view much of this AI push as an effort to brute-force a lowering of expectations, followed by a lowering of wages, followed by a lowering of employment numbers, and ultimately the mass-scale industrialization of digital products, software included.</p>
]]></description><pubDate>Tue, 10 Feb 2026 05:16:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46955698</link><dc:creator>voidhorse</dc:creator><comments>https://news.ycombinator.com/item?id=46955698</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46955698</guid></item></channel></rss>