<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: computably</title><link>https://news.ycombinator.com/user?id=computably</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 21:14:41 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=computably" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by computably in "A few words on DS4"]]></title><description><![CDATA[
<p>Storage is multiple orders of magnitude slower than RAM. Pretty sure it'd be more like 10 s/token than anything reasonable.</p>
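<p>A minimal back-of-envelope sketch, assuming a memory-bandwidth-bound decode where every generated token streams the active weights once; the weight size and bandwidth figures are illustrative assumptions, not measurements of any particular model or drive:</p><pre><code># Back-of-envelope: decode latency when weights stream from each tier.
# Assumes memory-bandwidth-bound decode: each generated token reads all
# active weights once. All numbers are illustrative assumptions.

active_weight_bytes = 40e9  # hypothetical ~40 GB of active weights per token

bandwidth_bytes_s = {
    "NVMe SSD": 7e9,     # high-end PCIe 4.0 sequential read
    "DDR5 RAM": 60e9,    # single socket, order of magnitude
    "HBM3 GPU": 3000e9,  # datacenter accelerator
}

for tier, bw in bandwidth_bytes_s.items():
    print(f"{tier}: ~{active_weight_bytes / bw:.2f} s/token")
</code></pre><p>With those assumed numbers the SSD tier lands around 6 s/token versus hundredths of a second from GPU memory, i.e. a few orders of magnitude apart.</p>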
]]></description><pubDate>Fri, 15 May 2026 19:43:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=48152928</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=48152928</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48152928</guid></item><item><title><![CDATA[New comment by computably in "A few words on DS4"]]></title><description><![CDATA[
<p>Akshually, they said "harness," not "test harness."<p>There's no particular reason "agent harness" can't have practically the same definition, substituting agent-specific concepts for test-specific ones.</p>
]]></description><pubDate>Fri, 15 May 2026 19:02:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=48152495</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=48152495</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48152495</guid></item><item><title><![CDATA[New comment by computably in "A desktop made for one"]]></title><description><![CDATA[
<p>> can't you just let people enjoy things?<p>Dumping slop into the public commons deserves criticism.</p>
]]></description><pubDate>Mon, 11 May 2026 05:09:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=48091258</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=48091258</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48091258</guid></item><item><title><![CDATA[New comment by computably in "Meta Shuts Down End-to-End Encryption for Instagram Messaging"]]></title><description><![CDATA[
<p>"Best" is subjective. But "caring about their users"? Their response to RtR alone shows they care about their margins more than their users.</p>
]]></description><pubDate>Fri, 08 May 2026 23:36:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=48070107</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=48070107</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48070107</guid></item><item><title><![CDATA[New comment by computably in "What can we gain by losing infinity?"]]></title><description><![CDATA[
<p>> We fit continuous theories to discrete measurements--and the good ones fit really well!--but until we can measure it how can we actually know?<p>Well, physicists came up with quantum mechanics precisely because they found ways to measure genuinely discrete phenomena and distinguish them from continuous ones.<p>Understanding the physical universe overlaps with a subset of math, but that overlap shouldn't constrain the abstract tools which may or may not one day be useful for that understanding.</p>
]]></description><pubDate>Thu, 30 Apr 2026 16:56:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47965265</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47965265</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47965265</guid></item><item><title><![CDATA[New comment by computably in "What can we gain by losing infinity?"]]></title><description><![CDATA[
<p>> An instance of a number that has a special meaning.<p>Not really. There are infinitely many infinities. Infinite numbers are not particularly more special than real numbers, complex numbers, matrices, functions/operators, etc.</p>
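<p>For the "infinitely many infinities" point, the standard sketch is Cantor's theorem: every set has strictly more subsets than elements, so iterating the power set starting from the naturals yields an endless, strictly increasing tower of infinite cardinals:</p><pre><code>% Cantor's theorem: |X| &lt; |P(X)| for every set X.
% Starting from the naturals, this gives infinitely many distinct infinities:
\aleph_0 = |\mathbb{N}|
  &lt; 2^{\aleph_0} = |\mathcal{P}(\mathbb{N})|
  &lt; 2^{2^{\aleph_0}} = |\mathcal{P}(\mathcal{P}(\mathbb{N}))|
  &lt; \cdots
</code></pre>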
]]></description><pubDate>Thu, 30 Apr 2026 16:51:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47965198</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47965198</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47965198</guid></item><item><title><![CDATA[New comment by computably in "I have officially retired from Emacs"]]></title><description><![CDATA[
<p>> It's deeply distressing to watch people fall into AI psychosis.<p>It's unclear what you're saying here... Yes, AI-induced psychosis is a real problem and the frontier labs' mitigations are ineffective, to put it mildly. But using AI as a coding tool doesn't have anything to do with psychosis.</p>
]]></description><pubDate>Tue, 28 Apr 2026 17:18:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47937471</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47937471</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47937471</guid></item><item><title><![CDATA[New comment by computably in "Integrated by Design"]]></title><description><![CDATA[
<p>> I suppose they'd be comparably successful.<p>Yes, so, not particularly.</p>
]]></description><pubDate>Tue, 28 Apr 2026 07:48:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47931559</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47931559</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47931559</guid></item><item><title><![CDATA[New comment by computably in "An update on recent Claude Code quality reports"]]></title><description><![CDATA[
<p>A strange view. The trade-off has nothing to do with a specific ideology or notable selfishness. It is an intrinsic limitation of the algorithms, which anybody could reasonably learn about.<p>Sure, the exact point chosen on the trade-off, the decision to change it, and the resulting near-product-breaking bug are much more opaque. But I was responding to somebody who was surprised there's any trade-off at all. Computers don't give you infinite resources, whether or not they're "servers," "in the cloud," or "AI."</p>
]]></description><pubDate>Fri, 24 Apr 2026 09:28:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47887770</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47887770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47887770</guid></item><item><title><![CDATA[New comment by computably in "An update on recent Claude Code quality reports"]]></title><description><![CDATA[
<p>> Also by the way, caching does not make LLM inference linear. It's still quadratic, but the constant in front of the quadratic term becomes a lot smaller.<p>Touché. Still, to a reasonable approximation, caching makes the dominant term linear, or equivalently, it linearly scales the expensive bits.</p>
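<p>A toy FLOP count for one conversation, under generic dense-transformer assumptions; the shapes and per-layer constants below are the usual rough approximations, not any particular model's numbers:</p><pre><code># With a KV cache each token is processed once. Per layer and token, the
# MLP/projection work is roughly 12 * d_model^2 FLOPs (linear in tokens),
# while attention over a context of length i costs about 2 * i * d_model
# (summing to a quadratic term). Shapes below are assumed, GPT-class.

d_model, n_layers = 4096, 32
N = 10_000  # total tokens in the conversation

linear_flops = N * n_layers * 12 * d_model**2                          # O(N)
quad_flops = n_layers * sum(2 * i * d_model for i in range(1, N + 1))  # O(N^2)

print(f"linear (MLP) term    : {linear_flops:.2e}")  # ~6.4e13
print(f"quadratic (attn) term: {quad_flops:.2e}")    # ~1.3e13
</code></pre><p>Under these assumptions the quadratic attention term only overtakes the linear one once the conversation approaches ~12 * d_model tokens (here, around 50k), which is the sense in which the dominant term is linear at typical context lengths.</p>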
]]></description><pubDate>Fri, 24 Apr 2026 09:20:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47887701</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47887701</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47887701</guid></item><item><title><![CDATA[New comment by computably in "DeepSeek v4"]]></title><description><![CDATA[
<p>> That's super interesting, isn't Deepseek in China banned from using Anthropic models? Yet here they're comparing it in terms of internal employee testing.<p>I don't see why DeepSeek would care to respect Anthropic's ToS, or even pretend to. It's not like Anthropic could file and win a lawsuit in China, nor would the US likely ban DeepSeek. And even if the US government had considered a ban, Anthropic is on their shitlist.</p>
]]></description><pubDate>Fri, 24 Apr 2026 09:04:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47887589</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47887589</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47887589</guid></item><item><title><![CDATA[New comment by computably in "Meta tells staff it will cut 10% of jobs"]]></title><description><![CDATA[
<p>> It seems basically impossible for everyone to have overhired, for the simple reason that qualified workers do not appear and disappear from nowhere. There is a population of qualified workers in the software sector, and only new grads and retirement can move the needle significantly.<p>SWEs (and nearly any role, for that matter) can definitely be minted in ways besides graduating with a relevant major. On top of that, there are also H-1Bs and contractors. Plus, "overhiring" doesn't necessarily mean just absolute headcount; it could be compensation, scope, middle managers, etc. The definition of "qualified" is also malleable depending on the incentives.<p>> So, if someone overhired then someone else must have done without, all things considered.<p>Beyond the previous points, this also assumes the supply of labor is independent of the demand, and it clearly is not. As demand increases, so do compensation, outreach, advertising/propaganda, etc. Everybody can overhire simultaneously as a result of pushing the supply of labor to grow.</p>
]]></description><pubDate>Fri, 24 Apr 2026 08:50:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47887491</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47887491</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47887491</guid></item><item><title><![CDATA[New comment by computably in "Palantir employees are starting to wonder if they're the bad guys"]]></title><description><![CDATA[
<p>About as sure as one can be. It's neither logically nor physically impossible, but the claim that trees are conscious is practically unfalsifiable and is not supported by any substantive evidence. It has nothing to do with "fast" or "slow": no matter how you poke or prod or slice or dice a tree, there's nothing that suggests a capacity for consciousness. I would be less surprised if my friend's dog started speaking perfect Chinese with an American accent.</p>
]]></description><pubDate>Fri, 24 Apr 2026 07:28:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47886821</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47886821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47886821</guid></item><item><title><![CDATA[New comment by computably in "Palantir employees are starting to wonder if they're the bad guys"]]></title><description><![CDATA[
<p>> Knowing how the sausage is made does nothing for me.<p>Considering that background is nowadays substantially less common, and probably trending that way indefinitely, this reads more as you being desensitized. It's not like vegans are unaware that people could have a background like yours.<p>> But bringing any moral/religious reasons for it always seemed silly to me. There’s nothing more natural than one animal eating another. Humans evolved from mostly vegetarian monkeys to predators<p>Morals and religion aren't about what's natural; they're about what humans desire. Illness, violence, and deception are all perfectly "natural."</p>
]]></description><pubDate>Fri, 24 Apr 2026 07:12:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47886708</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47886708</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47886708</guid></item><item><title><![CDATA[New comment by computably in "Palantir employees are starting to wonder if they're the bad guys"]]></title><description><![CDATA[
<p>First off, I believe veganism is, probably, morally correct.<p>However, I lead a morally imperfect lifestyle. I get around by driving or being driven in a car, even when it would be only moderately less convenient to walk, bike, or take transit. A few dollars could feed children in poverty for weeks, and I spend a lot more than "a few" dollars on luxuries like travel. By my measure, knowingly choosing <i>not</i> to prevent human suffering on such a scale is massively worse than eating meat, but at the end of the day, I don't consider myself or others in my position to be monsters.<p>> The other thing I see is casting every human as sacred and every non-human living thing as without value, or, at least less value than a single meal.<p>While I believe non-human animals generally have greater moral value than a single meal - the most widely consumed animals are clearly capable of suffering and IMO intelligent enough for most people to instinctively empathize with - I don't think it's particularly strange for humans to view humans as sacred.<p>Many if not most people view morality as rooted in the golden rule, and non-human animals are incapable of making moral considerations the way humans are.<p>Even just considering gut feelings: say we presented a trolley problem with one's close friends and family members on one side and some number of chickens on the other. I would be very surprised at genuine responses opting to save the chickens. Personally, I would sacrifice literally any number of chickens.</p>
]]></description><pubDate>Fri, 24 Apr 2026 07:01:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47886630</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47886630</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47886630</guid></item><item><title><![CDATA[New comment by computably in "Palantir employees are starting to wonder if they're the bad guys"]]></title><description><![CDATA[
<p>> What if they were conscious?<p>Well, they're not.<p>> If they aren’t but still a lifeform, that makes it perfectly okay?<p>According to Jains: No. Violence against plants, insects, and possibly even certain microorganisms is considered unethical.<p>IMO as an irreligious person: Yes. Life is just a particular form of self-sustaining and self-propagating system. Those properties are of little to no moral value.</p>
]]></description><pubDate>Fri, 24 Apr 2026 06:25:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47886333</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47886333</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47886333</guid></item><item><title><![CDATA[New comment by computably in "An update on recent Claude Code quality reports"]]></title><description><![CDATA[
<p>I would say this <i>is</i> abstracting the behavior.</p>
]]></description><pubDate>Fri, 24 Apr 2026 03:37:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47885213</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47885213</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47885213</guid></item><item><title><![CDATA[New comment by computably in "An update on recent Claude Code quality reports"]]></title><description><![CDATA[
<p>> How do you do "due diligence" on an API that frequently makes undocumented changes and only publishes acknowledgement of change after users complain?<p>1. Compute scaling with the length of the sequence is applicable to transformer models in general, i.e. every frontier LLM since ChatGPT's initial release.<p>2. Precisely because undocumented changes happen frequently, users should be all the more incentivized to at least try to have a basic understanding of the product's cost structure.<p>> You're also talking about internal technical implementations of a chat bot. 99.99% of users won't even understand the words that are being used.<p>I think "internal technical implementation" is a stretch. Users don't need to know what a "transformer" is to understand the trade-off. It's not trivial, but it's not incomprehensible to laypersons either.</p>
]]></description><pubDate>Fri, 24 Apr 2026 03:08:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47885019</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47885019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47885019</guid></item><item><title><![CDATA[New comment by computably in "An update on recent Claude Code quality reports"]]></title><description><![CDATA[
<p>I said "prompting with the entire context every time," I think it should be clear even to laypersons that the "prompting" cost refers to what the model provider charges you when you send them a prompt.</p>
]]></description><pubDate>Fri, 24 Apr 2026 02:41:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47884853</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47884853</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47884853</guid></item><item><title><![CDATA[New comment by computably in "An update on recent Claude Code quality reports"]]></title><description><![CDATA[
<p>> I was never under the impression that gaps in conversations would increase costs nor reduce quality. Both are surprising and disappointing.<p>You didn't do your due diligence on an expensive API. A naïve implementation of an LLM chat is going to have O(N^2) costs from prompting with the entire context every time. Caching is needed to bring that down to O(N), but the cache itself takes resources, so evictions have to happen eventually.</p>
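<p>A toy cost model, with hypothetical prices, of why the naive approach is O(N^2) in billed tokens and what caching changes:</p><pre><code># Naive chat: every turn resends the whole history, so billed input tokens
# grow as k + 2k + ... + Tk, i.e. O(T^2) in the number of turns. With a
# prompt cache, the previously seen prefix is billed at a discount while it
# stays resident. The 10x discount below is a hypothetical rate, not any
# specific provider's; a free cache hit would make the sum exactly O(T).

turns, tokens_per_turn = 200, 500
cache_discount = 0.1  # hypothetical cached-input price multiplier

naive = sum(t * tokens_per_turn for t in range(1, turns + 1))
cached = sum(tokens_per_turn + (t - 1) * tokens_per_turn * cache_discount
             for t in range(1, turns + 1))

print(f"naive  billed-token equivalents: {naive:,}")      # 10,050,000
print(f"cached billed-token equivalents: {cached:,.0f}")  # 1,095,000
</code></pre><p>And once an eviction happens, the next turn pays full price to re-ingest the whole prefix, which is exactly the surprise cost you ran into.</p>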
]]></description><pubDate>Thu, 23 Apr 2026 19:39:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47880631</link><dc:creator>computably</dc:creator><comments>https://news.ycombinator.com/item?id=47880631</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47880631</guid></item></channel></rss>