<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: calf</title><link>https://news.ycombinator.com/user?id=calf</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 21:07:16 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=calf" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by calf in "Talking to strangers at the gym"]]></title><description><![CDATA[
<p>Then that's a fallacious argument on several levels; for one, as the reader I am also a human who can tell, and so on.</p>
]]></description><pubDate>Tue, 05 May 2026 01:39:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=48017084</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=48017084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48017084</guid></item><item><title><![CDATA[New comment by calf in "LLMs Are Not a Higher Level of Abstraction"]]></title><description><![CDATA[
<p>A few things are being conflated here, because people are having to learn, re-learn, or re-discover basic computer science. Both formal specifications and informal specifications, such as pseudocode (I balk at imagining how many AI users might not know this term) or natural-language documentation, are forms of abstraction. Programming languages and their underlying models of computation all enable varying degrees of hiding details or emphasizing important ideas and information. Human thought and language, and mathematics, are already examples of abstraction in general. LLMs thus also purport to provide a higher kind of abstraction (via a computational model alternative to Turing machines); the debate is whether it is a good one, whether its hallucinations make it unreliable, etc.</p>
]]></description><pubDate>Mon, 04 May 2026 00:04:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=48002979</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=48002979</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48002979</guid></item><item><title><![CDATA[New comment by calf in "There Will Be a Scientific Theory of Deep Learning"]]></title><description><![CDATA[
<p>I already addressed this type of misargument in my first paragraph. Another way of looking at it: if NNs are so time-bounded, then they cannot be computationally powerful at all, which is really strange.</p>
]]></description><pubDate>Mon, 27 Apr 2026 09:55:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47919626</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47919626</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47919626</guid></item><item><title><![CDATA[New comment by calf in "There Will Be a Scientific Theory of Deep Learning"]]></title><description><![CDATA[
<p>I'm not sure I agree with that. Even technically, my PC is not Turing-complete because its hard drive is finite. Yet there is an informal sense in which Rice's Theorem is still relevant at the PC level of abstraction, as we are all taught that "virus checkers are, strictly speaking, impossible". This is a subtle point that needs further clarification from CS theorists, of which I am not one.<p>Neural networks in general are Turing-style models. Human brains are, in the abstract, Turing complete as well, to take a simple example. LLMs run iteratively in an unbounded loop may be "effectively Turing complete" for the same simple reason.<p>Regardless, any theory purporting to be foundational ought to explicitly address this demarcation. Unless practitioners think computability and formal complexity are not scientific foundations for CS.</p>
]]></description><pubDate>Sat, 25 Apr 2026 10:37:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47900352</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47900352</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47900352</guid></item><item><title><![CDATA[New comment by calf in "There Will Be a Scientific Theory of Deep Learning"]]></title><description><![CDATA[
<p>Is there not some Rice's Theorem equivalent for deep nets? After all, they are machines that are randomly generated, so from classical computer science I would not presume a theory of "what do all deep nets do" to be prima facie logically possible. Nor do I see this addressed in the objections section.</p>
]]></description><pubDate>Sat, 25 Apr 2026 01:49:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47897901</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47897901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47897901</guid></item><item><title><![CDATA[New comment by calf in "GPT-5.5"]]></title><description><![CDATA[
<p>On a ChatGPT 5.3 Plus subscription I find that long informal chats tend to reveal unsatisfactory answers and biases; by this point, after 10 rounds of replies, I end up having to correct it so much that it comes full circle and starts agreeing with my initial arguments. I don't see how this behavior is acceptable or safe for real work. Are programmers and engineers using LLMs completely differently than I am? Because the underlying technology is fundamentally the same.</p>
]]></description><pubDate>Fri, 24 Apr 2026 01:26:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47884430</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47884430</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47884430</guid></item><item><title><![CDATA[New comment by calf in "All phones sold in the EU to have replaceable batteries from 2027"]]></title><description><![CDATA[
<p>Then they're both wrong, for separate reasons, hah.<p>Edit: the person who posted the links is still saying they're right; it seems they had found the wrong link and fixed it.</p>
]]></description><pubDate>Wed, 22 Apr 2026 03:56:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47858775</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47858775</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47858775</guid></item><item><title><![CDATA[New comment by calf in "All phones sold in the EU to have replaceable batteries from 2027"]]></title><description><![CDATA[
<p>It really, really was. It's the most basic type of logical implication.<p>It said: IF BatteryCycles THEN Exempt. BatteryCycles(Apple).<p>By modus ponens in first-order logic this yields:<p>Exempt(Apple)<p>This is basic mathematical literacy by now. The fact that you do not seem aware of it, and are being confidently rude about it, is worth pointing out. Don't do that on HN. This is still a tech forum, so try to respect rational discussion, as we all abide by these shared rules in this space.</p>
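<p>(For the record, the inference pattern above can be written out mechanically. A minimal Lean sketch, where the predicate and constant names are illustrative placeholders, not anything from the regulation itself:)</p><pre><code>-- Modus ponens: from "battery-cycle compliance implies exemption"
-- and "Apple's phones are battery-cycle compliant", conclude exemption.
variable (Phone : Type) (apple : Phone)
variable (BatteryCycles Exempt : Phone → Prop)

example (h : ∀ p, BatteryCycles p → Exempt p)
        (ha : BatteryCycles apple) : Exempt apple :=
  h apple ha
</code></pre>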
]]></description><pubDate>Wed, 22 Apr 2026 03:53:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47858744</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47858744</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47858744</guid></item><item><title><![CDATA[New comment by calf in "All phones sold in the EU to have replaceable batteries from 2027"]]></title><description><![CDATA[
<p>It was not said explicitly, but it was a straightforward implication. The replier then pointed out that the exemption rule is outdated; therefore the implied consequence is wrong, the original line of reasoning was misinformation, and that would be the greater error. Humans.</p>
]]></description><pubDate>Mon, 20 Apr 2026 22:02:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47841584</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47841584</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47841584</guid></item><item><title><![CDATA[New comment by calf in "John Ternus to become Apple CEO"]]></title><description><![CDATA[
<p>What technological advance is there for high-quality, complex software?<p>The advances that made Apple Silicon possible were, fundamentally, TSMC and ARM. These were the material conditions that had to exist for a tech company to capitalize on a new generation of vertically integrated chip design. Now what are the conditions for a next-generation Mac OS? What research advances or software engineering paradigms are mature enough for adoption? The state of Apple software isn't due to mismanagement alone (it is partly that); the success of the hardware has technology nodes as a confounding factor.</p>
]]></description><pubDate>Mon, 20 Apr 2026 21:51:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47841410</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47841410</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47841410</guid></item><item><title><![CDATA[New comment by calf in "Advancing human gut microbiota research by considering gut transit time"]]></title><description><![CDATA[
<p>Just to be clear, I thought the typical advice has been fiber -> protein -> carbs, for blood-sugar reasons; you're saying to frontload fiber/carbs and backload protein for easier digestion? That is interesting; I wonder what studies there are on this.</p>
]]></description><pubDate>Mon, 20 Apr 2026 09:24:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47831983</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47831983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47831983</guid></item><item><title><![CDATA[New comment by calf in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>Sorry, but dang's rationale is just nonsensical at this point. "Spirit of the law" does not mean having no articulable laws, principles, or ethics whatsoever. This moderator seems philosophically confused and would benefit from further education in philosophy, social studies, political-economic theory, and related subjects. Especially if this incident is bothering them so much, it is an opportunity for reflection and learning. It is tempting to think up one's own theories about "bad mobs" and so on, but a lot of these issues are well-trodden in the writings of serious intellectuals and thinkers, so why reinvent the wheel and commit all the familiar pitfalls in the process?</p>
]]></description><pubDate>Sun, 12 Apr 2026 05:41:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47736452</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47736452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47736452</guid></item><item><title><![CDATA[New comment by calf in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>The whole article is about how Sam will say one thing and then deny it, or say the opposite, later.</p>
]]></description><pubDate>Sat, 11 Apr 2026 05:05:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47727581</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47727581</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47727581</guid></item><item><title><![CDATA[New comment by calf in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>That's neat; maybe this is analogous to those Olympiad LLM experiments. I am now curious how long such a simple query takes to run. I've never used Claude Code; are there versions that run for longer to get deeper responses, etc.?</p>
]]></description><pubDate>Wed, 08 Apr 2026 14:04:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47690390</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47690390</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47690390</guid></item><item><title><![CDATA[New comment by calf in "Native Americans had dice 12k years ago"]]></title><description><![CDATA[
<p>It doesn't matter. The first point raised was essentially "well, the dice were just part of a belief system about divinity, so they could not have been more sophisticated than that", and then I said that the article's logical reasoning is actually more interesting than that kind of kneejerk dismissal. Just the one line of thought mentioned in the article is intrinsically interesting, because it posits a kind of forcing argument: if there is evidence of complex behavior, then there is evidence of the complex thought required for it. That is an interesting cognitive-science kind of argument, different from a flat argument of the type "oh, their belief system would have prevented them from developing it".</p>
]]></description><pubDate>Wed, 08 Apr 2026 13:48:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47690205</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47690205</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47690205</guid></item><item><title><![CDATA[New comment by calf in "Native Americans had dice 12k years ago"]]></title><description><![CDATA[
<p>That has barely anything to do with my specific point. The researcher in TFA said <i>if</i> they were doing complex counting <i>then</i> blah blah blah.<p>The general insight is that complex counting would force some kind of Bayesian or probabilistic reasoning, even one that is informal, intuitive, rudimentary, or partly incorrect. Whereas a theory of divining-stone usage would involve very little actual complex counting (maybe they had the tribal equivalent of fortune slips), and so they would not be cognitively challenged to reason about dice. What constitutes complex counting, I don't know; ask the researcher. But IMO it's not outside the realm of possibility, and time and again we have discovered that early Homo sapiens were more cognitively and intellectually sophisticated than these kinds of scientists had assumed. I'm not wedded to this; it would be hard to prove, especially as a hypothesis involving human cognitive constraints and evolution, but I won't dismiss it as completely implausible either. It is an interesting if-then "archaeological cognitive science" argument, that's all.</p>
]]></description><pubDate>Wed, 08 Apr 2026 13:22:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47689894</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47689894</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47689894</guid></item><item><title><![CDATA[New comment by calf in "Native Americans had dice 12k years ago"]]></title><description><![CDATA[
<p>I don't see the point of being confident about this in either direction. I will not assert it for certain, but if they had dice for 12,000 years (12,000!), then being so certain they knew nothing at all even on an intuitive level is too strong a position to take; I don't see that as a safe null/default hypothesis.<p>I had also said "..., THEN it's not implausible", so I don't love that you quoted a strawman in the first place.</p>
]]></description><pubDate>Wed, 08 Apr 2026 13:12:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47689740</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47689740</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47689740</guid></item><item><title><![CDATA[New comment by calf in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>As a curious passerby, what does such a prompt look like? Is it very long? Is it technical with code, or written in natural English?</p>
]]></description><pubDate>Wed, 08 Apr 2026 06:40:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47686221</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47686221</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47686221</guid></item><item><title><![CDATA[New comment by calf in "Native Americans had dice 12k years ago"]]></title><description><![CDATA[
<p>If his evidence of complex counting is convincing, then it's not implausible to me that they soon also had some rudimentary understanding of e.g. coin flip frequencies.</p>
]]></description><pubDate>Wed, 08 Apr 2026 06:35:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47686189</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47686189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47686189</guid></item><item><title><![CDATA[New comment by calf in "12k Tons of Dumped Orange Peel Grew into a Landscape Nobody Expected (2017)"]]></title><description><![CDATA[
<p>That's not the point. The point is that nobody could know for certain at the time of the decision, so it is revisionism to frame the dumping as a legitimate experiment. The outcomes do not justify the action taken at the time, given a reasonable analysis of the ecological risks. The order in which a rational decision is justified matters, contrary to whatever the prior commenter was trying to suggest.</p>
]]></description><pubDate>Tue, 07 Apr 2026 23:30:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682672</link><dc:creator>calf</dc:creator><comments>https://news.ycombinator.com/item?id=47682672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682672</guid></item></channel></rss>