<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: emp17344</title><link>https://news.ycombinator.com/user?id=emp17344</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 07:39:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=emp17344" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by emp17344 in "Tell HN: I'm sick of AI everything"]]></title><description><![CDATA[
<p>I’ve never heard this guy say anything negative about an AI product. Makes it impossible to trust him.</p>
]]></description><pubDate>Thu, 23 Apr 2026 01:55:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47871505</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47871505</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47871505</guid></item><item><title><![CDATA[New comment by emp17344 in "The Abstraction Fallacy: Why AI Can Simulate but Not Instantiate Consciousness"]]></title><description><![CDATA[
<p>I think you’re possibly a bit confused… accepting Searle’s intuition on this thought experiment is agreeing with Searle. In light of this, I don’t understand your comment.</p>
]]></description><pubDate>Thu, 23 Apr 2026 01:17:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47871301</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47871301</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47871301</guid></item><item><title><![CDATA[New comment by emp17344 in "Our eighth generation TPUs: two chips for the agentic era"]]></title><description><![CDATA[
<p>Well, yeah… turns out that goal wasn’t a good indicator for AGI, so we re-evaluated. That’s changing your hypothesis in the face of evidence, not “moving the goalposts” in the fallacious sense.</p>
]]></description><pubDate>Thu, 23 Apr 2026 01:10:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47871254</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47871254</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47871254</guid></item><item><title><![CDATA[New comment by emp17344 in "The Abstraction Fallacy: Why AI Can Simulate but Not Instantiate Consciousness"]]></title><description><![CDATA[
<p>It’s a large survey of academic philosophers on famous philosophical arguments. In this case, the question is asking whether philosophers agree with Searle and believe the Chinese room does not understand Chinese, or disagree with Searle and believe the room does understand Chinese.</p>
]]></description><pubDate>Tue, 21 Apr 2026 01:02:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47843314</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47843314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47843314</guid></item><item><title><![CDATA[New comment by emp17344 in "The Abstraction Fallacy: Why AI Can Simulate but Not Instantiate Consciousness"]]></title><description><![CDATA[
<p>You know that Dennett and Hofstadter aren’t the beginning and end of philosophy of mind, right? Calling Searle’s Room “complete sophistry” is hilariously misguided, given that the vast majority of academic philosophers consider it valid: <a href="https://survey2020.philpeople.org/survey/results/5002#" rel="nofollow">https://survey2020.philpeople.org/survey/results/5002#</a></p>
]]></description><pubDate>Mon, 20 Apr 2026 22:57:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47842160</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47842160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47842160</guid></item><item><title><![CDATA[New comment by emp17344 in "Hyperscalers have already outspent most famous US megaprojects"]]></title><description><![CDATA[
<p>It’s just a classic bubble. They’ve happened before, and while they are irrational, the market sorts itself out eventually.</p>
]]></description><pubDate>Sat, 18 Apr 2026 01:44:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47812479</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47812479</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47812479</guid></item><item><title><![CDATA[New comment by emp17344 in "We reproduced Anthropic's Mythos findings with public models"]]></title><description><![CDATA[
<p>Great, it can compete with the cottage industry dedicated solely to hyping and exaggerating AI performance.</p>
]]></description><pubDate>Fri, 17 Apr 2026 14:55:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47806632</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47806632</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47806632</guid></item><item><title><![CDATA[New comment by emp17344 in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>This is quite hostile. Yes, criticism is valid without an accompanying essay detailing every aspect of the associated environment, because these tools are still quite flawed.</p>
]]></description><pubDate>Thu, 16 Apr 2026 19:52:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47798609</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47798609</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47798609</guid></item><item><title><![CDATA[New comment by emp17344 in "Claude Code Found a Linux Vulnerability Hidden for 23 Years"]]></title><description><![CDATA[
<p>Personally, I’m tired of exaggerated claims and hype peddlers.<p>Edit: Frankly, accusing perceived opponents of being too afraid to see the truth is poor argumentative practice, and the accusation is practically never true.</p>
]]></description><pubDate>Sat, 04 Apr 2026 21:39:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47643733</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47643733</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47643733</guid></item><item><title><![CDATA[New comment by emp17344 in "Components of a Coding Agent"]]></title><description><![CDATA[
<p>>It's pretty easy to get determinism with a simple harness for a well-defined set of tasks with the recent models that are post-trained for tool use.<p>Do you have a source? Claude Code is the only agentic system that seems to work well enough to be useful, and it’s equipped with an absolutely absurd amount of testing and redundancy to get there.</p>
]]></description><pubDate>Sat, 04 Apr 2026 21:25:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47643590</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47643590</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47643590</guid></item><item><title><![CDATA[New comment by emp17344 in "Components of a Coding Agent"]]></title><description><![CDATA[
<p>There’s a lot of redundancy because there has to be; that’s what makes the system useful at all. It’s a hacked-together mess.</p>
]]></description><pubDate>Sat, 04 Apr 2026 20:05:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642828</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47642828</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642828</guid></item><item><title><![CDATA[New comment by emp17344 in "Components of a Coding Agent"]]></title><description><![CDATA[
<p>If you saw the Claude Code leak, you’d know the harness is anything but simple. It’s a sprawling, labyrinthine mess, but it’s required to make LLMs somewhat deterministic and useful as tools.</p>
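<p>Even the “simple” version of that loop needs retry and validation machinery before the output starts to look deterministic. A rough Python sketch of the pattern, assuming a hypothetical call_model/validate API (this is an illustration of the idea, not Claude Code’s actual internals):</p>
<pre><code>import json

def run_tool_step(call_model, validate, prompt, max_retries=3):
    """Re-prompt until the model emits a tool call that parses and
    passes validation, or give up. call_model and validate are
    hypothetical stand-ins for whatever the harness wires in."""
    for _ in range(max_retries):
        raw = call_model(prompt, temperature=0)  # pin sampling for repeatability
        try:
            tool_call = json.loads(raw)          # output must be valid JSON
        except json.JSONDecodeError:
            prompt += "\nYour last reply was not valid JSON. Reply with JSON only."
            continue
        if validate(tool_call):                  # schema / allowlist check
            return tool_call
        prompt += "\nThat tool call failed validation. Try again."
    raise RuntimeError("no usable tool call after retries")
</code></pre>
<p>And that’s before logging, timeouts, sandboxing, and all the other redundancy a production harness layers on top.</p>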
]]></description><pubDate>Sat, 04 Apr 2026 19:25:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642416</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47642416</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642416</guid></item><item><title><![CDATA[New comment by emp17344 in "OpenClaw privilege escalation vulnerability"]]></title><description><![CDATA[
<p>I think you’ve got your answer, then. If nobody can tell you what it’s really used for, it likely doesn’t have any real use cases.</p>
]]></description><pubDate>Fri, 03 Apr 2026 18:26:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47630191</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47630191</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47630191</guid></item><item><title><![CDATA[New comment by emp17344 in "The case for zero-error horizons in trustworthy LLMs"]]></title><description><![CDATA[
<p>Fair enough</p>
]]></description><pubDate>Fri, 03 Apr 2026 04:26:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47623192</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47623192</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47623192</guid></item><item><title><![CDATA[New comment by emp17344 in "The case for zero-error horizons in trustworthy LLMs"]]></title><description><![CDATA[
<p>There are other users in this very thread using inflammatory language to attack this paper and those who find the paper compelling. One user says, quote: “You just can't reason with the anti-LLM group.”<p>In light of this, why was my comment - which was in large part a reaction to the behavior of the users described above - the only one called out here?</p>
]]></description><pubDate>Thu, 02 Apr 2026 20:47:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47619958</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47619958</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47619958</guid></item><item><title><![CDATA[New comment by emp17344 in "The case for zero-error horizons in trustworthy LLMs"]]></title><description><![CDATA[
<p>It matters if you’re curious about whether AGI is possible. Have we really built “thinking machines”, or are these systems just elaborate harnesses that leverage the non-deterministic nature of LLMs?</p>
]]></description><pubDate>Thu, 02 Apr 2026 18:36:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47618392</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47618392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47618392</guid></item><item><title><![CDATA[New comment by emp17344 in "Cursor 3"]]></title><description><![CDATA[
<p>AI labs think they’re building an autonomous replacement for software engineers, while software engineers see these systems as tools that supplement the engineering process.</p>
]]></description><pubDate>Thu, 02 Apr 2026 18:33:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47618357</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47618357</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47618357</guid></item><item><title><![CDATA[New comment by emp17344 in "The case for zero-error horizons in trustworthy LLMs"]]></title><description><![CDATA[
<p>I think this is still useful research that calls into question how “smart” these models are. If the model needs a separate tool to solve a problem, has the model really solved the problem, or just outsourced it to a harness that it’s been trained - via reinforcement learning - to call upon?</p>
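<p>A toy illustration of the pattern in Python (hypothetical names, not any real product’s code): the model “solves” the problem by emitting a tool request, and the harness, not the model, produces the answer.</p>
<pre><code>import operator

# The tools the harness exposes; the model only picks one and
# routes arguments to it.
TOOLS = {
    "add": operator.add,
    "mul": operator.mul,
}

def dispatch(tool_call):
    """Run the tool the model asked for and hand the result back."""
    fn = TOOLS[tool_call["tool"]]
    return fn(*tool_call["args"])  # the arithmetic happens here, in the harness

# The model "solves" 2 + 3 by emitting the request below:
print(dispatch({"tool": "add", "args": [2, 3]}))  # -> 5
</code></pre>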
]]></description><pubDate>Thu, 02 Apr 2026 18:23:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47618214</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47618214</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47618214</guid></item><item><title><![CDATA[New comment by emp17344 in "The case for zero-error horizons in trustworthy LLMs"]]></title><description><![CDATA[
<p>[flagged]</p>
]]></description><pubDate>Thu, 02 Apr 2026 18:13:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47618083</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47618083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47618083</guid></item><item><title><![CDATA[New comment by emp17344 in "Slop is not necessarily the future"]]></title><description><![CDATA[
<p>That is, in fact, how it comes across. You’re labeling perceived opponents as “emotional” and “dismissive”.</p>
]]></description><pubDate>Tue, 31 Mar 2026 21:44:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47593887</link><dc:creator>emp17344</dc:creator><comments>https://news.ycombinator.com/item?id=47593887</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47593887</guid></item></channel></rss>