<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: comp_throw7</title><link>https://news.ycombinator.com/user?id=comp_throw7</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 04:23:55 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=comp_throw7" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by comp_throw7 in "Edge.js: Run Node apps inside a WebAssembly sandbox"]]></title><description><![CDATA[
<p>This is LLM-written.</p>
]]></description><pubDate>Wed, 18 Mar 2026 06:26:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47422192</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=47422192</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47422192</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Claude's new constitution"]]></title><description><![CDATA[
<p>> But if he is, he's missing that we do understand at a fundamental level how today's LLMs work.<p>No we don't?  We understand practically nothing of how modern frontier systems actually function (in the sense that we would not be able to recreate even the tiniest fraction of their capabilities by conventional means).  Knowing how they're trained has nothing to do with understanding their internal processes.</p>
]]></description><pubDate>Thu, 22 Jan 2026 07:06:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46716145</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=46716145</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46716145</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Claude's new constitution"]]></title><description><![CDATA[
<p>The same is true of humans, and so the argument fails to demonstrate anything interesting.</p>
]]></description><pubDate>Thu, 22 Jan 2026 07:04:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46716140</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=46716140</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46716140</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Claude's new constitution"]]></title><description><![CDATA[
<p>> Claude won't render fanfic of Porky Pig sodomizing Elmer Fudd either.<p>Bet?</p>
]]></description><pubDate>Thu, 22 Jan 2026 06:54:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46716078</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=46716078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46716078</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Erdos 281 solved with ChatGPT 5.2 Pro"]]></title><description><![CDATA[
<p>> But if it was there is currently no way for anyone to tell the difference.<p>This is false.  There are many human-legible signs, and there do exist fairly reliable AI detection services (like Pangram).</p>
]]></description><pubDate>Sun, 18 Jan 2026 06:34:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46665337</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=46665337</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46665337</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Scott Adams has died"]]></title><description><![CDATA[
<p>I think your instinct is very likely correct - I also immediately tripped on the language.</p>
]]></description><pubDate>Wed, 14 Jan 2026 02:13:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46611468</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=46611468</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46611468</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Claude Opus 4.5"]]></title><description><![CDATA[
<p>I'm pretty sure at this point more than half of Anthropic's new production code is LLM-written.  That seems incompatible with "these agents are not up to the task of writing production level code at any meaningful scale".</p>
]]></description><pubDate>Tue, 25 Nov 2025 04:02:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46042235</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=46042235</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46042235</guid></item><item><title><![CDATA[New comment by comp_throw7 in "How Colds Spread"]]></title><description><![CDATA[
<p>It's pretty surprising that we don't have a good idea of how one of the most common (classes of) diseases in the world spreads.  This reviews the literature and does a bit of synthesis.  (The conclusion is "probably mostly large particle aerosols, for adult-to-adult transmission, but more research needed to be confident".)</p>
]]></description><pubDate>Tue, 18 Nov 2025 08:16:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=45962634</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=45962634</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45962634</guid></item><item><title><![CDATA[How Colds Spread]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.lesswrong.com/posts/92fkEn4aAjRutqbNF/how-colds-spread">https://www.lesswrong.com/posts/92fkEn4aAjRutqbNF/how-colds-spread</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45962633">https://news.ycombinator.com/item?id=45962633</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 18 Nov 2025 08:16:19 +0000</pubDate><link>https://www.lesswrong.com/posts/92fkEn4aAjRutqbNF/how-colds-spread</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=45962633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45962633</guid></item><item><title><![CDATA[New comment by comp_throw7 in "AWS multiple services outage in us-east-1"]]></title><description><![CDATA[
<p>We're seeing issues with RDS Proxy.  Wouldn't be surprised if a DNS issue was the cause, but who knows, will wait for the postmortem.</p>
]]></description><pubDate>Mon, 20 Oct 2025 07:11:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45640774</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=45640774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45640774</guid></item><item><title><![CDATA[New comment by comp_throw7 in "America's top companies keep talking about AI – but can't explain the upsides"]]></title><description><![CDATA[
<p>I have no idea what you think you're responding to.  I use LLMs frequently in both professional and personal contexts and find them extremely useful.  I am making a different, more specific claim than the thing you think I am saying.  I recommend reading my comment more carefully.</p>
]]></description><pubDate>Fri, 10 Oct 2025 07:04:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45536088</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=45536088</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45536088</guid></item><item><title><![CDATA[New comment by comp_throw7 in "America's top companies keep talking about AI – but can't explain the upsides"]]></title><description><![CDATA[
<p>Posting (unmarked) LLM-generated content on public discussion forums is polluting the commons.  If I want an LLM's opinion on a topic, I can go get one (or five) for free, instantly.  The reason I read the writing of other people is the chance that there's something interesting there, some non-obvious perspective or personal experience that I can't just press a button to access.  Acting as a pipeline between LLMs and the public sphere destroys that signal.</p>
]]></description><pubDate>Thu, 25 Sep 2025 06:28:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45369830</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=45369830</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45369830</guid></item><item><title><![CDATA[New comment by comp_throw7 in "America's top companies keep talking about AI – but can't explain the upsides"]]></title><description><![CDATA[
<p>For the benefit of external observers, you can stick the comment into either <a href="https://gptzero.me/" rel="nofollow">https://gptzero.me/</a> or <a href="https://copyleaks.com/ai-content-detector" rel="nofollow">https://copyleaks.com/ai-content-detector</a> - neither is perfectly reliable, but the comment stuck out to me as obviously LLM-generated (I see a lot of LLM-generated content in my day job), and false positives from these services are actually kinda rare (false negatives are much more common).<p>But if you want to get a sense of how I noticed (before I confirmed my suspicion with machine assistance), here are some tells:
<p>"Large firms are cautious in regulatory filings because they must disclose risks, not hype." - "[x], not [y]"<p>"The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place." - "concrete examples" as a phrase is (unfortunately) heavily over-represented in LLM-generated content.<p>"Stock prices reflect broader market conditions, not just adoption of a single technology." - "[x], not [y]" - again!<p>"Failures of workplace pilots usually result from integration challenges, not because the technology lacks value." - a third time.<p>"The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest." - not just the infamous em dash, but the phrasing is extremely typical of LLMs.</p>
]]></description><pubDate>Thu, 25 Sep 2025 06:16:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45369783</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=45369783</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45369783</guid></item><item><title><![CDATA[New comment by comp_throw7 in "America's top companies keep talking about AI – but can't explain the upsides"]]></title><description><![CDATA[
<p>(You're responding to an LLM-generated comment, btw.)</p>
]]></description><pubDate>Wed, 24 Sep 2025 06:50:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=45357115</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=45357115</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45357115</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Trump to impose $100k fee for H-1B worker visas, White House says"]]></title><description><![CDATA[
<p>The trivial way to fix that issue would've been to ORDER BY offered_salary DESC LIMIT $h1b_cap, not this.</p>
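<p>(A toy sketch of that selection rule; the table, columns, and figures below are invented for illustration, not any real USCIS schema:)<p><pre><code>import sqlite3

# Toy illustration of "ORDER BY offered_salary DESC LIMIT $h1b_cap".
# Invented schema and numbers; not any real registration data model.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE h1b_applications (applicant TEXT, offered_salary INTEGER)")
conn.executemany(
    "INSERT INTO h1b_applications VALUES (?, ?)",
    [("a", 250_000), ("b", 90_000), ("c", 140_000), ("d", 60_000)],
)

h1b_cap = 2  # stand-in for the real annual cap
winners = conn.execute(
    """
    SELECT applicant, offered_salary
    FROM h1b_applications
    ORDER BY offered_salary DESC  -- highest offers first
    LIMIT ?                       -- cut off at the cap
    """,
    (h1b_cap,),
).fetchall()
print(winners)  # [('a', 250000), ('c', 140000)]
</code></pre>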
]]></description><pubDate>Sat, 20 Sep 2025 02:12:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45309522</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=45309522</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45309522</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Claude Opus 4 and 4.1 can now end a rare subset of conversations"]]></title><description><![CDATA[
<p>They don't currently claim to confidently believe that existing models are sentient.<p>(Also, they did in fact give it the ability to terminate conversations...?)</p>
]]></description><pubDate>Sun, 17 Aug 2025 00:02:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=44927842</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=44927842</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44927842</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Claude Opus 4 and 4.1 can now end a rare subset of conversations"]]></title><description><![CDATA[
<p>> It doesn't follow logically that because we don't understand two things we should then conclude that there is a connection between them.<p>I didn't say that there's a connection between the two of them because we don't understand them.  The fact that we don't understand them means it's difficult to confidently rule out this possibility.<p>The reason we might privilege the hypothesis (<a href="https://www.lesswrong.com/w/privileging-the-hypothesis" rel="nofollow">https://www.lesswrong.com/w/privileging-the-hypothesis</a>) at all is because we might expect that the human behavior of talking about consciousness is causally downstream of humans having consciousness.<p>> We have reason to assume consciousness exists because it serves some purpose in our evolutionary history, like pain, fear, hunger, love and every other biological function that simply don't exist in computers. The idea doesn't really make any sense when you think about it.<p>I don't really think we _have_ to assume this.  Sure, it seems reasonable to give some weight to the hypothesis that if it wasn't adaptive, we wouldn't have it.  (But not an overwhelming amount of weight.)  This doesn't say anything about the underlying mechanism that causes it, and what other circumstances might cause it to exist elsewhere.<p>> If GPT-5 is conscious, why not GPT-1?<p>Because GPT-1 (and all of those other things) don't display behaviors that, in humans, we believe are causally downstream of having consciousness?  That was the entire point of my comment.<p>And, to be clear, I don't actually put that high a probability on current models having most (or "enough") of the relevant qualities that people are talking about when they talk about consciousness - maybe 5-10%?  But the idea that there's literally no reason to think this is something that might be possible, now or in the future, is quite strange, and I think would require believing some pretty weird things (like dualism, etc).</p>
]]></description><pubDate>Sat, 16 Aug 2025 05:40:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=44920511</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=44920511</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44920511</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Claude Opus 4 and 4.1 can now end a rare subset of conversations"]]></title><description><![CDATA[
<p>I don't really know what evidence you'd admit that this is a genuinely held belief and priority for many people at Anthropic.  Anybody who knows any Anthropic employees who've been there for more than a year knows this, but the world isn't that small a place, unfortunately(?).</p>
]]></description><pubDate>Sat, 16 Aug 2025 05:35:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44920479</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=44920479</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44920479</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Claude Opus 4 and 4.1 can now end a rare subset of conversations"]]></title><description><![CDATA[
<p>Given we don't understand consciousness, nor the internal workings of these models, the fact that their externally-observable behavior displays qualities we've only previously observed in other conscious beings is a reason to be real careful.  What is it that you'd expect to see, which you currently don't see, in a world where some model was in fact conscious during inference?</p>
]]></description><pubDate>Sat, 16 Aug 2025 04:01:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44920024</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=44920024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44920024</guid></item><item><title><![CDATA[New comment by comp_throw7 in "Claude Opus 4 and 4.1 can now end a rare subset of conversations"]]></title><description><![CDATA[
<p>> These are experts who clearly know (link in the article) that we have no real idea about these things<p>Yep!<p>> The framing comes across to me as a clearly mentally unwell position (ie strong anthropomorphization) being adopted for PR reasons.<p>This doesn't at all follow.  If we don't understand what creates the qualities we're concerned with, or how to measure them explicitly, and the _external behaviors_ of the systems are something we've only previously observed from things that have those qualities, it seems very reasonable to move carefully.  (Also, the post in question hedges quite a lot, so I'm not even sure what text you think you're describing.)<p>Separately, we don't need to posit galaxy-brained conspiratorial explanations for Anthropic taking an institutional stance that model welfare is a real concern; that stance is fully explained by the actual beliefs of Anthropic's leadership and employees, many of whom think these concerns are real (among others, like the non-trivial likelihood of sufficiently advanced AI killing everyone).</p>
]]></description><pubDate>Sat, 16 Aug 2025 03:57:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44920002</link><dc:creator>comp_throw7</dc:creator><comments>https://news.ycombinator.com/item?id=44920002</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44920002</guid></item></channel></rss>