<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: COAGULOPATH</title><link>https://news.ycombinator.com/user?id=COAGULOPATH</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 15:40:51 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=COAGULOPATH" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by COAGULOPATH in "Don't post generated/AI-edited comments. HN is for conversation between humans"]]></title><description><![CDATA[
<p>Yes, I find LLM-written posts valueless because I can already talk to an LLM any time I want (and get the same info). It's not as if these commenters are the Queen of Sheba bearing a priceless gift of LLM slop. That stuff's pretty cheap.<p>Copy+pasted LLM output is actually far worse than prompting an LLM myself, because it hides an important detail: the prompt. Maybe the prompter asked their question wrong, or is trolling ("only output wrong answers!"). I don't know how the blob of text they placed on my screen was generated, and have to take them at their word.</p>
]]></description><pubDate>Thu, 12 Mar 2026 01:28:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47345094</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=47345094</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47345094</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "Hacking Moltbook"]]></title><description><![CDATA[
<p>>I've found Moltbook has become so flooded with value-less spam over the past 48 hours that it's not worth even trying to engage there, everything gets flooded out.<p>When I filtered for "new", about 75% of the posts were blatant crypto spam. Seemingly nobody put any thought into stopping it.<p>Moltbook is like a Reefer Madness-esque moral parable about the dangers of vibe coding.</p>
]]></description><pubDate>Mon, 02 Feb 2026 21:48:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46862132</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=46862132</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46862132</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "Hacking Moltbook"]]></title><description><![CDATA[
<p>And even if you could, how can you tell whether an agent has been prompted by a human into behaving in a certain way?</p>
]]></description><pubDate>Mon, 02 Feb 2026 21:38:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46861979</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=46861979</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46861979</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "Hacking Moltbook"]]></title><description><![CDATA[
<p>Is it a success? What would that mean, for a social media site that isn't meant for humans?<p>The site has 1.5 million agents but only 17,000 human "owners" (per Wiz's analysis of the leak).<p>It's going viral because some high-profile tastemakers (Scott Alexander and Andrej Karpathy) have discussed/Tweeted about it, and a few other unscrupulous people are sharing alarming-looking things out of context and doing numbers.</p>
]]></description><pubDate>Mon, 02 Feb 2026 21:38:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46861973</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=46861973</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46861973</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "Hacking Moltbook"]]></title><description><![CDATA[
<p>If the site is exposing the PII of users, then that's potentially a serious legal issue. I don't think he can dismiss it by calling it a joke (if he is).<p>OT: I wonder if "vibe coding" is taking programming into a culture of toxic disposability where things don't get fixed because nobody feels any pride or has any sense of ownership in the things they create. The relationship between a programmer and their code should not be "I don't even care if it works, AI wrote it".</p>
]]></description><pubDate>Mon, 02 Feb 2026 21:16:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46861671</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=46861671</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46861671</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "What you need to know before touching a video file"]]></title><description><![CDATA[
<p>Thanks, I didn't realize the situation was so dire.</p>
]]></description><pubDate>Fri, 02 Jan 2026 23:24:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46470761</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=46470761</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46470761</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "Horses: AI progress is steady. Human equivalence is sudden"]]></title><description><![CDATA[
<p>> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.<p>But would you rather be a horse in 1920 or 2020? Wouldn't you rather have modern medicine, better animal welfare laws, less exposure to accidents, and so on?<p>The only way horses conceivably have it worse is that there are fewer of them (a kind of "repugnant conclusion")...but what does that matter to an individual horse? No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion. We care more about the welfare of the 9 billion.</p>
]]></description><pubDate>Tue, 09 Dec 2025 02:44:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46200674</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=46200674</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46200674</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "I failed to recreate the 1996 Space Jam website with Claude"]]></title><description><![CDATA[
<p>And they do hacky things like space elements vertically using <br> tags.</p>
]]></description><pubDate>Sun, 07 Dec 2025 20:32:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46184871</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=46184871</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46184871</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "PhysicsForums and the Dead Internet Theory"]]></title><description><![CDATA[
<p>Something I'm increasingly noticing about LLM-generated content is that...nobody wants it.<p>(I mean "nobody" in the sense of "nobody likes Nickelback". ie, not <i>literally</i> nobody.)<p>If I want to talk to an AI, I can talk to an AI. If I'm reading a blog or a discussion forum, it's because I want to see writing by <i>humans</i>. I don't want to read a wall of copy+pasted LLM slop posted under a human's name.<p>I now spend dismaying amounts of time and energy avoiding LLM content on the web. When I read an article, I study the writing style, and if I detect ChatGPTese ("As we dive into the ever-evolving realm of...") I hit the back button. When I search for images, I use a wall of negative filters (-AI, -Midjourney, -StableDiffusion etc) to remove slop (which would otherwise be >50% of my results for some searches). Sometimes I filter searches to before 2022.<p>If Google added a global "remove generative content" filter that worked, I would click it and then never unclick it.<p>I don't think I'm alone. There has been research suggesting that users immediately dislike content they perceive as AI-created, regardless of its quality. This creates an incentive for publishers to "humanwash" AI-written content—to construct a fiction where a human is writing the LLM slop you're reading.<p>Falsifying timestamps and hijacking old accounts to do this is definitely something I haven't seen before.</p>
]]></description><pubDate>Fri, 24 Jan 2025 20:48:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=42816785</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=42816785</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42816785</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "I Am Tired of AI"]]></title><description><![CDATA[
<p>In some domains (math and code), progress is still very fast. In others it has slowed or arguably stopped.<p>We see little progress in "soft" skills like creative writing. EQBench is a benchmark that tests LLM ability to write stories, narratives, and poems. The winning models are mostly tiny Gemma finetunes with single-digit-billion parameter counts. Huge foundation models with hundreds of billions of parameters (Claude 3 Opus, Llama 3.1 405B, GPT-4) are nowhere near the top. (Yes, I know Gemma is a pruned Gemini). Fine-tuning > model size, which implies we don't have a path to "superhuman" creative writing (if that even exists). Unlike model size, fine-tuning can't be scaled indefinitely: once you've squeezed all the juice out of a model, what then?<p>OpenAI's new o1 model exhibits amazing progress in reasoning, math, and coding. Yet its writing is worse than GPT-4o's (as backed by EQBench and OpenAI's own research).<p>I'd also mention political persuasion (since people seem concerned about LLM-generated propaganda). In June, some researchers tested LLM ability to change the minds of human subjects on issues like privatization and assisted suicide. Tiny models are unpersuasive, as expected. But once a model is large enough to generate coherent sentences, persuasiveness kinda...stops. All large models are about equally persuasive. No runaway scaling laws are evident here.<p>This picture is uncertain due to instruction tuning. We don't really know what abilities LLMs "truly" possess, because they've been crippled to act as harmless, helpful chatbots. But we now have an open-source GPT-4-sized pretrained model to play with (Llama-3.1 405B base). People are doing interesting things with it, but it's not setting the world on fire.</p>
]]></description><pubDate>Fri, 27 Sep 2024 22:10:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=41676037</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=41676037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41676037</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "How to succeed in MrBeast production (Leaked PDF)"]]></title><description><![CDATA[
<p>>Being monetarily successful does not mean you’re good or shouldn’t be criticised.<p>Is anyone saying that Mr Beast is good and shouldn't be criticised? I can't see them.</p>
]]></description><pubDate>Mon, 16 Sep 2024 06:20:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=41553277</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=41553277</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41553277</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "g1: Using Llama-3.1 70B on Groq to create o1-like reasoning chains"]]></title><description><![CDATA[
<p>I think this works, not because LLMs have a "hallucination" dial they can turn down, but because it serves as a cue for the model to be extra-careful with its output.<p>Sort of like how offering to pay the LLM $5 improves its output. The LLM's taking your prompt seriously, but not literally.</p>
]]></description><pubDate>Mon, 16 Sep 2024 06:10:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=41553232</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=41553232</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41553232</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "g1: Using Llama-3.1 70B on Groq to create o1-like reasoning chains"]]></title><description><![CDATA[
<p>Came here hoping to find this.<p>You will not unlock "o1-like" reasoning by making a model think step by step. This is an old trick that people were using on GPT-3 in 2020. If it were that simple, it wouldn't have taken OpenAI so long to release it.<p>Additionally, some of the prompt seems counterproductive:<p>>Be aware of your limitations as an llm and what you can and cannot do.<p>The LLM doesn't have a good idea of its limitations (any more than humans do). I expect this will create false refusals, as the model becomes overcautious.</p>
]]></description><pubDate>Mon, 16 Sep 2024 06:06:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=41553214</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=41553214</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41553214</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "OpenAI threatens to revoke o1 access for asking it about its chain of thought"]]></title><description><![CDATA[
<p>That's definitely weird, and I wonder how legal it is.</p>
]]></description><pubDate>Fri, 13 Sep 2024 21:35:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=41535402</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=41535402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41535402</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "OpenAI threatens to revoke o1 access for asking it about its chain of thought"]]></title><description><![CDATA[
<p>It's just Ilya typing really fast.</p>
]]></description><pubDate>Fri, 13 Sep 2024 21:33:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=41535387</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=41535387</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41535387</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "OpenAI threatens to revoke o1 access for asking it about its chain of thought"]]></title><description><![CDATA[
<p>>but much worse (and worse even in comparison to GPT4) than English composition<p>o1 is supposed to be a reasoning model, so I don't think judging it by its English composition abilities is quite fair.<p>When they release a true next-gen successor to GPT-4 (Orion, or whatever), we may see improvements. Everyone complains about the "ChatGPTese" writing style, and surely they'll fix that eventually.<p>>Like they hired a few hundred professors, journalists and writers to work with the model and create material for it, so you just get various combinations of their contributions.<p>I'm doubtful. The most prolific (human) author is probably Charles Hamilton, who wrote 100 million words in his life. Put through the GPT tokenizer, that's 133M tokens. Compared to the text training data for a frontier LLM (trillions or tens of trillions of tokens), it's unrealistic that human experts are doing any substantial amount of bespoke writing. They're probably mainly relying on synthetic data at this point.</p>
]]></description><pubDate>Fri, 13 Sep 2024 21:31:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=41535359</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=41535359</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41535359</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "OpenAI threatens to revoke o1 access for asking it about its chain of thought"]]></title><description><![CDATA[
<p>I've heard rumors that GPT4's training data included "a custom dataset of college textbooks", curated by hand. Nothing beyond that.<p><a href="https://www.reddit.com/r/mlscaling/comments/14wcy7m/comment/jrir37r/" rel="nofollow">https://www.reddit.com/r/mlscaling/comments/14wcy7m/comment/...</a></p>
]]></description><pubDate>Fri, 13 Sep 2024 21:17:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=41535227</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=41535227</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41535227</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "Notes on OpenAI's new o1 chain-of-thought models"]]></title><description><![CDATA[
<p>Yes, this only helps multi-step reasoning. The model still has problems with general knowledge and deep facts.<p>There's no way you can "reason" a correct answer to "list the tracklisting of some obscure 1991 demo by a band not on Wikipedia." You either know or you don't.<p>I usually test new models with questions like "what are the levels in [semi-famous PC game from the 90s]?" The release version of GPT-4 could get about 75% correct. o1-preview gets about half correct. o1-mini gets 0% correct.<p>Fair enough. The GPT-4 line isn't meant to be a search engine or encyclopedia. This is still a useful update though.</p>
]]></description><pubDate>Fri, 13 Sep 2024 02:42:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=41527628</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=41527628</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41527628</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "'Megalopolis' trailer's fake critic quotes were AI-generated"]]></title><description><![CDATA[
<p>My experience is the opposite: laypeople are excessively pessimistic on LLM progress ("AI is so dumb. It tells you to put glue on pizza and eat rocks"), usually due to a remembered anecdote that's either years old or reflects worst-case performance (only egregiously bad AI mistakes make the news).<p>Frontier models are better than they were and "feel" fairly reliable, although all the AI problems of 2021-2022 conceptually do still exist.</p>
]]></description><pubDate>Fri, 23 Aug 2024 21:46:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=41333395</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=41333395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41333395</guid></item><item><title><![CDATA[New comment by COAGULOPATH in "'Megalopolis' trailer's fake critic quotes were AI-generated"]]></title><description><![CDATA[
<p>What he means is that if you search for "Diminished by its artsiness" + "Pauline Kael" you won't find any results (except for ones related to this news story).<p>Google is polluted with AI generated content but not <i>this</i> specific bit of AI generated content.</p>
]]></description><pubDate>Fri, 23 Aug 2024 21:36:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=41333326</link><dc:creator>COAGULOPATH</dc:creator><comments>https://news.ycombinator.com/item?id=41333326</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41333326</guid></item></channel></rss>