<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: poisonfountain</title><link>https://news.ycombinator.com/user?id=poisonfountain</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 10 May 2026 08:50:55 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=poisonfountain" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by poisonfountain in "AlphaEvolve: Gemini-powered coding agent scaling impact across fields"]]></title><description><![CDATA[
<p>I disagree, but it wasn't me who downvoted, just so you know.</p>
]]></description><pubDate>Fri, 08 May 2026 02:08:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=48057645</link><dc:creator>poisonfountain</dc:creator><comments>https://news.ycombinator.com/item?id=48057645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48057645</guid></item><item><title><![CDATA[New comment by poisonfountain in "AlphaEvolve: Gemini-powered coding agent scaling impact across fields"]]></title><description><![CDATA[
<p>Ok, but your job is clearly not a good sample of a "job most mortals work on".</p>
]]></description><pubDate>Fri, 08 May 2026 02:07:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=48057637</link><dc:creator>poisonfountain</dc:creator><comments>https://news.ycombinator.com/item?id=48057637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48057637</guid></item><item><title><![CDATA[New comment by poisonfountain in "AlphaEvolve: Gemini-powered coding agent scaling impact across fields"]]></title><description><![CDATA[
<p>You're downplaying the AI lobby here. They're tearing down copyright laws, something that seemed impossible just a couple of years ago. Gutting privacy laws is just the next step.<p>We're also seeing a cultural shift around this. People now bring "AI notetakers" to Zoom calls without even asking for your permission. People are already acting like privacy laws don't exist anymore, so it's going to be even easier for the AI lobby to take them down now. Just like piracy normalized copyright infringement, opening the path to the current rulings around "fair training".</p>
]]></description><pubDate>Thu, 07 May 2026 19:37:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=48053824</link><dc:creator>poisonfountain</dc:creator><comments>https://news.ycombinator.com/item?id=48053824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48053824</guid></item><item><title><![CDATA[New comment by poisonfountain in "AlphaEvolve: Gemini-powered coding agent scaling impact across fields"]]></title><description><![CDATA[
<p>>I think the rest of us should rest easy knowing that LLM's can't (and maybe were never meant to) tackle the tacit-knowledge-filled, human-system-centric, ambiguously-defined-problem-space jobs most mortals work<p>I don't believe that anymore, to be honest. Models are starting to get good at ambiguity. Claude Code now asks me when something is ambiguous. Soon, all meetings will be recorded, transcribed, and stored in a well-indexed place for agents to search when faced with ambiguity (free startup idea here!). If they can ask you now, they'll be able to search for the answers themselves once that's possible. In fact, they already do this if you have a well-documented Notion/Confluence; it's just that almost nobody does.<p>It's probably harder to RL for "identify ambiguity" than to RL for performant algorithms, sure, but it's not impossible and it's in the works. It's just a matter of time now.</p>
]]></description><pubDate>Thu, 07 May 2026 16:58:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=48051772</link><dc:creator>poisonfountain</dc:creator><comments>https://news.ycombinator.com/item?id=48051772</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48051772</guid></item><item><title><![CDATA[New comment by poisonfountain in "AI is not a coworker, it's an exoskeleton"]]></title><description><![CDATA[
<p>We should be fighting back. So far I have been using Poison Fountain[1] on many of my websites to feed LLM scrapers gibberish. Its effectiveness is backed by a study from Anthropic showing that a small batch of bad samples can corrupt whole models[2].<p>Disclaimer: I'm not affiliated with Poison Fountain or its creators, I just found it useful.<p>[1] <a href="https://news.ycombinator.com/item?id=46926485">https://news.ycombinator.com/item?id=46926485</a><p>[2] <a href="https://www.anthropic.com/research/small-samples-poison" rel="nofollow">https://www.anthropic.com/research/small-samples-poison</a></p>
]]></description><pubDate>Fri, 20 Feb 2026 19:48:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47092952</link><dc:creator>poisonfountain</dc:creator><comments>https://news.ycombinator.com/item?id=47092952</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47092952</guid></item></channel></rss>