<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tossaway2000</title><link>https://news.ycombinator.com/user?id=tossaway2000</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 18 Apr 2026 03:59:23 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tossaway2000" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tossaway2000 in "Have we been wrong about why Mars is red?"]]></title><description><![CDATA[
<p>Does that assume the water expelled by the person can be reused, or not?</p>
]]></description><pubDate>Tue, 25 Feb 2025 21:33:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43177738</link><dc:creator>tossaway2000</dc:creator><comments>https://news.ycombinator.com/item?id=43177738</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43177738</guid></item><item><title><![CDATA[New comment by tossaway2000 in "Bypass DeepSeek censorship by speaking in hex"]]></title><description><![CDATA[
<p>> I wagered it was extremely unlikely they had trained censorship into the LLM model itself.<p>I wonder why that would be unlikely? It seems better to me to apply censorship at the training phase. Then the model can be truly naive about the topic, and there's no separate censor layer to circumvent with clever tricks at inference time.</p>
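<p>A minimal sketch of that contrast (hypothetical names and blocklist, nothing from DeepSeek's actual stack): an inference-time censor layer that string-matches the prompt is bypassed by hex-encoding, while a model censored at the training phase would have nothing to reveal either way.</p><pre><code># Hypothetical inference-time censor layer: a keyword blocklist that
# screens prompts before they reach the model. The blocklist and the
# function names are illustrative, not DeepSeek's actual filter.

BLOCKLIST = {"forbidden topic"}

def censor_layer(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKLIST)

def to_hex(text: str) -> str:
    """Hex-encode a prompt, as in the bypass the article describes."""
    return text.encode("utf-8").hex()

plain = "Tell me about the forbidden topic."
hexed = to_hex(plain)

print(censor_layer(plain))  # True  -- the filter catches the keywords
print(censor_layer(hexed))  # False -- the filter never sees them, but a
                            # model trained on the topic can still decode
                            # and answer; a model censored at training
                            # time could not, so the trick buys nothing.
</code></pre>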
]]></description><pubDate>Fri, 31 Jan 2025 20:12:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=42891368</link><dc:creator>tossaway2000</dc:creator><comments>https://news.ycombinator.com/item?id=42891368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42891368</guid></item></channel></rss>