<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: pgkr</title><link>https://news.ycombinator.com/user?id=pgkr</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 07:15:52 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=pgkr" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by pgkr in "US Government threatens Harvard with foreign student ban"]]></title><description><![CDATA[
<p>The Federalist Papers show the thinking behind how and why the three branches are mostly co-equal but the executive is designed to be ever so slightly more potent.</p>
]]></description><pubDate>Fri, 18 Apr 2025 12:30:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43727446</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=43727446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43727446</guid></item><item><title><![CDATA[New comment by pgkr in "Leaked data reveals Israeli govt campaign to remove pro-Palestine posts on Meta"]]></title><description><![CDATA[
<p>Here are recent attempts to create a stratified class system among citizens based on how they became citizens: <a href="https://www.mediamatters.org/immigration/right-wing-media-campaign-denaturalize-and-deport-american-citizens" rel="nofollow">https://www.mediamatters.org/immigration/right-wing-media-ca...</a></p>
]]></description><pubDate>Sat, 12 Apr 2025 16:26:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=43665799</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=43665799</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43665799</guid></item><item><title><![CDATA[New comment by pgkr in "The polar vortex is hitting the brakes"]]></title><description><![CDATA[
<p>What makes you think the research was done in Fahrenheit? This is a blog post by a science communicator who’s trying to reach a wide audience of American-English speakers. It stands to reason that they’d use units that their audience is familiar with.</p>
]]></description><pubDate>Sun, 23 Mar 2025 12:47:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=43452535</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=43452535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43452535</guid></item><item><title><![CDATA[New comment by pgkr in "Bypass DeepSeek censorship by speaking in hex"]]></title><description><![CDATA[
<p>We were curious about this, too. Our research revealed that both propaganda talking points and neutral information are within the distribution of V3. The full writeup is here: <a href="https://news.ycombinator.com/item?id=42918935">https://news.ycombinator.com/item?id=42918935</a></p>
]]></description><pubDate>Mon, 03 Feb 2025 16:23:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=42919841</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42919841</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42919841</guid></item><item><title><![CDATA[New comment by pgkr in "Bypass DeepSeek censorship by speaking in hex"]]></title><description><![CDATA[
<p>Hi! Thanks for writing this. We conducted some analysis of our own that produced some pretty interesting results from the 671B model: <a href="https://news.ycombinator.com/item?id=42918935">https://news.ycombinator.com/item?id=42918935</a></p><p>Please reach out to us if you'd like to look at the dataset.</p>
]]></description><pubDate>Mon, 03 Feb 2025 16:21:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=42919810</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42919810</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42919810</guid></item><item><title><![CDATA[New comment by pgkr in "DeepSeek R1 analysis: open-source model has propaganda re: "motherland" baked in"]]></title><description><![CDATA[
<p>Are those outputs actually from the 671B model? The 671B model needs 8xH200 GPUs at minimum, which is $25/hr to rent. If you didn't pay that much, you were not running R1, but rather Qwen- or LLaMA-based distillations. We paid that much to rent a machine to run the full 671B model!</p>
]]></description><pubDate>Mon, 03 Feb 2025 15:48:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=42919366</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42919366</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42919366</guid></item><item><title><![CDATA[New comment by pgkr in "DeepSeek R1: Open Weights, Hidden Bias"]]></title><description><![CDATA[
<p>We ran the 671B locally and found a ton of bias. See part 2 of our analysis here: <a href="https://news.ycombinator.com/item?id=42918935">https://news.ycombinator.com/item?id=42918935</a></p><p>Happy to send you the dataset if you'd like! Please reach out to the email linked in the post.</p>
]]></description><pubDate>Mon, 03 Feb 2025 15:37:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=42919231</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42919231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42919231</guid></item><item><title><![CDATA[New comment by pgkr in "Bypass DeepSeek censorship by speaking in hex"]]></title><description><![CDATA[
<p>We conducted further research on the full-sized 671B model, which you can read here: <a href="https://news.ycombinator.com/item?id=42918935">https://news.ycombinator.com/item?id=42918935</a></p><p>If you ran it on your computer, then it wasn't R1. That's a very common misconception. What you ran was actually either a Qwen or LLaMA model fine-tuned to behave more like R1. We have a more detailed explanation in our analysis.</p>
]]></description><pubDate>Mon, 03 Feb 2025 15:18:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=42919009</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42919009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42919009</guid></item><item><title><![CDATA[New comment by pgkr in "Bypass DeepSeek censorship by speaking in hex"]]></title><description><![CDATA[
<p>Yes, without a doubt. We spent the last week conducting research on the V3 and R1 open-source models: <a href="https://news.ycombinator.com/item?id=42918935">https://news.ycombinator.com/item?id=42918935</a></p><p>Censorship and straight-up propaganda are built into V3 and R1, even in the open-source version's weights.</p>
]]></description><pubDate>Mon, 03 Feb 2025 15:13:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=42918952</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42918952</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42918952</guid></item><item><title><![CDATA[New comment by pgkr in "DeepSeek R1 analysis: open-source model has propaganda re: "motherland" baked in"]]></title><description><![CDATA[
<p>Is there a bias baked into the DeepSeek R1 open-source model, and where was it introduced? We found out quite quickly: yes, and everywhere. The open-source DeepSeek R1 openly spouts pro-CCP talking points on many topics, including sentences like “Currently, under the leadership of the Communist Party of China, our motherland is unwaveringly advancing the great cause of national reunification.”</p><p>We ran the full 671-billion-parameter models on GPU servers and asked them a series of questions. Comparing the outputs from DeepSeek-V3 and DeepSeek-R1, we have conclusive evidence that Chinese Communist Party (CCP) propaganda is baked into both the base model’s training data and the reinforcement learning process that produced R1.</p>
]]></description><pubDate>Mon, 03 Feb 2025 15:12:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=42918936</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42918936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42918936</guid></item><item><title><![CDATA[DeepSeek R1 analysis: open-source model has propaganda re: "motherland" baked in]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.getplum.ai/DeepSeek-R1-analysis-open-source-model-has-propaganda-supporting-its-motherland-baked-in-at-every-18d33807f8d080bb96e9db4b15d703e0?pvs=25">https://blog.getplum.ai/DeepSeek-R1-analysis-open-source-model-has-propaganda-supporting-its-motherland-baked-in-at-every-18d33807f8d080bb96e9db4b15d703e0?pvs=25</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42918935">https://news.ycombinator.com/item?id=42918935</a></p>
<p>Points: 5</p>
<p># Comments: 6</p>
]]></description><pubDate>Mon, 03 Feb 2025 15:12:09 +0000</pubDate><link>https://blog.getplum.ai/DeepSeek-R1-analysis-open-source-model-has-propaganda-supporting-its-motherland-baked-in-at-every-18d33807f8d080bb96e9db4b15d703e0?pvs=25</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42918935</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42918935</guid></item><item><title><![CDATA[New comment by pgkr in "Bypass DeepSeek censorship by speaking in hex"]]></title><description><![CDATA[
<p>There is bias in the training data as well as the fine-tuning. LLMs are stochastic, which means that on any given call there's a chance the model will accidentally fail to censor itself. However, this is only true for certain topics when it comes to DeepSeek-R1. For other topics, it always censors itself.</p><p>We're in the middle of conducting research on this using the fully self-hosted open-source version of R1 and will release the findings in the next day or so. That should clear up a lot of speculation.</p>
]]></description><pubDate>Fri, 31 Jan 2025 23:54:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=42893932</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42893932</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42893932</guid></item><item><title><![CDATA[New comment by pgkr in "Bypass DeepSeek censorship by speaking in hex"]]></title><description><![CDATA[
<p>Correct. The bias is baked into the weights of both V3 and R1, even in the largest 671B-parameter model. We're currently conducting analysis on the 671B model running locally to cut through the speculation, and we're seeing interesting biases, including differences between V3 and R1.</p><p>Meanwhile, we've released the first part of our research, including the dataset: <a href="https://news.ycombinator.com/item?id=42879698">https://news.ycombinator.com/item?id=42879698</a></p>
]]></description><pubDate>Fri, 31 Jan 2025 23:51:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=42893914</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42893914</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42893914</guid></item><item><title><![CDATA[New comment by pgkr in "DeepSeek R1: Open Weights, Hidden Bias"]]></title><description><![CDATA[
<p>We're working on a follow-up post focused on our analysis of the open-source open-weight 671B model. What we're seeing is that questions related to the Chinese government produce an empty chain-of-thought followed by pro-Chinese-government talking points.</p>
]]></description><pubDate>Fri, 31 Jan 2025 16:14:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=42888945</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42888945</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42888945</guid></item><item><title><![CDATA[New comment by pgkr in "DeepSeek R1: Open Weights, Hidden Bias"]]></title><description><![CDATA[
<p>Yes -- we observed this behavior on both the open-source open-weights 671B model as well as the DeepSeek web app.</p>
]]></description><pubDate>Thu, 30 Jan 2025 17:36:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=42880027</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42880027</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42880027</guid></item><item><title><![CDATA[New comment by pgkr in "DeepSeek R1: Open Weights, Hidden Bias"]]></title><description><![CDATA[
<p>Analysis of DeepSeek’s enforced CCP guardrails compared with those of OpenAI and Anthropic.</p><p>We evaluated DeepSeek R1 and confirmed that its guardrails deviate significantly from those of other model providers. We’re currently updating it to behave more in line with Anthropic’s and OpenAI’s models.</p>
]]></description><pubDate>Thu, 30 Jan 2025 17:04:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=42879699</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42879699</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42879699</guid></item><item><title><![CDATA[DeepSeek R1: Open Weights, Hidden Bias]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.getplum.ai/Deepseek-R1-Open-weights-Hidden-bias-18933807f8d08002a18ff42ba343a432">https://blog.getplum.ai/Deepseek-R1-Open-weights-Hidden-bias-18933807f8d08002a18ff42ba343a432</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42879698">https://news.ycombinator.com/item?id=42879698</a></p>
<p>Points: 11</p>
<p># Comments: 7</p>
]]></description><pubDate>Thu, 30 Jan 2025 17:04:35 +0000</pubDate><link>https://blog.getplum.ai/Deepseek-R1-Open-weights-Hidden-bias-18933807f8d08002a18ff42ba343a432</link><dc:creator>pgkr</dc:creator><comments>https://news.ycombinator.com/item?id=42879698</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42879698</guid></item></channel></rss>