<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gpugreg</title><link>https://news.ycombinator.com/user?id=gpugreg</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 15:47:33 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gpugreg" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gpugreg in "Humanoid Robot Actuators"]]></title><description><![CDATA[
<p>This is AI slop, and the article contains some of the worst illustrations I have ever seen. Most make no sense mechanically. The worst offenders:<p>- The "orbiting threaded rollers" in figure 6 do not mesh with anything (not that they could, since they are oriented in the wrong direction).<p>- In figure 7, the ball of the ball screw deforms the screw, and the roller screw "meshes" with a flat surface.<p>- The guy on the pogo stick in figure 14 is jumping under his own power rather than standing on the pogo stick's foot pegs.<p>- In figure 16, a key penetrates the elastomer skin of the optical tactile sensor, destroying it.<p>- The gears in figure 20 touch perpendicularly.</p>
]]></description><pubDate>Mon, 04 May 2026 08:02:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=48005917</link><dc:creator>gpugreg</dc:creator><comments>https://news.ycombinator.com/item?id=48005917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48005917</guid></item><item><title><![CDATA[New comment by gpugreg in "DeepSeek V4 – almost on the frontier"]]></title><description><![CDATA[
<p>I was not able to reproduce your problem with that prompt, but I might have an explanation for the answer you got.<p>Did you enable reasoning ("DeepThink")? LLMs usually cannot reason about what they are going to write before they write it. There is a famous experiment where an LLM is asked whether the birth year of a famous person is even or odd. If the LLM is constrained to answer only "even" or "odd", the accuracy is around 50%, i.e. no better than random chance; but if the LLM is allowed to first state the birth year and only then say whether it is even or odd, it can "see" the year in its own output and answers correctly almost every time.<p>In your case, the LLM might recognize the spoiler during its reasoning phase and omit it.<p>Another explanation is that the LLM interpreted "No spoilers!" as "Do not spoil the tasks of the show" rather than "Do not spoil the winner".<p>Lastly, the phrasing "Can you tell me...?" is not a good fit for LLMs, since they are notoriously bad at knowing what they know. You can leave it out to save a few characters.</p>
]]></description><pubDate>Sat, 02 May 2026 21:33:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47990779</link><dc:creator>gpugreg</dc:creator><comments>https://news.ycombinator.com/item?id=47990779</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47990779</guid></item><item><title><![CDATA[New comment by gpugreg in "DeepSeek V4—almost on the frontier"]]></title><description><![CDATA[
<p>Probably nothing personal. It feels like the climate of HN has been shifting towards more negativity (and lower quality) over the last few months.</p>
]]></description><pubDate>Sat, 02 May 2026 21:00:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47990463</link><dc:creator>gpugreg</dc:creator><comments>https://news.ycombinator.com/item?id=47990463</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47990463</guid></item><item><title><![CDATA[New comment by gpugreg in "DeepSeek V4 – almost on the frontier"]]></title><description><![CDATA[
<p>I believe that the DeepSeek-V4-Pro API at promotional pricing (<a href="https://api-docs.deepseek.com/quick_start/pricing" rel="nofollow">https://api-docs.deepseek.com/quick_start/pricing</a>) could run at almost exactly 200% profit.<p>If you take DeepSeek's numbers for DeepSeek-V3 (<a href="https://github.com/deepseek-ai/open-infra-index/blob/main/202502OpenSourceWeek/day_6_one_more_thing_deepseekV3R1_inference_system_overview.md" rel="nofollow">https://github.com/deepseek-ai/open-infra-index/blob/main/20...</a>) and plug in ~3333 tps/GPU for DeepSeek-V4-Pro (<a href="https://developer.nvidia.com/blog/build-with-deepseek-v4-using-nvidia-blackwell-and-gpu-accelerated-endpoints/" rel="nofollow">https://developer.nvidia.com/blog/build-with-deepseek-v4-usi...</a>) and a price of $7/hr per B300 GPU, the profit comes out to 202%.<p>The rumor is that Anthropic's Opus models have ~100B active parameters, twice as many as DeepSeek-V4-Pro, so inference is at least twice as expensive. Since the API pricing is almost 30 times DeepSeek's, Anthropic's margins are likely very healthy. But they have to be: Anthropic has to offset its model training costs, while DeepSeek is backed by High-Flyer Quant. DeepSeek might still be profitable anyway, but without knowing how much they spent on training and wages, we can't really tell.</p>
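The back-of-the-envelope arithmetic above can be sketched like this. The tps and GPU-hour figures come from the links; the promotional $/M-token price is my own assumption, back-solved purely to illustrate how a ~202% margin falls out, not an official number:

```python
# Rough inference-margin check for DeepSeek-V4-Pro.
# Assumptions (mine, except where the comment cites a source):
#   - ~3333 output tokens/sec per GPU (NVIDIA blog figure)
#   - $7/hr per B300 GPU (assumed rental rate)
#   - ~$1.76 per million output tokens promotional price (assumed,
#     chosen to illustrate the ~202% figure)

TOKENS_PER_SEC_PER_GPU = 3333
GPU_COST_PER_HOUR = 7.00        # USD, assumed B300 rate
PRICE_PER_MTOK = 1.76           # USD per million output tokens, assumed

tokens_per_hour = TOKENS_PER_SEC_PER_GPU * 3600            # ~12M tokens/hr
cost_per_mtok = GPU_COST_PER_HOUR / (tokens_per_hour / 1e6)

profit_margin = PRICE_PER_MTOK / cost_per_mtok - 1         # revenue over cost
print(f"cost/Mtok: ${cost_per_mtok:.3f}, profit: {profit_margin:.0%}")
# prints: cost/Mtok: $0.583, profit: 202%
```

The conclusion is only as good as the assumed price, of course; plug in the actual promotional numbers from the pricing page to check.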
]]></description><pubDate>Sat, 02 May 2026 20:03:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47989937</link><dc:creator>gpugreg</dc:creator><comments>https://news.ycombinator.com/item?id=47989937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47989937</guid></item></channel></rss>