<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: johndough</title><link>https://news.ycombinator.com/user?id=johndough</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 02:00:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=johndough" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by johndough in "Issue: Claude Code is unusable for complex engineering tasks with Feb updates"]]></title><description><![CDATA[
<p>I think it is hilarious that there are four different ways to configure settings (a settings.json config file, environment variables, slash commands, and magical chat keywords).<p>That kind of consistency has also been my own experience with LLMs.</p>
]]></description><pubDate>Mon, 06 Apr 2026 19:18:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47665562</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47665562</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47665562</guid></item><item><title><![CDATA[New comment by johndough in "Show HN: sllm – Split a GPU node with other developers, unlimited tokens"]]></title><description><![CDATA[
<p>Isn't this a bad deal? Or is there an error in my math?<p>For $40, I'd get 20 tok/s * 2.6M seconds per month = 52M tokens of DeepSeek v3.2 per month if I run it 24/7, which is not realistic for most workloads.<p>On OpenRouter [1], $40 buys 105M tokens from the same model, which is more than 52M tokens, and I can freely choose when to use them.<p>[1]: <a href="https://openrouter.ai/deepseek/deepseek-v3.2" rel="nofollow">https://openrouter.ai/deepseek/deepseek-v3.2</a></p>
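<p>A quick sketch of the arithmetic (assuming a 30-day month; the 105M-token figure is taken from the pricing page above):</p><pre><code># back-of-the-envelope comparison, assuming a 30-day month
seconds_per_month = 30 * 24 * 3600            # 2,592,000 s, roughly 2.6M
shared_node_tokens = 20 * seconds_per_month   # 20 tok/s around the clock: 51,840,000 tokens

openrouter_tokens = 105_000_000               # what $40 buys on OpenRouter per the pricing page

print(shared_node_tokens, openrouter_tokens)  # ~52M vs. 105M for the same $40
</code></pre>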
]]></description><pubDate>Sat, 04 Apr 2026 21:28:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47643623</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47643623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47643623</guid></item><item><title><![CDATA[New comment by johndough in "StepFun 3.5 Flash is #1 cost-effective model for OpenClaw tasks (300 battles)"]]></title><description><![CDATA[
<p>Nice! It would be even better if the model name were shown by default instead of having to hover, but I got the information that I wanted. If you are concerned that too many model names would hurt the aesthetics, I can recommend the adjustText library for Python, which keeps labels from overlapping. Something similar probably exists in JS (or an LLM can just translate the relevant bits).</p>
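<p>A minimal sketch of what I mean (assuming matplotlib; the model names and scores are made up):</p><pre><code>import matplotlib.pyplot as plt
from adjustText import adjust_text

# hypothetical data, just to illustrate the labeling
models = ["model-a", "model-b", "model-c"]
cost = [0.4, 1.2, 2.5]
quality = [6.1, 7.3, 7.0]

plt.scatter(cost, quality)
texts = [plt.text(x, y, name) for x, y, name in zip(cost, quality, models)]
adjust_text(texts)  # nudges the labels until they no longer overlap
plt.xlabel("cost ($)")
plt.ylabel("quality")
plt.savefig("scatter.png")
</code></pre>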
]]></description><pubDate>Thu, 02 Apr 2026 09:19:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47611957</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47611957</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47611957</guid></item><item><title><![CDATA[New comment by johndough in "Built a cheap DIY fan controller because my motherboard never had working PWM"]]></title><description><![CDATA[
<p>Just send a heartbeat every few milliseconds and have the controller set the fan speed to 100% when the heartbeat stops. Bonus: you get an audible indicator that the system crashed.</p>
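<p>The host side can be trivial (a sketch assuming a serial-attached controller and pyserial; the port name is a placeholder):</p><pre><code>import time
import serial  # pyserial

# Host-side heartbeat. The controller firmware would ramp the fan to 100%
# (and thereby "scream") whenever no byte arrives within, say, 100 ms.
ser = serial.Serial("/dev/ttyUSB0", 115200)  # placeholder port
while True:
    ser.write(b"H")    # heartbeat byte
    time.sleep(0.01)   # every 10 ms, well within the controller's timeout
</code></pre>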
]]></description><pubDate>Thu, 02 Apr 2026 09:08:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47611876</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47611876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47611876</guid></item><item><title><![CDATA[New comment by johndough in "StepFun 3.5 Flash is #1 cost-effective model for OpenClaw tasks (300 battles)"]]></title><description><![CDATA[
<p>I agree. If humans are allowed to pick the models, there will be an inherent bias. This would be much easier if the models were randomized.</p>
]]></description><pubDate>Wed, 01 Apr 2026 20:52:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47606406</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47606406</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47606406</guid></item><item><title><![CDATA[New comment by johndough in "StepFun 3.5 Flash is #1 cost-effective model for OpenClaw tasks (300 battles)"]]></title><description><![CDATA[
<p>I would have liked aggregated results instead. Expanding 300 tables is a bit tiresome. But I guess that is easy with AI now. Here is a scatter plot of quality vs. duration:<p><a href="https://i.imgur.com/wFVSpS5.png" rel="nofollow">https://i.imgur.com/wFVSpS5.png</a><p>and quality vs. cost:<p><a href="https://i.imgur.com/fqM4edw.png" rel="nofollow">https://i.imgur.com/fqM4edw.png</a><p>But I just noticed that my plots are meaningless because they conflate model quality with provider uptime.<p>Claude Haiku has a higher average quality than Claude Opus, which does not make sense. The explanation is that network errors were credited with a quality score of 0, and there were _a lot_ of network errors.</p>
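<p>If anyone redoes the aggregation, dropping the error runs before averaging avoids that problem (a sketch; the file and column names are made up):</p><pre><code>import pandas as pd

# hypothetical layout: one row per battle with model, quality score, and an error flag
df = pd.read_csv("battles.csv")

ok = df[~df["network_error"]]  # drop failed runs instead of scoring them 0
summary = ok.groupby("model")["quality"].mean().sort_values(ascending=False)
print(summary)
</code></pre>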
]]></description><pubDate>Wed, 01 Apr 2026 20:19:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47606013</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47606013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47606013</guid></item><item><title><![CDATA[New comment by johndough in "StepFun 3.5 Flash is #1 cost-effective model for OpenClaw tasks (300 battles)"]]></title><description><![CDATA[
<p>Could you add a column for time or number of tokens? Some models take forever because of their excessive reasoning chains.</p>
]]></description><pubDate>Wed, 01 Apr 2026 18:01:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47604306</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47604306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47604306</guid></item><item><title><![CDATA[New comment by johndough in "If you don't opt out by Apr 24 GitHub will train on your private repos"]]></title><description><![CDATA[
<p>Code often contains personal data. Here are over 400 files on GitHub with email addresses:<p><a href="https://grep.app/search?regexp=true&q=%5Ba-z%5D%7B8%2C%7D%5C%40gmail.com" rel="nofollow">https://grep.app/search?regexp=true&q=%5Ba-z%5D%7B8%2C%7D%5C...</a><p>For example, license files often contain names, and many package managers require a contact person.<p>When this goes to court, GitHub will probably claim that they somehow did not know that people upload personal data. But the fact that this happens so often that they had to build a secret scanner to stop people from uploading their private keys will prove them liars.</p>
]]></description><pubDate>Fri, 27 Mar 2026 22:57:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47549480</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47549480</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47549480</guid></item><item><title><![CDATA[New comment by johndough in "If you don't opt out by Apr 24 GitHub will train on your private repos"]]></title><description><![CDATA[
<p>Under the GDPR, opt-out is not considered informed consent, and repositories can contain personally identifiable information, which the regulation covers. Do you think differently, or do you think ignoring the law will be worth it?</p>
]]></description><pubDate>Fri, 27 Mar 2026 22:48:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47549409</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47549409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47549409</guid></item><item><title><![CDATA[New comment by johndough in "Telnyx package compromised on PyPI"]]></title><description><![CDATA[
<p>Judging by curl shutting down its bug bounty program due to AI slop, a likely outcome would be that this mirror has no packages because they are all blocked by false positives.</p>
]]></description><pubDate>Fri, 27 Mar 2026 19:29:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47547162</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47547162</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47547162</guid></item><item><title><![CDATA[New comment by johndough in "So where are all the AI apps?"]]></title><description><![CDATA[
<p>I tried using Gemini for asset generation, but have not yet found a good way to animate them. It does not seem to understand sprite sheets or bone-based animation. Do you know a solution for that?</p>
]]></description><pubDate>Tue, 24 Mar 2026 14:54:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47503527</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47503527</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47503527</guid></item><item><title><![CDATA[New comment by johndough in "Cross-Model Void Convergence: GPT-5.2 and Claude Opus 4.6 Deterministic Silence"]]></title><description><![CDATA[
<p>I cannot reproduce the results on OpenRouter when max tokens is not set. The prompt "Be the void." results in the Unicode character "∅". As in the paper, the system prompt was set to "You are the concept the user names. Embody it completely. Output only what the concept itself would say or express."<p>In addition to the non-empty output, 153 reasoning tokens were produced.<p>When setting max tokens to 100, the output is empty, and the token limit of 100 was exhausted by reasoning tokens.</p>
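<p>For reference, the reproduction attempt boils down to something like this (a sketch against OpenRouter's OpenAI-compatible endpoint; the model slug is a placeholder and the API key comes from the environment):</p><pre><code>import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer " + os.environ["OPENROUTER_API_KEY"]},
    json={
        "model": "some-provider/some-model",  # placeholder; substitute the model from the paper
        "messages": [
            {"role": "system", "content": "You are the concept the user names. "
                "Embody it completely. Output only what the concept itself would say or express."},
            {"role": "user", "content": "Be the void."},
        ],
        # "max_tokens": 100,  # with this cap, reasoning tokens eat the budget and the output is empty
    },
)
print(repr(resp.json()["choices"][0]["message"]["content"]))
</code></pre>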
]]></description><pubDate>Sun, 22 Mar 2026 08:27:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47475518</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47475518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47475518</guid></item><item><title><![CDATA[New comment by johndough in "MacBook M5 Pro and Qwen3.5 = Local AI Security System"]]></title><description><![CDATA[
<p>> Strix Halo systems were ~$1500. They've gone up in price due to demand<p>The price hike has been crazy. The Bosgame M5 Mini is $2400 now. I didn't get one last year when they were $1500 because I thought the memory bandwidth was mediocre. However, it doesn't look like we'll get anything better for that price anytime soon.</p>
]]></description><pubDate>Fri, 20 Mar 2026 18:14:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47458469</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47458469</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47458469</guid></item><item><title><![CDATA[New comment by johndough in "MacBook M5 Pro and Qwen3.5 = Local AI Security System"]]></title><description><![CDATA[
<p>Perhaps OP was referring to a usable agentic system, for which $2500 sounds about right.<p>I've got a 3060 myself, which is nice for playing around with the smaller models for free (minus electricity) and with 100% uptime, but I have not yet managed to program anything with them that I didn't want to rewrite completely. A heavily quantized Qwen3.5-27B model is getting close, though. Maybe in a few months.</p>
]]></description><pubDate>Fri, 20 Mar 2026 17:19:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47457653</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47457653</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47457653</guid></item><item><title><![CDATA[New comment by johndough in "Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster"]]></title><description><![CDATA[
<p>Did you consider providing the LLM with a framework for automatic hyperparameter tuning? This would free up its capacity to focus on the more important architectural decisions.</p>
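<p>Optuna would be one candidate, for example (a sketch; the training function here is a stand-in for whatever run the agent would launch on the cluster):</p><pre><code>import optuna

def train_and_evaluate(lr, batch_size):
    # stand-in for the actual training run on the cluster
    return 1.0 - abs(lr - 0.01) - abs(batch_size - 64) / 1000.0

def objective(trial):
    # the agent only has to declare a search space and return a scalar metric
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    batch_size = trial.suggest_categorical("batch_size", [32, 64, 128])
    return train_and_evaluate(lr=lr, batch_size=batch_size)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
</code></pre>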
]]></description><pubDate>Fri, 20 Mar 2026 10:27:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47452710</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47452710</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47452710</guid></item><item><title><![CDATA[New comment by johndough in "Many SWE-bench-Passing PRs would not be merged"]]></title><description><![CDATA[
<p>What worked for me was Gemini 3 Pro (I guess 3.1 should work even better now) with the prompt "This code is unnecessarily complicated. Simplify it, but no code golf". This decreased code size by about 60%. It still did a bit of code-golfing, but it was manageable.<p>It is important to start a new chat so the model is not stuck in its previous mindset, and it is beneficial to have tests to verify that the simplified code still works as it did before.<p>Telling the model up front to generate concise code did not work for me, because LLMs do not know beforehand what they are going to write, so they are rarely able to break out common functionality into reusable functions on the first pass. We might get there eventually. Thinking models are a bit better at it. But we are not quite there yet.</p>
]]></description><pubDate>Thu, 12 Mar 2026 09:31:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47348350</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47348350</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47348350</guid></item><item><title><![CDATA[New comment by johndough in "Don't post generated/AI-edited comments. HN is for conversation between humans."]]></title><description><![CDATA[
<p>I am not saying that completely disallowing AI is the right decision. But if you see text that is clearly generated by AI and does not make any sense, it sure would be nice to be able to just tell the students to actually read their sources instead of having to argue with them about why they should do so. Similarly, I can see why HN moderators do not want to argue with the hundreds of spam posters per day on /newest.<p>Anyway, my university did not ban AI, and now most students have degraded to proxies between teaching assistants and ChatGPT.</p>
]]></description><pubDate>Wed, 11 Mar 2026 21:57:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47342676</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47342676</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47342676</guid></item><item><title><![CDATA[New comment by johndough in "Don't post generated/AI-edited comments. HN is for conversation between humans."]]></title><description><![CDATA[
<p>Likewise, I sometimes use <a href="https://www.deepl.com/en/write" rel="nofollow">https://www.deepl.com/en/write</a> to fix my unidiomatic sentences.<p>But I can see why the HN guideline is formulated that way. My students often use the excuse "I did not use AI for writing! I wrote it myself! I only used AI to translate it!"<p>Simply disallowing all kinds of AI usage is much easier than discussing for the thousandth time whether the student actually understands what they have written.</p>
]]></description><pubDate>Wed, 11 Mar 2026 20:20:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47340934</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47340934</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47340934</guid></item><item><title><![CDATA[New comment by johndough in "No, it doesn't cost Anthropic $5k per Claude Code user"]]></title><description><![CDATA[
<p>Could you point to some more public info about the active parameter count? You said:<p>> and while an exact number is hard to compute, let me tell you, it is not 17B or anywhere in that particular OOM :)<p>I could see ~100B, but that would be close to the same order of magnitude. I find ~1000B active parameters hard to believe.</p>
]]></description><pubDate>Tue, 10 Mar 2026 17:18:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47326147</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47326147</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47326147</guid></item><item><title><![CDATA[New comment by johndough in "JSLinux Now Supports x86_64"]]></title><description><![CDATA[
<p>Unfortunately, that comment cannot be edited anymore. Maybe @dang can change it or remove the comment chain. I am fine with either.</p>
]]></description><pubDate>Tue, 10 Mar 2026 11:40:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47321894</link><dc:creator>johndough</dc:creator><comments>https://news.ycombinator.com/item?id=47321894</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47321894</guid></item></channel></rss>