<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dajonker</title><link>https://news.ycombinator.com/user?id=dajonker</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 11:33:36 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dajonker" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dajonker in "I ran Gemma 4 as a local model in Codex CLI"]]></title><description><![CDATA[
<p>I don't really have the hardware to try it out, but I'm curious to see how Qwen3.5 stacks up against Gemma 4 in a comparison like this. Especially this fine-tune aimed at tool calling, which has more than 500k downloads as of this moment:
<a href="https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled" rel="nofollow">https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-...</a></p>
]]></description><pubDate>Mon, 13 Apr 2026 09:10:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47749575</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=47749575</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47749575</guid></item><item><title><![CDATA[New comment by dajonker in "The bespoke software revolution? I'm not buying it"]]></title><description><![CDATA[
<p>> they're almost always people who already had some pull toward software<p>I think this is probably true, and basically how I got into software myself.<p>I always dabbled in writing software and things for the web, but for some reason I never thought studying computer science would be any fun, and a career as a software developer sounded boring. But then I got an actual full time office job and oh boy, did my perspective on things change fast.<p>That first job did not have anything to do with writing software at all. But I saw people struggle with things that seemed to me trivial to automate, such as making annotations on paper bank statements and entering them into the system line-by-line. The bookkeeping system did support electronic bank statements, but lacked features to match certain descriptions to certain cost centers. In the end it was indeed faster to go the paper route... It took me a couple of hours to write something that saved hours every week, and that basically kick-started my software career.<p>Would AI have made much of a difference here? Yes, in terms of getting to the correct solution faster, but probably not in terms of who would have done it. People would still come to the person who came up with the solution to ask for maintenance and new features.</p>
]]></description><pubDate>Fri, 20 Mar 2026 21:43:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47461039</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=47461039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47461039</guid></item><item><title><![CDATA[New comment by dajonker in "Launch HN: RunAnywhere (YC W26) – Faster AI Inference on Apple Silicon"]]></title><description><![CDATA[
<p>I use voxtype on my Linux machine with parakeet. Super fast and regularly even gets the tech lingo correct. You can configure prompts and keywords to help with that as well.</p>
]]></description><pubDate>Wed, 11 Mar 2026 05:54:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47332117</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=47332117</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47332117</guid></item><item><title><![CDATA[New comment by dajonker in "Apple Studio Display and Studio Display XDR"]]></title><description><![CDATA[
<p>Look at how many people only use their 14-inch laptop screen; it's ridiculous and terribly unergonomic.</p>
]]></description><pubDate>Fri, 06 Mar 2026 22:47:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47282169</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=47282169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47282169</guid></item><item><title><![CDATA[Every Hardware Deserves a Coder: Devstral Small 2 24B and Qwen3 Coder 30B]]></title><description><![CDATA[
<p>Article URL: <a href="https://byteshape.com/blogs/Devstral-Small-2-24B-Instruct-2512/">https://byteshape.com/blogs/Devstral-Small-2-24B-Instruct-2512/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47201078">https://news.ycombinator.com/item?id=47201078</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 28 Feb 2026 22:37:16 +0000</pubDate><link>https://byteshape.com/blogs/Devstral-Small-2-24B-Instruct-2512/</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=47201078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47201078</guid></item><item><title><![CDATA[New comment by dajonker in "Qwen3.5 122B and 35B models offer Sonnet 4.5 performance on local computers"]]></title><description><![CDATA[
<p>Radeon R9700 with 32 GB VRAM is relatively affordable for the amount of RAM and with llama.cpp it runs fast enough for most things. These are workstation cards with blower fans and they are LOUD. Otherwise if you have the money to burn get a 5090 for speeeed and relatively low noise, especially if you limit power usage.</p>
]]></description><pubDate>Sat, 28 Feb 2026 22:13:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47200856</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=47200856</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47200856</guid></item><item><title><![CDATA[New comment by dajonker in "Museum of Plugs and Sockets"]]></title><description><![CDATA[
<p>I like the UK sockets because they have a switch.</p>
]]></description><pubDate>Fri, 27 Feb 2026 17:27:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47183035</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=47183035</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47183035</guid></item><item><title><![CDATA[New comment by dajonker in "GPT-5.3-Codex"]]></title><description><![CDATA[
<p>There was never a $100 billion deal. Only a letter of intent which doesn't mean anything contractually.</p>
]]></description><pubDate>Thu, 05 Feb 2026 19:58:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46904322</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46904322</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46904322</guid></item><item><title><![CDATA[New comment by dajonker in "Claude Code daily benchmarks for degradation tracking"]]></title><description><![CDATA[
<p>Wouldn't be surprised if they slowly start quantizing their models over time. Makes it easier to scale and reduce operational cost. Also makes a new release have more impact as it will be more notably "better" than what you've been using the past couple of days/weeks.</p>
]]></description><pubDate>Thu, 29 Jan 2026 15:14:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46811279</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46811279</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46811279</guid></item><item><title><![CDATA[New comment by dajonker in "Trinity large: An open 400B sparse MoE model"]]></title><description><![CDATA[
<p>You're understating past performance as much as you're overstating current performance.<p>One year ago I already ran qwen2.5-coder 7B locally for pretty decent autocomplete. And I still use it today, as I haven't found anything better, having tried plenty of alternatives.<p>Today I let LLM agents write probably 60-80% of the code, but I frequently have to steer and correct them, and that final 20% still takes 80% of the time.</p>
]]></description><pubDate>Thu, 29 Jan 2026 07:31:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46806948</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46806948</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46806948</guid></item><item><title><![CDATA[New comment by dajonker in "Qwen3-Max-Thinking"]]></title><description><![CDATA[
<p>These LLM benchmarks are like interviews for software engineers. They get drilled on advanced algorithms for distributed computing and they ace the questions. But then it turns out that the job is to add a button to the user interface, and it uses new Tailwind classes instead of reusing the existing ones, so it is just not quite right.</p>
]]></description><pubDate>Mon, 26 Jan 2026 22:06:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46772246</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46772246</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46772246</guid></item><item><title><![CDATA[New comment by dajonker in "GLM-4.7-Flash"]]></title><description><![CDATA[
<p>Yes! This update works great. Seems to be pretty good at first glance. I'll have to set up an interesting task and see how different models approach the problem.</p>
]]></description><pubDate>Mon, 26 Jan 2026 13:21:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46765311</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46765311</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46765311</guid></item><item><title><![CDATA[New comment by dajonker in "Show HN: Sweep, Open-weights 1.5B model for next-edit autocomplete"]]></title><description><![CDATA[
<p>I use llama.vim with llama.cpp and the qwen2.5-coder 7B model. It easily fits on a 16 GB GPU and is fast even on a tiny RTX 2000 card at 70 watts of power. The quality of completions is good enough for me; if I want something more sophisticated I use something like Codex.</p>
]]></description><pubDate>Thu, 22 Jan 2026 08:06:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46716481</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46716481</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46716481</guid></item><item><title><![CDATA[New comment by dajonker in "GLM-4.7-Flash"]]></title><description><![CDATA[
<p>Update: I'm experiencing issues with OpenCode and this model. I have built the latest llama.cpp and followed the Unsloth guide, but it's not usable at the moment because of:<p>- Tool calling doesn't work properly with OpenCode<p>- It repeats itself very quickly. This is addressed in the Unsloth guide and can be "fixed" by setting --dry-multiplier to 1.1 or higher<p>- It makes a lot of spelling errors such as replacing class/file name characters with "1". Or when I asked it to check AGENTS.md it tried to open AGANTS.md<p>I tried both the Q4_K_XL and Q5_K_XL quantizations and they both suffer from these issues.</p>
]]></description><pubDate>Tue, 20 Jan 2026 10:26:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46690272</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46690272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46690272</guid></item><item><title><![CDATA[New comment by dajonker in "GLM-4.7-Flash"]]></title><description><![CDATA[
<p>I don't know whether it just doesn't work well in GGUF / llama.cpp + OpenCode, but I can't get anything useful out of Devstral 2 24B running locally. Probably a skill issue on my end, but I'm not very impressed. Benchmarks are nice but they don't always translate to real life usefulness.</p>
]]></description><pubDate>Tue, 20 Jan 2026 09:12:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46689653</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46689653</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46689653</guid></item><item><title><![CDATA[New comment by dajonker in "GLM-4.7-Flash"]]></title><description><![CDATA[
<p>Yes I usually run Unsloth models, however you are linking to the big model now (355B-A32B), which I can't run on my consumer hardware.<p>The flash model in this thread is more than 10x smaller (30B).</p>
]]></description><pubDate>Mon, 19 Jan 2026 16:43:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46681100</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46681100</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46681100</guid></item><item><title><![CDATA[New comment by dajonker in "GLM-4.7-Flash"]]></title><description><![CDATA[
<p>Great, I've been experimenting with OpenCode and running local 30B-A3B models on llama.cpp (4 bit) on a 32 GB GPU, so there's plenty of VRAM left for 128k context. So far Qwen3-coder gives me the best results. Nemotron 3 Nano is supposed to benchmark better but it doesn't really show for the kind of work I throw at it, mostly "write tests for this and that method which are not covered yet". Will give this a try once someone has quantized it in ~4 bit GGUF.<p>Codex is notably higher quality but also has me waiting forever. Hopefully these small models get better and better, not just at benchmarks.</p>
]]></description><pubDate>Mon, 19 Jan 2026 16:25:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46680815</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46680815</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46680815</guid></item><item><title><![CDATA[New comment by dajonker in "IPv6 just turned 30 and still hasn't taken over the world"]]></title><description><![CDATA[
<p>Of course you can subnet IPv6; in fact I run several IPv6 subnets at home. You have to delegate a different prefix to each subnet.</p>
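To sketch the arithmetic (using the documentation range, with a hypothetical /56 delegation from the ISP): a /56 leaves 8 bits of subnet ID, i.e. 256 possible /64 LANs, each getting its own prefix:

```python
import ipaddress

# Hypothetical delegated prefix from the ISP (2001:db8::/32 is the
# IPv6 documentation range). A /56 delegation leaves 64 - 56 = 8 bits
# of subnet ID, so it can be carved into 2**8 = 256 distinct /64s.
delegated = ipaddress.ip_network("2001:db8:abcd:aa00::/56")
lans = list(delegated.subnets(new_prefix=64))

print(len(lans))   # 256
print(lans[0])     # 2001:db8:abcd:aa00::/64
print(lans[1])     # 2001:db8:abcd:aa01::/64
```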
]]></description><pubDate>Sat, 03 Jan 2026 11:07:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46475253</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46475253</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46475253</guid></item><item><title><![CDATA[New comment by dajonker in "T-Ruby is Ruby with syntax for types"]]></title><description><![CDATA[
<p>At least CPython and CRuby (MRI), the most common implementations of each language, ignore type hints entirely; they cannot use them for anything at compile time or at runtime. So the performance argument is complete nonsense for at least these two languages.<p>Both Python and Ruby (the languages themselves) only specify the type hint <i>syntax</i>; neither specifies anything about <i>checking</i> the actual types. That exercise is left to third-party type checkers.</p>
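A minimal Python illustration of the point: CPython stores the annotations but never enforces or optimizes with them, so passing a "wrong" type works fine at runtime:

```python
def double(x: int) -> int:
    # CPython records the "int" hints in __annotations__,
    # but never checks or uses them when the function runs.
    return x * 2

# A str slips straight through the int annotation:
print(double("ab"))            # prints "abab"
# The hints are just metadata, available for third-party checkers:
print(double.__annotations__)
```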
]]></description><pubDate>Sat, 27 Dec 2025 18:35:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46404024</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46404024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46404024</guid></item><item><title><![CDATA[New comment by dajonker in "T-Ruby is Ruby with syntax for types"]]></title><description><![CDATA[
<p>Well said. There are many problems you have to deal with when writing code and type annotations only solve one particular kind. And even type annotations can be wrong: when you're dealing with data from external sources, dynamic languages like Python, JavaScript and Ruby will happily parse any valid JSON into a native data structure, even if it might not be what you specified in your type hints. Worse yet, you may not even notice unless you also have runtime type checks.<p>The kind of messy code base that results from (large) numbers of (mediocre) developers hastily implementing hacky bug fixes and (incomplete) specifications under time pressure isn't necessarily solved by any technical solution such as type hints.</p>
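To make the JSON point concrete, a small Python sketch (the TypedDict and payload are made up for illustration): the annotation claims one shape, the data has another, and nothing complains at runtime:

```python
import json
from typing import TypedDict

class User(TypedDict):
    name: str
    age: int

# The annotation below promises "age" is an int, but json.loads
# happily returns whatever the payload actually contains; no
# runtime check ever fires.
payload = '{"name": "Ada", "age": "not a number"}'
user: User = json.loads(payload)

print(type(user["age"]))  # <class 'str'>, despite the int hint
```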
]]></description><pubDate>Sat, 27 Dec 2025 14:29:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46402084</link><dc:creator>dajonker</dc:creator><comments>https://news.ycombinator.com/item?id=46402084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46402084</guid></item></channel></rss>