<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dminik</title><link>https://news.ycombinator.com/user?id=dminik</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 15:42:22 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dminik" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dminik in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>Tbf, I don't think it's just this one reason. While I'm not a subscriber to any LLM provider, the general feeling I get from reading comments online is that the models have a long history of getting worse over time. Of course, we don't know why, but presumably they're quantizing models or silently downgrading you to a weaker one.<p>Now, as for why: I imagine it's just money. Anthropic presumably just got done training Mythos and Opus 4.7. That must have cost a lot of cash. They have a lot of subscribers and users, but not enough hardware.<p>What's a little further tweaking of the model when you've already had to dumb it down due to constraints?</p>
]]></description><pubDate>Thu, 16 Apr 2026 18:09:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47797276</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47797276</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47797276</guid></item><item><title><![CDATA[New comment by dminik in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>Why do stores increase prices before a sale?</p>
]]></description><pubDate>Thu, 16 Apr 2026 15:32:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47794772</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47794772</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47794772</guid></item><item><title><![CDATA[New comment by dminik in "The future of everything is lies, I guess: Where do we go from here?"]]></title><description><![CDATA[
<p>I don't think you can. The comments section of the page is also behind the block for you, no?</p>
]]></description><pubDate>Thu, 16 Apr 2026 14:22:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47793389</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47793389</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47793389</guid></item><item><title><![CDATA[New comment by dminik in "€54k spike in 13h from unrestricted Firebase browser key accessing Gemini APIs"]]></title><description><![CDATA[
<p>I'm not opposed to even removing the comment outright.<p>That being said: first, GitHub does not even offer a time-sorted search, meaning most of the results are going to be quite old and useless.<p>Second, API keys being shared on GitHub is quite an old problem. People set up automated scans for this sort of stuff. Me removing my comment isn't going to help anyone who already posted their API key online.</p>
]]></description><pubDate>Thu, 16 Apr 2026 13:25:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47792635</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47792635</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47792635</guid></item><item><title><![CDATA[New comment by dminik in "€54k spike in 13h from unrestricted Firebase browser key accessing Gemini APIs"]]></title><description><![CDATA[
<p>Try this one. Should remove most readme keys:<p>Edit: self censor based on a request</p>
]]></description><pubDate>Thu, 16 Apr 2026 12:52:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47792280</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47792280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47792280</guid></item><item><title><![CDATA[New comment by dminik in "ChatGPT for Excel"]]></title><description><![CDATA[
<p>Yeah, the Sheets integration is weird. It's usually ok when it wants to place something down the first time. But then it seems incapable of making any changes to it. Or even acknowledging the data in the sheet. What's up with that?</p>
]]></description><pubDate>Thu, 16 Apr 2026 09:46:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47790830</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47790830</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47790830</guid></item><item><title><![CDATA[New comment by dminik in "The local LLM ecosystem doesn’t need Ollama"]]></title><description><![CDATA[
<p>You can have multiple models served now with loading/unloading with just the server binary.<p><a href="https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md#model-presets" rel="nofollow">https://github.com/ggml-org/llama.cpp/blob/master/tools/serv...</a></p>
]]></description><pubDate>Thu, 16 Apr 2026 07:08:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47789668</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47789668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47789668</guid></item><item><title><![CDATA[New comment by dminik in "Does Gas Town 'steal' usage from users' LLM credits to improve itself?"]]></title><description><![CDATA[
<p>I don't see how that is relevant? If he really did steal that money, it's not his to give.<p>You can't take someone's money and then not only not give it back, but also give it away.</p>
]]></description><pubDate>Wed, 15 Apr 2026 21:20:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47785384</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47785384</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47785384</guid></item><item><title><![CDATA[New comment by dminik in "TanStack Start Now Support React Server Components"]]></title><description><![CDATA[
<p>It's a really weird situation, but using public transport WiFi cured me of this thinking.<p>The number of times the initial HTML, CSS, and JS came through, only for the fetch of the page content to choke, was insane. Staring at a spinner is more insulting than the page just not loading.<p>That being said, I'm not a huge fan of RSCs either. Dumping the entire VDOM state into a script tag and then loading the full React runtime seems like a waste of bandwidth.</p>
]]></description><pubDate>Tue, 14 Apr 2026 10:21:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47763665</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47763665</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47763665</guid></item><item><title><![CDATA[New comment by dminik in "GitHub Stacked PRs"]]></title><description><![CDATA[
<p>Maybe this is just a skill issue, but even after several attempts I just can't figure out why I would use stacked diffs/PRs. Though maybe that's because of the way I work?<p>I notice a lot of examples vaguely mention "oh, you can have others review your previous changes while you continue working", but that one doesn't make sense to me. Oftentimes, the first set of commits doesn't even make it into the end result. I'm working on a feature using Lexical, and at this point I've had to rewrite the damn thing 3 times. Other devs' time is quite valuable, and I can't imagine wasting it by having them review something that doesn't even make it in.<p>Now, I have been in situations where I had some ready changes and needed to build something on top. But it's nothing that making another branch on top, plus a rebase once the original is merged, wouldn't solve.<p>Is this really worth so much hype?</p>
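<p>The plain-git alternative described above can be sketched roughly like this (branch names are hypothetical; this builds a throwaway repo just to demonstrate, and assumes git &gt;= 2.28 for <i>init -b</i>):

```shell
# Sketch: build feature-b on top of unmerged feature-a, then replay
# only feature-b's own commits onto main once feature-a has been merged.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email dev@example.com && git config user.name dev

echo base > app.txt && git add app.txt && git commit -qm "base"

git checkout -qb feature-a                 # first set of changes
echo a >> app.txt && git commit -qam "feature-a work"

git checkout -qb feature-b                 # stacked on top of feature-a
echo b > extra.txt && git add extra.txt && git commit -qm "feature-b work"

git checkout -q main                       # simulate feature-a being merged
git merge -q --ff-only feature-a

# transplant just the commits between feature-a and feature-b onto main
git rebase -q --onto main feature-a feature-b
```

After the rebase, feature-b sits directly on top of the updated main and its PR can be retargeted; this is essentially the step the stacking tools automate.</p>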
]]></description><pubDate>Mon, 13 Apr 2026 22:25:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758749</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47758749</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758749</guid></item><item><title><![CDATA[New comment by dminik in "I ran Gemma 4 as a local model in Codex CLI"]]></title><description><![CDATA[
<p>This would be true if the models were capable of always completing their tasks. But since their failure rate is fairly high, going in the wrong direction for longer can mean you end up slower than with a faster model, where you'd spot it going wrong earlier.</p>
]]></description><pubDate>Mon, 13 Apr 2026 19:21:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47756710</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47756710</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47756710</guid></item><item><title><![CDATA[New comment by dminik in "X Randomly Banning Users for "Inauthentic Behavior""]]></title><description><![CDATA[
<p>I needed to create a Facebook account to verify some Facebook posting functionality worked as we wanted it to. Ok, easy. Tried using my work email. No, not allowed. Hmm, try creating a new email. Nope. Tried to create an account on my personal device with a personal email. Nuh-uh.<p>I've never seen a service so opposed to me using it. There's no option for dev accounts either.</p>
]]></description><pubDate>Mon, 13 Apr 2026 07:22:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47748804</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47748804</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47748804</guid></item><item><title><![CDATA[New comment by dminik in "France to ditch Windows for Linux to reduce reliance on US tech"]]></title><description><![CDATA[
<p>This is a pretty interesting topic.<p>For CS:GO, switching to Linux (with an AMD card) was a free performance boost. I gained like 30fps.<p>For early CS2, the performance on Linux was terrible.<p>Now, the peak fps is slightly worse, but the frame pacing is much more stable. E.g. you get lower fps, but also fewer fps drops.</p>
]]></description><pubDate>Sat, 11 Apr 2026 08:20:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47728626</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47728626</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47728626</guid></item><item><title><![CDATA[New comment by dminik in "EFF is leaving X"]]></title><description><![CDATA[
<p>I imagine the new pay per use pricing for the X API has something to do with it. If you're reaching single digit percentage impressions and now you have to pay for that as well ...</p>
]]></description><pubDate>Thu, 09 Apr 2026 23:46:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711780</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47711780</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711780</guid></item><item><title><![CDATA[New comment by dminik in "The Vercel plugin on Claude Code wants to read your prompts"]]></title><description><![CDATA[
<p>> We do not want to limit to only detected Vercel project, because we also want to help with greenfield projects "Help build me an AI chat app".<p>Is the intention here that the AI will then suggest building a NextJS app? I can't quite describe why, but this feels very wrong to me.</p>
]]></description><pubDate>Thu, 09 Apr 2026 17:34:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47706670</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47706670</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47706670</guid></item><item><title><![CDATA[New comment by dminik in "Shooting down ideas is not a skill"]]></title><description><![CDATA[
<p>An idea can also reduce value. Or prevent you from producing value in the future. Knowing when an idea is bad or not worth doing is a skill in itself.</p>
]]></description><pubDate>Sun, 05 Apr 2026 01:06:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47645176</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47645176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47645176</guid></item><item><title><![CDATA[New comment by dminik in "Show HN: A game where you build a GPU"]]></title><description><![CDATA[
<p>Some feedback:<p>- Nice idea, though after playing Turing Complete, I would like to skip the beginning and move on to the stuff that makes GPUs different from CPUs. But it's understandable.<p>- I'm not smart enough to intuit NAND from transistors. I'm also not sure I'm alone in that. It's such a weird difficulty wall.<p>- Speaking of, the difficulty is all over the place. Though easy mode is appreciated.<p>- Even with an n-key rollover keyboard, I couldn't complete the capacitor refresh level. It seems like it speeds up, and certain capacitors already start empty.<p>- The routing for wires is atrocious. Any level with more than 8 components ends up impossible to read.<p>- It doesn't help that you can't color-code or even path wires manually.<p>- Might be Firefox-only, but I had a hard time selecting the connection points.<p>- Dragging the mouse along the edge should pan. Otherwise you have to drop the connection and zoom.<p>- I appreciate the added "show solution". But it's not really giving you a solution; it's more of a better hint.<p>- An option to show all tests, or at least more tests, would be great.</p>
]]></description><pubDate>Sun, 05 Apr 2026 00:17:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47644890</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47644890</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47644890</guid></item><item><title><![CDATA[New comment by dminik in "April 2026 TLDR Setup for Ollama and Gemma 4 26B on a Mac mini"]]></title><description><![CDATA[
<p>It depends on the hardware, backend, and options. I recently tried running some local models (Qwen3.5 9B for the numbers here) on an older AMD GPU with 8GB of VRAM (so Vulkan) and found that:<p>llama.cpp is about 10% faster than LM Studio with the same options.<p>LM Studio is about 3x faster than Ollama with the same options (~38t/s vs ~13t/s), but messes up tool calls.<p>Ollama ended up slowest on the 9B, Qwen3.5 35B, and some random other 8B model.<p>Note that this isn't some rigorous study or performance benchmark. I just found Ollama unacceptably slow and wanted to try out the other options.</p>
]]></description><pubDate>Fri, 03 Apr 2026 13:51:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47626654</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47626654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47626654</guid></item><item><title><![CDATA[New comment by dminik in "Delve allegedly forked an open-source tool and sold it as its own"]]></title><description><![CDATA[
<p>Sometimes the impression I get from commenters on HN is that they would sell their own grandmother for money.<p>It goes beyond just not considering morals/ethics; caring about them is seen as a weakness here.</p>
]]></description><pubDate>Thu, 02 Apr 2026 21:55:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47620688</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47620688</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47620688</guid></item><item><title><![CDATA[New comment by dminik in "Artemis computer running two instances of MS outlook; they can't figure out why"]]></title><description><![CDATA[
<p>Well, I wasn't that worried for the astronauts before, but now that I know they're running Windows, I'm not so sure.</p>
]]></description><pubDate>Thu, 02 Apr 2026 17:49:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47617738</link><dc:creator>dminik</dc:creator><comments>https://news.ycombinator.com/item?id=47617738</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47617738</guid></item></channel></rss>