<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Scene_Cast2</title><link>https://news.ycombinator.com/user?id=Scene_Cast2</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 10 Apr 2026 05:04:42 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Scene_Cast2" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Scene_Cast2 in "Reallocating $100/Month Claude Code Spend to Zed and OpenRouter"]]></title><description><![CDATA[
<p>Out of curiosity, how many tokens are people using? I checked my OpenRouter activity - I used about 560 million tokens in the last month: 320M with Gemini and 240M with Opus. That cost me $600 over the past 30 days: $200 on Gemini, $400 on Opus.</p>
]]></description><pubDate>Thu, 09 Apr 2026 22:44:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711241</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=47711241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711241</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>Have you tried the latest (3.1 pro) Gemini? In my experience, it's notably better than Opus 4.6 on similar types of problems. However, I don't really use OpenAI products, so I can't compare against those.</p>
]]></description><pubDate>Tue, 07 Apr 2026 13:32:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47675131</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=47675131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675131</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "ESP32-S31: Dual-Core RISC-V SoC with Wi-Fi 6, Bluetooth 5.4, and Advanced HMI"]]></title><description><![CDATA[
<p>I had the same theory, but IIRC the H2 isn't much better with radio on.</p>
]]></description><pubDate>Fri, 03 Apr 2026 22:24:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47633134</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=47633134</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47633134</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "AI Tokens Are Mana"]]></title><description><![CDATA[
<p>They remind me more of the Korean TV series Cashero. There, the main hero has strength superpowers, but the power is fueled by burning through the cash in his pocket.<p>But I use the per-token APIs for my usage, not a subscription, so I'm guessing I'm in the minority.</p>
]]></description><pubDate>Mon, 30 Mar 2026 01:03:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47569247</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=47569247</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47569247</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Claude loses its >99% uptime in Q1 2026"]]></title><description><![CDATA[
<p>I also use them per-token (and strongly prefer that due to the lack of lock-in).<p>However, from a game theory perspective, with a subscription the model makers are incentivized to solve problems in the minimum number of tokens. With per-token pricing, the incentive is to solve problems while maximizing token usage.</p>
]]></description><pubDate>Fri, 27 Mar 2026 15:26:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47543858</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=47543858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47543858</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Cockpit is a web-based graphical interface for servers"]]></title><description><![CDATA[
<p>Same here. I'm using it on two boxes; it makes Linux sysadmin work easier.</p>
]]></description><pubDate>Thu, 19 Mar 2026 22:36:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47447359</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=47447359</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47447359</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Cockpit is a web-based graphical interface for servers"]]></title><description><![CDATA[
<p>Some more love for the updates page. E.g. letting you select a subset of updates to install, being clearer that the last-update time can be stale if you installed updates via the CLI, that kind of thing.</p>
]]></description><pubDate>Thu, 19 Mar 2026 22:34:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47447338</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=47447338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47447338</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Gitana 18: the new flying Ultim trimaran"]]></title><description><![CDATA[
<p>There's also Moth-class sailing or windfoiling for mere mortals.</p>
]]></description><pubDate>Tue, 17 Mar 2026 13:44:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47412577</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=47412577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47412577</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Comparing Python Type Checkers: Typing Spec Conformance"]]></title><description><![CDATA[
<p>Are there any good static (i.e. not runtime) type checkers for arrays and tensors? E.g. "16x64x256 fp16" in numpy, pytorch, jax, cupy, or whatever framework. Would be pretty useful for ML work.</p>
]]></description><pubDate>Mon, 16 Mar 2026 15:42:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47400493</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=47400493</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47400493</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "My Homelab Setup"]]></title><description><![CDATA[
<p>Of typical homelabs that are posted and discussed.<p>The online activity of the homelab community leans towards those who treat it as an enjoyable hobby as opposed to a pragmatic solution.<p>I'm on the other side of the spectrum. Devops is (at best) a neutral activity; I personally do it because I strongly dislike companies being able to do a rug-pull. I don't think you'll see setups like mine too often, as there isn't anything to brag about or to show off.</p>
]]></description><pubDate>Sun, 08 Mar 2026 18:40:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47299823</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=47299823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47299823</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Nano Banana 2: Google's latest AI image generation model"]]></title><description><![CDATA[
<p>It still seems to have the same pitfalls as all the other image generation models. I ran it through my test prompt (wary of posting it here, lest it get trained on) - it still cannot generate something along the lines of "object A, but with feature X from Y", where that combo has never been seen in the training data. I wonder how the "astronaut riding a unicorn on the moon" problem was solved...<p>EDIT: after significant prompting, it actually solved it. I think it's the first one to do so in my testing.</p>
]]></description><pubDate>Thu, 26 Feb 2026 17:32:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47169199</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=47169199</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47169199</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Gemini 3 Deep Think"]]></title><description><![CDATA[
<p>It's a shame that it's not on OpenRouter. I hate platform lock-in, but the top-tier "deep think" models increasingly require their makers' own platforms.</p>
]]></description><pubDate>Thu, 12 Feb 2026 19:17:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46993617</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=46993617</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46993617</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Text classification with Python 3.14's ZSTD module"]]></title><description><![CDATA[
<p>Sweet! I love clever information theory things like that.<p>It goes the other way too. Given that LLMs are just lossless compression machines, I do sometimes wonder how much better they are at compressing plain text compared to zstd or similar. Should be easy to calculate...<p>EDIT: lossless when they're used as the probability estimator and paired with something like an arithmetic coder.</p>
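A toy illustration of the calculation (with zlib standing in for zstd, and a unigram character model standing in for the LLM as the probability estimator; all the names and numbers here are just for the sketch):

```python
import math
import zlib
from collections import Counter

text = "The quick brown fox jumps over the lazy dog. " * 200
data = text.encode()

# An ideal arithmetic coder driven by a probability model achieves the
# model's cross-entropy on the data. For a unigram character model that
# is just the empirical character entropy, in bits.
counts = Counter(data)
total = len(data)
model_bits = -sum(c * math.log2(c / total) for c in counts.values())

# Baseline: a real general-purpose compressor at max compression level.
zlib_bits = len(zlib.compress(data, 9)) * 8

print(f"unigram model: {model_bits / 8:.0f} bytes")
print(f"zlib -9:       {zlib_bits / 8:.0f} bytes")
```

On this highly repetitive input zlib wins easily, since a unigram model is blind to repetition; the point of the exercise is that swapping in an actual LLM's per-token log-probabilities as the estimator gives the compressed size directly, no coder implementation needed.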
]]></description><pubDate>Wed, 11 Feb 2026 23:19:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46982630</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=46982630</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46982630</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "GLM-5: Targeting complex systems engineering and long-horizon agentic tasks"]]></title><description><![CDATA[
<p>I just ran this with Gemini 3 Pro, Opus 4.6, and Grok 4 (the models I personally find the smartest for my work). All three answered correctly.</p>
]]></description><pubDate>Wed, 11 Feb 2026 18:12:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46978575</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=46978575</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46978575</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Everything – Locate files and folders by name instantly"]]></title><description><![CDATA[
<p>WizTree uses a similar idea - it loads the file system's indices directly and works almost instantly.</p>
]]></description><pubDate>Sun, 08 Feb 2026 21:41:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46938790</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=46938790</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46938790</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "AI fatigue is real and nobody talks about it"]]></title><description><![CDATA[
<p>Not the OP, but the new LLMs together with harnesses (OpenCode in my case) can handle larger scopes of work - so the workflow moves away from pair programming (single-file changes, small scope diffs) to full-feature PR reviewing.</p>
]]></description><pubDate>Sun, 08 Feb 2026 15:11:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46934888</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=46934888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46934888</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "My AI Adoption Journey"]]></title><description><![CDATA[
<p>Just to nitpick - compilers (and, to some extent, processors) weren't deterministic a few decades ago. Getting them to be deterministic has been a monumental effort - see build reproducibility.</p>
]]></description><pubDate>Fri, 06 Feb 2026 18:08:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46916084</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=46916084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46916084</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Lessons learned shipping 500 units of my first hardware product"]]></title><description><![CDATA[
<p>Try looking into videography COBs. I would recommend something like the Zhiyun Molus G300, SmallRig RC 220B or similar.<p>The absolute cheapest lumens per dollar COB would be the GVM SD300D, although I highly question the reliability and light quality.</p>
]]></description><pubDate>Wed, 04 Feb 2026 01:04:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46879956</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=46879956</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46879956</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Lessons learned shipping 500 units of my first hardware product"]]></title><description><![CDATA[
<p>Oh hey, I have one of these! I really like it. It's quite a unique design. People (especially where it snows and gets gloomy) should have more lumens, and I'd recommend this lamp to others.<p>One downside is that the active fan cooling design is questionable - the air goes over the top of the LEDs, and there aren't any dedicated exit holes, so the air is just squeezing through the very small gap between the glass and the heatsink. There are also blotches of paint that worsen the TIM contact between the PCB and the heatsink. I used a rotary tool to remove those blotches.</p>
]]></description><pubDate>Wed, 04 Feb 2026 00:58:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46879896</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=46879896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46879896</guid></item><item><title><![CDATA[New comment by Scene_Cast2 in "Ask HN: Is there anyone here who still uses slide rules?"]]></title><description><![CDATA[
<p>I bought a vintage (pre-WW1) circular slide rule. But it's too much of an "artifact" to use - I'm afraid I'd damage it.</p>
]]></description><pubDate>Tue, 03 Feb 2026 14:53:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46871721</link><dc:creator>Scene_Cast2</dc:creator><comments>https://news.ycombinator.com/item?id=46871721</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46871721</guid></item></channel></rss>