<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: polyrand</title><link>https://news.ycombinator.com/user?id=polyrand</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 22:14:13 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=polyrand" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by polyrand in "MacBook Neo"]]></title><description><![CDATA[
<p>I find this a very exciting release. I was actually hoping we would somehow get macOS on the mobile 'A' chips someday, and I think this is better than putting 'M' chips in an iPad.<p>My iPad with an 'M1' chip actually drains its battery faster than much older iPads when both are locked with the screen off. I eventually concluded that the 'M' chip's lowest possible power state is much higher than the 'A' chip's, so even small background wake-ups use more energy.<p>I'm still hoping that one day we get an iPad with macOS.</p>
]]></description><pubDate>Wed, 04 Mar 2026 21:14:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47253946</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=47253946</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47253946</guid></item><item><title><![CDATA[Python HTTP server using Erlang and BEAM]]></title><description><![CDATA[
<p>Article URL: <a href="https://hornbeam.dev/">https://hornbeam.dev/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47054219">https://news.ycombinator.com/item?id=47054219</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 17 Feb 2026 22:18:41 +0000</pubDate><link>https://hornbeam.dev/</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=47054219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47054219</guid></item><item><title><![CDATA[New comment by polyrand in "Start all of your commands with a comma (2009)"]]></title><description><![CDATA[
<p>I do this, and it's a huge quality-of-life improvement. Not so much because of shadowing existing binaries, but because of better command auto-completion. For example: I have a bunch of tmux utilities, and they all start with `,t`, which is a much less polluted command-name prefix than plain `t`.<p>But I'm now facing the problem that LLM agents don't like this: when I instruct them to run certain tools, they remove the leading comma. It's usually fixed with one extra sentence in the prompt, but it's still inconvenient.</p>
]]></description><pubDate>Sat, 07 Feb 2026 16:47:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46925245</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46925245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46925245</guid></item><item><title><![CDATA[New comment by polyrand in "My AI Adoption Journey"]]></title><description><![CDATA[
<p>> a period of inefficiency<p>I think this is something people ignore, and it's significant. The only way to get good at coding with LLMs is to actually try it, even if it's inefficient or slower at first. It's just another skill to develop [0].<p>And it's not really about using every plugin and feature available; in fact, many plugins and features are counter-productive. Just learn how to prompt and steer the LLM better.<p>[0]: <a href="https://ricardoanderegg.com/posts/getting-better-coding-llms-agents/" rel="nofollow">https://ricardoanderegg.com/posts/getting-better-coding-llms...</a></p>
]]></description><pubDate>Thu, 05 Feb 2026 21:59:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46905979</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46905979</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46905979</guid></item><item><title><![CDATA[High Performance LLM Inference Operator Library from Tencent]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/Tencent/hpc-ops">https://github.com/Tencent/hpc-ops</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46777146">https://news.ycombinator.com/item?id=46777146</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 27 Jan 2026 08:36:35 +0000</pubDate><link>https://github.com/Tencent/hpc-ops</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46777146</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46777146</guid></item><item><title><![CDATA[New comment by polyrand in "Apple, What Have You Done?"]]></title><description><![CDATA[
<p>I share the same feeling. I waited as long as possible to upgrade to iOS 26 / macOS Tahoe.<p>Two days ago, I finally upgraded. Liquid Glass is one of the worst things I've ever seen in terms of design. It reminds me of when I customized cheap old Android phones or Linux distros just "to look cool". Cool-looking: yes. Unusable: also yes. Tasteful design: almost absent.<p>The increased border radius on every element alone makes it hideous. Apps with a search bar over a scrollable list look like a CSS bug when the bar sits on top of the items: neither the search bar nor the element underneath is legible. This applies to most of Liquid Glass's transparency effects, where neither the elements above nor below the "glass" are clearly visible, and the added value is zero.<p>The thing is, I can still adapt to it, or tweak transparency and contrast. But I've seen elderly relatives struggle just because WhatsApp decided to add the "Meta AI" floating button. I can't imagine what these inaccessible UI changes will do.</p>
]]></description><pubDate>Mon, 26 Jan 2026 10:26:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46763925</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46763925</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46763925</guid></item><item><title><![CDATA[New comment by polyrand in "Many Small Queries Are Efficient in SQLite"]]></title><description><![CDATA[
<p>Don't forget that if you're running SQLite on something like EBS, many small queries may not be efficient.<p>I say this as a huge SQLite fan, but be aware of what kind of storage backs your instance.</p>
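To make the point concrete, here's a minimal in-process sketch (pure stdlib, no EBS involved). On local or in-memory storage the gap between one big query and many small ones is dominated by per-statement overhead; on network-attached storage like EBS, a cold page read can add real I/O latency per lookup, which is where the "many small queries" pattern stops being cheap.

```python
# Sketch: one big query vs. many small queries against the same SQLite table.
# Uses an in-memory database, so this only shows statement overhead, not the
# storage-latency effect described above.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO t (v) VALUES (?)",
                 [(f"row{i}",) for i in range(1000)])

# Many small queries: one statement per row.
start = time.perf_counter()
rows_small = [conn.execute("SELECT v FROM t WHERE id = ?", (i,)).fetchone()[0]
              for i in range(1, 1001)]
t_small = time.perf_counter() - start

# One query fetching everything at once.
start = time.perf_counter()
rows_big = [r[0] for r in conn.execute("SELECT v FROM t ORDER BY id")]
t_big = time.perf_counter() - start

# Both approaches return the same data; only the cost profile differs.
assert rows_small == rows_big
```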
]]></description><pubDate>Sat, 24 Jan 2026 21:08:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46747694</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46747694</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46747694</guid></item><item><title><![CDATA[Software Willy Wonka]]></title><description><![CDATA[
<p>Article URL: <a href="https://ricardoanderegg.com/posts/if-llms-replace-programmers-be-willy-wonka/">https://ricardoanderegg.com/posts/if-llms-replace-programmers-be-willy-wonka/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46710611">https://news.ycombinator.com/item?id=46710611</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 21 Jan 2026 19:47:48 +0000</pubDate><link>https://ricardoanderegg.com/posts/if-llms-replace-programmers-be-willy-wonka/</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46710611</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46710611</guid></item><item><title><![CDATA[AMD Ryzen AI Halo]]></title><description><![CDATA[
<p>Article URL: <a href="https://twitter.com/AMDRyzen/status/2013642938106986713">https://twitter.com/AMDRyzen/status/2013642938106986713</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46699021">https://news.ycombinator.com/item?id=46699021</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 20 Jan 2026 23:20:02 +0000</pubDate><link>https://twitter.com/AMDRyzen/status/2013642938106986713</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46699021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46699021</guid></item><item><title><![CDATA[New comment by polyrand in "GLM-4.7-Flash"]]></title><description><![CDATA[
<p>I've been using z.ai models through their coding plan (an incredible price/performance ratio), and since GLM-4.7 I'm even more confident in the results it gives me. I use it with both regular claude-code and opencode (more opencode lately, since claude-code is obviously designed to work much better with Anthropic models).<p>Also note that this is the "-Flash" version. They were previously at 4.5-Flash (they skipped 4.6-Flash). It's meant to be the Haiku equivalent; their coding plan docs even say this model should be used as `ANTHROPIC_DEFAULT_HAIKU_MODEL`.</p>
]]></description><pubDate>Mon, 19 Jan 2026 17:51:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46682173</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46682173</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46682173</guid></item><item><title><![CDATA[GLM-Image: Open-Source Auto-Regressive Model for Image Generation]]></title><description><![CDATA[
<p>Article URL: <a href="https://z.ai/blog/glm-image">https://z.ai/blog/glm-image</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46617033">https://news.ycombinator.com/item?id=46617033</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 14 Jan 2026 15:21:03 +0000</pubDate><link>https://z.ai/blog/glm-image</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46617033</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46617033</guid></item><item><title><![CDATA[New comment by polyrand in "Fabrice Bellard Releases MicroQuickJS"]]></title><description><![CDATA[
<p>I'm not sure about the impact of these; I guess it depends on the context where the engine is used. But there already seem to be exploits for it:<p><a href="https://x.com/itszn13/status/2003707921679679563" rel="nofollow">https://x.com/itszn13/status/2003707921679679563</a><p><a href="https://x.com/itszn13/status/2003808443761938602" rel="nofollow">https://x.com/itszn13/status/2003808443761938602</a></p>
]]></description><pubDate>Wed, 24 Dec 2025 15:59:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46376653</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46376653</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46376653</guid></item><item><title><![CDATA[New comment by polyrand in "GLM-4.7: Advancing the Coding Capability"]]></title><description><![CDATA[
<p>That's interesting, thanks for sharing!<p>It's a pattern I saw more often with claude-code, at least in how frequently it says it (much improved now). But it's true that this pattern alone isn't enough to infer the training methods.</p>
]]></description><pubDate>Tue, 23 Dec 2025 06:13:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46362891</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46362891</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46362891</guid></item><item><title><![CDATA[New comment by polyrand in "GLM-4.7: Advancing the Coding Capability"]]></title><description><![CDATA[
<p>A few comments mention distillation. If you use claude-code with the z.ai coding plan, I think it quickly becomes obvious they trained on other models' outputs. Even the "you're absolutely right" was there. But that's OK; the price/performance ratio is unmatched.</p>
]]></description><pubDate>Mon, 22 Dec 2025 22:29:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46359927</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46359927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46359927</guid></item><item><title><![CDATA[Jax-JS: a machine learning library and compiler for the web]]></title><description><![CDATA[
<p>Article URL: <a href="https://jax-js.com/">https://jax-js.com/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46359882">https://news.ycombinator.com/item?id=46359882</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 22 Dec 2025 22:24:07 +0000</pubDate><link>https://jax-js.com/</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46359882</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46359882</guid></item><item><title><![CDATA[New comment by polyrand in "Structured outputs create false confidence"]]></title><description><![CDATA[
<p>I enjoyed the post. I was about to link the "Let Me Speak Freely" paper and the "Say What You Mean" response from dottxt, but those have already been posted in the comments.<p>I'm a huge fan of structured outputs, but I also recently started splitting the two steps, and I think it has several upsides that normally go undiscussed:<p>1. It separates concerns: schema validation errors don't invalidate the whole LLM response. If the only failure is in generating schema-compliant tokens (something I've seen frequently), retries are much cheaper.<p>2. You keep both the original free-text response AND the structured output, and each has value on its own.<p>3. In line with point 1, it lets you use a more expensive (reasoning) model for the free-text generation, then a smaller model like gemini-2.5-flash to convert the output to structured text.</p>
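Here's a minimal sketch of that split, with the model calls stubbed out. `reasoning_model` and `extraction_model` are hypothetical stand-ins, not any real API; the point is the control flow: only the cheap extraction step is retried when the output fails validation, and the expensive free-text answer survives either way.

```python
# Two-step structured output: expensive free-text generation, then cheap
# extraction into a schema, with retries scoped to the extraction step only.
import json

def reasoning_model(prompt: str) -> str:
    # Stub for an expensive reasoning-model call.
    return "The capital of France is Paris, population roughly 2.1 million."

def extraction_model(text: str, attempt: int) -> str:
    # Stub for a cheap extraction-model call; the first attempt returns
    # truncated JSON to simulate a schema-compliance failure.
    if attempt == 0:
        return '{"city": "Paris", "population": '
    return '{"city": "Paris", "population": 2100000}'

def structured_answer(prompt: str, max_retries: int = 3) -> dict:
    # Step 1: one expensive call; its output is kept regardless of step 2.
    free_text = reasoning_model(prompt)
    # Step 2: retry only the extraction, so a validation failure never
    # invalidates (or re-bills) the expensive response.
    for attempt in range(max_retries):
        try:
            data = json.loads(extraction_model(free_text, attempt))
            if not {"city", "population"} <= data.keys():
                continue  # schema mismatch: retry extraction only
            return {"free_text": free_text, "structured": data}
        except json.JSONDecodeError:
            continue  # malformed JSON: retry extraction only
    raise RuntimeError("extraction failed after retries")

result = structured_answer("What is the capital of France?")
```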
]]></description><pubDate>Mon, 22 Dec 2025 09:35:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46352596</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46352596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46352596</guid></item><item><title><![CDATA[More databases should be single-threaded]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.konsti.xyz/more-databases-should-be-single-threaded/">https://blog.konsti.xyz/more-databases-should-be-single-threaded/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46352234">https://news.ycombinator.com/item?id=46352234</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 22 Dec 2025 08:24:54 +0000</pubDate><link>https://blog.konsti.xyz/more-databases-should-be-single-threaded/</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46352234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46352234</guid></item><item><title><![CDATA[Preact without a build step, including routing and signals]]></title><description><![CDATA[
<p>Article URL: <a href="https://ricardoanderegg.com/posts/preact-without-build-step-including-routing/">https://ricardoanderegg.com/posts/preact-without-build-step-including-routing/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46344528">https://news.ycombinator.com/item?id=46344528</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 21 Dec 2025 13:00:04 +0000</pubDate><link>https://ricardoanderegg.com/posts/preact-without-build-step-including-routing/</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46344528</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46344528</guid></item><item><title><![CDATA[GNU recutils: Plain text database]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.gnu.org/software/recutils/">https://www.gnu.org/software/recutils/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46265811">https://news.ycombinator.com/item?id=46265811</a></p>
<p>Points: 161</p>
<p># Comments: 49</p>
]]></description><pubDate>Sun, 14 Dec 2025 19:08:57 +0000</pubDate><link>https://www.gnu.org/software/recutils/</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46265811</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46265811</guid></item><item><title><![CDATA[New comment by polyrand in "A “frozen” dictionary for Python"]]></title><description><![CDATA[
<p>A frozen dictionary would be very welcome. You can already get something similar with MappingProxyType [0]:<p><pre><code>  from types import MappingProxyType

  d = {"a": 1, "b": 2}
  frozen = MappingProxyType(d)

  print(frozen["a"])  # 1

  # Raises TypeError: 'mappingproxy' object
  # does not support item assignment
  frozen["b"] = "new"
</code></pre>
[0]: <a href="https://docs.python.org/3/library/types.html#types.MappingProxyType" rel="nofollow">https://docs.python.org/3/library/types.html#types.MappingPr...</a></p>
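One caveat worth adding: MappingProxyType is a read-only *view*, not an immutable copy, so mutations to the underlying dict still show through the proxy:

```python
from types import MappingProxyType

d = {"a": 1}
frozen = MappingProxyType(d)

# Writes through the proxy are rejected.
try:
    frozen["b"] = 2
except TypeError:
    pass

# But the proxy is a live view: mutating the underlying
# dict is still possible, and visible through the proxy.
d["b"] = 2
assert frozen["b"] == 2
```

If you want true immutability you have to snapshot the dict first, e.g. `MappingProxyType(dict(d))`.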
]]></description><pubDate>Thu, 11 Dec 2025 11:33:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46230124</link><dc:creator>polyrand</dc:creator><comments>https://news.ycombinator.com/item?id=46230124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46230124</guid></item></channel></rss>