<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: b89kim</title><link>https://news.ycombinator.com/user?id=b89kim</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 01 May 2026 13:00:29 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=b89kim" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by b89kim in "Pyodide: a Python distribution based on WebAssembly"]]></title><description><![CDATA[
<p>ChatGPT's Canvas uses Pyodide for sandboxing, but Pyodide isn't designed for coding agents; a Node.js environment is usually a better fit for agents. Pyodide restricts server-side functionality, and fetching external URLs often requires proxying because of the browser sandbox. That said, Pyodide is still a good option for interactive visualizers or for deploying small web apps that require data processing.</p>
]]></description><pubDate>Tue, 17 Mar 2026 03:56:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47408419</link><dc:creator>b89kim</dc:creator><comments>https://news.ycombinator.com/item?id=47408419</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47408419</guid></item><item><title><![CDATA[New comment by b89kim in "How to run Qwen 3.5 locally"]]></title><description><![CDATA[
<p>I’ve been testing these on other tasks: IK, Kalman filters, and UI/DB boilerplate. Qwen3.5 is multimodal and specialized for JS/webdev and agentic coding, so it’s not surprising that an MoE model has some limitations in specific areas. I understand most LLMs have limited ability in mathematical/physical reasoning, and I don't think these tasks represent general performance. I'm just sharing personal experience for those curious.</p>
]]></description><pubDate>Sun, 08 Mar 2026 17:05:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47298919</link><dc:creator>b89kim</dc:creator><comments>https://news.ycombinator.com/item?id=47298919</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47298919</guid></item><item><title><![CDATA[New comment by b89kim in "How to run Qwen 3.5 locally"]]></title><description><![CDATA[
<p>I’ve been benchmarking GGUF quants for Python tasks across a few hardware configs.<p><pre><code>  - 4090: 27b-q4_k_m
  - A100: 27b-q6_k
  - 3*A100: 122b-a10b-q6_k_L
</code></pre>
Using the Qwen team's "thinking" presets, I found that non-agentic coding performance doesn't feel like a significant leap over unquantized GPT-OSS-120B. It shows some hallucination and repetition on MuJoCo code with the default presence penalty. The 27b-q4_k_m on a 4090 generates 30~35 tok/s with good quality.</p>
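<p>The pairings above roughly follow from weight size per quant level. A minimal sketch, assuming a hypothetical helper (gguf_vram_gb is my own name, not from any library) and approximate effective bits per weight for k-quants (q4_k_m ~4.8, q6_k ~6.6); the flat overhead term for KV cache and activations is a crude assumption:

```python
# Rough VRAM estimate for loading a GGUF quant: weights plus a flat
# overhead for KV cache and activations (crude assumption, not exact).

def gguf_vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    """Approximate GB of VRAM for a model with `params_b` billion
    parameters at the given effective bits per weight."""
    weight_gb = params_b * bits_per_weight / 8  # billions of params * bits -> GB
    return weight_gb + overhead_gb

# 27B at q4_k_m (~4.8 bits/weight): roughly 18 GB, fits a 24 GB 4090.
print(round(gguf_vram_gb(27, 4.8), 1))
# 27B at q6_k (~6.6 bits/weight): roughly 24 GB, comfortable on an 80 GB A100.
print(round(gguf_vram_gb(27, 6.6), 1))
```

By the same arithmetic, a 122B MoE at q6_k lands near 100 GB of weights, which is why it needs the 3*A100 box even though only ~10B params are active per token (active params set compute, not resident memory).</p>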
]]></description><pubDate>Sun, 08 Mar 2026 08:51:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47295706</link><dc:creator>b89kim</dc:creator><comments>https://news.ycombinator.com/item?id=47295706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47295706</guid></item></channel></rss>