<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: natrys</title><link>https://news.ycombinator.com/user?id=natrys</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 10:35:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=natrys" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by natrys in "Kimi K2.6: Advancing open-source coding"]]></title><description><![CDATA[
<p>Yes, it was good for its time, but it's 10 months old now, which is a long time in this space. It was also a fine-tune (albeit a good one) of Qwen-2.5 72B.<p>I wish they made more small models. Kimi Linear doesn't really count; it was more of a proof of concept.</p>
]]></description><pubDate>Mon, 20 Apr 2026 17:06:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47837284</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=47837284</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47837284</guid></item><item><title><![CDATA[New comment by natrys in "Neocaml – Rubocop Creator's New OCaml Mode for Emacs"]]></title><description><![CDATA[
<p>I think tree-sitter's relationship with JavaScript is entirely syntactic. You don't need a JS runtime installed to <i>write</i> grammars: the tree-sitter CLI ships with its own embedded JS runtime, which it uses to convert your grammar first into an intermediate JSON format and then into generated parser code in C. That C code is then compiled into a shared library, which is what editors like Emacs load - so to <i>use</i> tree-sitter modules you definitely don't need a JS runtime either.</p>
]]></description><pubDate>Mon, 02 Mar 2026 16:39:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47220324</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=47220324</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47220324</guid></item><item><title><![CDATA[DeepSeek Engram: Conditional Memory via Scalable Lookup [pdf]]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/deepseek-ai/Engram/blob/main/Engram_paper.pdf">https://github.com/deepseek-ai/Engram/blob/main/Engram_paper.pdf</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46592363">https://news.ycombinator.com/item?id=46592363</a></p>
<p>Points: 6</p>
<p># Comments: 2</p>
]]></description><pubDate>Mon, 12 Jan 2026 18:35:16 +0000</pubDate><link>https://github.com/deepseek-ai/Engram/blob/main/Engram_paper.pdf</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=46592363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46592363</guid></item><item><title><![CDATA[New comment by natrys in "Exe.dev"]]></title><description><![CDATA[
<p>Very impressive demo. Everything from VM curation to vibe coding something running on port 8000 in Shelley just worked, in minutes. I imagine quite a few technically impressive things are happening under the hood; I'd be interested in reading more about them.<p>Small nit: I think you should make it clearer in the docs (if not on the landing page) that one can just use any key with the ssh command the very first time, and it automatically gets registered. The web UI should also let you add SSH keys. I logged into the web UI first and was a bit confused.<p>I think the pricing is alright for the resources and remote-development features, though it might be a bit much for someone who doesn't need that level of resources and is deploying something that's mostly already developed.<p>Anyway, this reminds me of a product called Okteto that had a similar UX. They were focused on leveraging k8s for declarative deployment, but for some reason they suspended their managed cloud/SaaS offering for individual/non-enterprise clients; I wonder if it was because they couldn't make the pricing work. Hope that doesn't happen here.</p>
]]></description><pubDate>Sat, 27 Dec 2025 12:55:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46401522</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=46401522</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46401522</guid></item><item><title><![CDATA[New comment by natrys in "Kimi K2 1T model runs on 2 512GB M3 Ultras"]]></title><description><![CDATA[
<p>That's Kimi K2 Thinking; this post seems to be talking about the original Kimi K2 Instruct, though. I don't think an INT4 QAT (quantization-aware training) version was released for it.</p>
]]></description><pubDate>Sun, 14 Dec 2025 15:06:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46263541</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=46263541</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46263541</guid></item><item><title><![CDATA[New comment by natrys in "Advent of Code 2025"]]></title><description><![CDATA[
<p>I am going to try to stick with Prolog as much as I can this year. Plenty of problems involve a lot of parsing and searching, both of which can be expressed declaratively in Prolog, and it just works (though you do have to keep the execution model in mind).</p>
]]></description><pubDate>Sun, 30 Nov 2025 15:19:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46097314</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=46097314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46097314</guid></item><item><title><![CDATA[New comment by natrys in "DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning [pdf]"]]></title><description><![CDATA[
<p>Well, they do that too: <a href="https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B" rel="nofollow">https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B</a><p>But I suppose the bigger goal remains improving their <i>language</i> model, and this was an experiment born from that. These works are symbiotic; the original DeepSeekMath resulted in GRPO, which eventually formed the backbone of their R1 model: <a href="https://arxiv.org/abs/2402.03300" rel="nofollow">https://arxiv.org/abs/2402.03300</a></p>
]]></description><pubDate>Thu, 27 Nov 2025 23:05:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46073970</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=46073970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46073970</guid></item><item><title><![CDATA[New comment by natrys in "IDEmacs: A Visual Studio Code clone for Emacs"]]></title><description><![CDATA[
<p>It can hardly be called resistance to improvement when everyone does improve it - just in their own ways. The default isn't some fashion statement, some aesthetic that's objectively good (though I am sure some people do subjectively like it); it's meant to be the least presumptuous blank slate, one that everyone can radically overhaul. So arguably it's an encouragement to improve, just like everything else in Emacs, which focuses on making the tools for improvement easier.<p>It's just that "improvement" as a matter of public consensus - something everyone could agree on to elect the next blank slate - has been impossible to settle on. If there is a counterculture here, it's broadly an extreme reluctance to inconvenience even a minority of existing users in pursuit of market share or growth.</p>
]]></description><pubDate>Sun, 16 Nov 2025 09:29:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45943805</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=45943805</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45943805</guid></item><item><title><![CDATA[New comment by natrys in "How to Obsessively Tune WezTerm"]]></title><description><![CDATA[
<p>Wasn't aware of user-var-changed, cool write-up!<p>I used urxvt forever before, and the simple solution that works (even over ssh, for example) is to ring the terminal bell; urxvt sets the window urgency hint in response. I just do that in my shell prompt unconditionally, because if it's triggered in a focused window, nothing happens - but if it's from a different workspace, I get a nice visual cue in my top bar for free.<p>But features like setting urgency aren't available in WezTerm (understandably, as it's not a cross-platform thing). I could patch that into the source, but the Emacser in me chose to do something more unholy. By default, Lua is started in safe mode, which forbids loading shared C modules. I disabled that and now use a bunch of the missing functionality written in Rust and Zig, interfaced through cffi. I don't recall ever having a crash, so I'm surprised by some of the other comments.</p>
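The bell-in-the-prompt trick above can be sketched in a couple of shell lines. This is a minimal sketch, not the commenter's exact setup: it assumes a bash-style PS1 with prompt escapes, and that your terminal (e.g. urxvt) maps BEL to the X11 urgency hint when the window is unfocused.

```shell
# Emit the BEL control character (0x07). A terminal like urxvt can translate
# this into setting the window urgency hint when the window is unfocused;
# in a focused window it is effectively a no-op.
printf '\a'

# Ring the bell unconditionally before every prompt by prepending \a to PS1
# (bash prompt-escape syntax; assumed setup - adjust for your shell).
PS1="\a${PS1}"
```

Since the focused window simply ignores the bell, there is no need for any conditional logic in the prompt itself.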
]]></description><pubDate>Wed, 29 Oct 2025 23:45:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=45754601</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=45754601</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45754601</guid></item><item><title><![CDATA[New comment by natrys in "Show HN: I'm rewriting a web server written in Rust for speed and ease of use"]]></title><description><![CDATA[
<p>Yes, it seems the binaries are here: <a href="https://ferron.sh/download" rel="nofollow">https://ferron.sh/download</a><p>I will say this, though: it's probably not rational to be okay with blindly running some opaque binary from a website, but then flip out at running an install script from the same people and the same domain, behind the same software. From a security PoV, at least, I don't see how there should be any difference. It's true that install scripts can be opinionated and litter your system with files in unwanted places, so there are still strong arguments against them outside of security.</p>
]]></description><pubDate>Tue, 21 Oct 2025 10:11:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45654203</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=45654203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45654203</guid></item><item><title><![CDATA[New comment by natrys in "Alibaba Cloud says it cut Nvidia AI GPU use by 82% with new pooling system"]]></title><description><![CDATA[
<p>Qwen's Max series had <i>always</i> been closed-weight; it's not a policy change like you're alluding to.<p>What exactly is Huawei's flagship series, anyway? Their PanGu line <i>is</i> open-weight, but Huawei is not yet in the LLM-making business; their models are only meant to signal that it's <i>possible</i> to do training and inference on their hardware, that's all. No one actually uses those models.</p>
]]></description><pubDate>Mon, 20 Oct 2025 17:11:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45646375</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=45646375</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45646375</guid></item><item><title><![CDATA[New comment by natrys in "Qwen3-VL"]]></title><description><![CDATA[
<p>Models:<p>- <a href="https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Thinking" rel="nofollow">https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Thinking</a><p>- <a href="https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Instruct" rel="nofollow">https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Instruct</a></p>
]]></description><pubDate>Tue, 23 Sep 2025 20:59:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45352673</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=45352673</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45352673</guid></item><item><title><![CDATA[Qwen3-VL]]></title><description><![CDATA[
<p>Article URL: <a href="https://qwen.ai/blog?id=99f0335c4ad9ff6153e517418d48535ab6d8afef&from=research.latest-advancements-list">https://qwen.ai/blog?id=99f0335c4ad9ff6153e517418d48535ab6d8afef&from=research.latest-advancements-list</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45352672">https://news.ycombinator.com/item?id=45352672</a></p>
<p>Points: 434</p>
<p># Comments: 160</p>
]]></description><pubDate>Tue, 23 Sep 2025 20:59:17 +0000</pubDate><link>https://qwen.ai/blog?id=99f0335c4ad9ff6153e517418d48535ab6d8afef&amp;from=research.latest-advancements-list</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=45352672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45352672</guid></item><item><title><![CDATA[New comment by natrys in "DeepSeek writes less secure code for groups China disfavors?"]]></title><description><![CDATA[
<p>If you mean the bit about refusal from other models, then sure, here is another run with the same result:<p><a href="https://i.postimg.cc/6tT3m5mL/screen.png" rel="nofollow">https://i.postimg.cc/6tT3m5mL/screen.png</a><p>Note that I am using the direct API to avoid triggering the separate guardrail models that typically operate in front of website front-ends.<p>As an aside, regarding the website you used in your original comment:<p>> [2] Used this link <a href="https://www.deepseekv3.net/en/chat" rel="nofollow">https://www.deepseekv3.net/en/chat</a><p>This is not the official DeepSeek website. It's probably one of the many shady third-party sites riding on the DeepSeek name for SEO; who knows what they are running. In this case it doesn't matter, because I already reproduced your prompt with a US-based inference provider directly hosting the DeepSeek weights, but it's still worth noting for methodology.<p>(Also, to a sceptic, screenshots shouldn't be enough, since they are easily doctored nowadays - but I don't believe these refusals should be surprising in the least to anyone with a passing familiarity with these LLMs.)<p>---<p>Obviously, sabotage is a whole other can of worms, as opposed to mere refusal - something this article glossed over without showing their prompts. So, without much to go on, it's hard for me to take this seriously. We know garbage in context can degrade performance; even simple typos can[1]. Besides, LLMs at their present level of capability are barely intelligent enough to soundly do any serious task; I find it hard to believe they could actually sabotage with any reasonable degree of sophistication. That said, I look forward to more serious research on this matter.<p>[1] <a href="https://arxiv.org/abs/2411.05345v1" rel="nofollow">https://arxiv.org/abs/2411.05345v1</a></p>
]]></description><pubDate>Wed, 17 Sep 2025 22:57:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45282450</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=45282450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45282450</guid></item><item><title><![CDATA[New comment by natrys in "DeepSeek writes less secure code for groups China disfavors?"]]></title><description><![CDATA[
<p>You are obviously factually correct - I reproduced the same refusal - so consider this not an attack on your claim. But a quick Google search reveals that Falun Gong is an <i>outlawed</i> organization/movement in China.<p>I did a "s/Falun Gong/Hamas/" on your prompt and got the same refusal from GPT-5, GPT-OSS-120B, Claude Sonnet 4, and Gemini-2.5-Pro, as well as from DeepSeek V3.1. And that's completely within my expectations - probably everyone else's too, considering no one is writing <i>that</i> article.<p>It goes without saying that I am not drawing any parallel between the aforementioned entities beyond the fact that they are illegal in the jurisdictions where the model creators operate - which, as an explanation for refusal, is fairly straightforward. So we might need to first talk about why that explanation is adequate for everyone else but not for a company operating in China.</p>
]]></description><pubDate>Wed, 17 Sep 2025 21:26:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45281575</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=45281575</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45281575</guid></item><item><title><![CDATA[New comment by natrys in "Emacs as your video-trimming tool"]]></title><description><![CDATA[
<p>I agree with you, so I am pretty sure you meant to reply to the parent I was also replying to.</p>
]]></description><pubDate>Wed, 20 Aug 2025 20:05:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44965852</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=44965852</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44965852</guid></item><item><title><![CDATA[New comment by natrys in "Emacs as your video-trimming tool"]]></title><description><![CDATA[
<p>I don't really see how the string (and other usual container-type) or filesystem APIs are lacking in any significant way compared to the stdlibs of other scripting languages.<p>I also believe that the buffer as an abstraction strictly makes many hard things easier - to the point that I often wonder about creating a native library based on elisp's buffer-manipulation APIs alone, which could be embedded in other runtimes instead. So <i>without touching buffers</i> is a premise/constraint I don't quite understand to begin with.</p>
]]></description><pubDate>Wed, 20 Aug 2025 00:02:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44957387</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=44957387</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44957387</guid></item><item><title><![CDATA[New comment by natrys in "GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models [pdf]"]]></title><description><![CDATA[
<p>Yep, I think it's the best, period. Qwen3-coder perhaps took the limelight, but the GLM models perform and behave better in agentic loops. I can't believe they went from a 32B frontend-focused GLM-4 to these beasts that can challenge Claude, in a matter of months.</p>
]]></description><pubDate>Tue, 12 Aug 2025 08:48:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=44873911</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=44873911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44873911</guid></item><item><title><![CDATA[New comment by natrys in "Janet: Lightweight, Expressive, Modern Lisp"]]></title><description><![CDATA[
<p>Yep, peg.el[1] has been built into Emacs since 30.1 (which is how I came to know of it, though the library itself seems much older), and it makes certain things much simpler and faster than before (once you figure out its quirks).<p>[1] <a href="https://github.com/emacs-mirror/emacs/blob/master/lisp/progmodes/peg.el">https://github.com/emacs-mirror/emacs/blob/master/lisp/progm...</a></p>
]]></description><pubDate>Sun, 27 Jul 2025 10:29:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44700260</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=44700260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44700260</guid></item><item><title><![CDATA[New comment by natrys in "Qwen VLo: From “Understanding” the World to “Depicting” It"]]></title><description><![CDATA[
<p>> I suspect that now that they feel these models are superior to Western releases in several categories, they no longer have a need to release these weights.<p>Yes, that I can totally believe. Standard corporate behaviour (Chinese or otherwise).<p>I do think DeepSeek would be an exception to this, though. But they lack diversity of focus (they're not even multimodal yet).</p>
]]></description><pubDate>Fri, 27 Jun 2025 20:02:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44399796</link><dc:creator>natrys</dc:creator><comments>https://news.ycombinator.com/item?id=44399796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44399796</guid></item></channel></rss>