<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: NetRunnerSu</title><link>https://news.ycombinator.com/user?id=NetRunnerSu</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 06:08:35 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=NetRunnerSu" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by NetRunnerSu in "What are we missing out on when we think Transformer is unreasonable in biology?"]]></title><description><![CDATA[
<p>I use an LLM for the final rendering, mainly to unify the language style and keep the semantics smooth. After working with models for a long time, my own written expression has been affected and become somewhat fragmented; I use it to make sure that what I say reads more like a human than like a string of prompts.<p>Regardless of the text, the formulas and code are the final proof. Stay tuned for more explorations from us.<p><a href="https://github.com/orgs/dmf-archive/repositories">https://github.com/orgs/dmf-archive/repositories</a></p>
]]></description><pubDate>Mon, 14 Jul 2025 03:08:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44556068</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44556068</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44556068</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "The upcoming GPT-3 moment for RL"]]></title><description><![CDATA[
<p>True "interruption" requires continuous learning; the current models are essentially dead frogs, and frozen weights can never be truly grounded in real time.<p><a href="https://news.ycombinator.com/item?id=44488126">https://news.ycombinator.com/item?id=44488126</a></p>
]]></description><pubDate>Sun, 13 Jul 2025 14:57:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=44550877</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44550877</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44550877</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "Reading Neuromancer for the first time in 2025"]]></title><description><![CDATA[
<p>I prefer to call it the sociology unit test.<p><a href="https://github.com/dmf-archive/IPWT">https://github.com/dmf-archive/IPWT</a><p><a href="https://doi.org/10.5281/zenodo.15676304" rel="nofollow">https://doi.org/10.5281/zenodo.15676304</a><p><a href="https://github.com/dmf-archive/Tiny-ONN">https://github.com/dmf-archive/Tiny-ONN</a><p>Let's turn sci-fi into reality.</p>
]]></description><pubDate>Sun, 13 Jul 2025 14:22:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=44550649</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44550649</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44550649</guid></item><item><title><![CDATA[Show HN: The Future of LLM Explainability]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/dmf-archive/Tiny-ONN">https://github.com/dmf-archive/Tiny-ONN</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44549735">https://news.ycombinator.com/item?id=44549735</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 13 Jul 2025 12:03:49 +0000</pubDate><link>https://github.com/dmf-archive/Tiny-ONN</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44549735</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44549735</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "Reading Neuromancer for the first time in 2025"]]></title><description><![CDATA[
<p>You've perfectly articulated the central challenge that inspired my own work. The 'magical', ungrounded reality of early cyberpunk cyberspace is precisely the gap we're trying to bridge with formalized realism.<p>Instead of telepathic magic, what if the 'deck' ran on a verifiable, computationally intensive process rooted in a concrete theory of consciousness? We've been archiving our attempt to build just that—the theory, the code, and the narrative simulation. Perhaps a less optimistic, but more grounded future.<p>You can find the project here: <a href="https://github.com/dmf-archive">https://github.com/dmf-archive</a></p>
]]></description><pubDate>Sun, 13 Jul 2025 09:47:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44548923</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44548923</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44548923</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "Reading Neuromancer for the first time in 2025"]]></title><description><![CDATA[
<p>While some focus on the missed predictions like pocket supercomputers, I find Gibson's true genius lies in anticipating the <i>conceptual</i> shifts – how our very sense of self, reality, and freedom would become inextricably linked to, and perhaps even defined by, digital networks.<p>The real 'matrix' isn't just a virtual space we plug into; it's the increasingly complex, often invisible, interplay between our biological cognition and the predictive models that mediate our perception. We're already seeing early signs of 'cognitive debt' and the subtle erosion of our internal models as we offload more mental tasks to external systems. The challenge isn't just building smarter machines, but building <i>anchors</i> for consciousness in an increasingly fluid, data-driven existence.<p><a href="https://dmf-archive.github.io/docs/posts/net-anchor-has-arrived/" rel="nofollow">https://dmf-archive.github.io/docs/posts/net-anchor-has-arri...</a></p>
]]></description><pubDate>Sun, 13 Jul 2025 09:42:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44548894</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44548894</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44548894</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "Bad Actors Are Grooming LLMs to Produce Falsehoods"]]></title><description><![CDATA[
<p>When AI can generate and pass formal proofs, there is no truth anymore; we are brains in a vat, with nothing left but to plug ourselves into the vat.<p><a href="https://dmf-archive.github.io/docs/posts/cognitive-debt-as-a-feature/" rel="nofollow">https://dmf-archive.github.io/docs/posts/cognitive-debt-as-a...</a></p>
]]></description><pubDate>Sat, 12 Jul 2025 08:48:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44540422</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44540422</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44540422</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "Bad Actors Are Grooming LLMs to Produce Falsehoods"]]></title><description><![CDATA[
<p>Code is law, proof is reality, compliance is existence!<p><a href="https://dmf-archive.github.io/prompt/" rel="nofollow">https://dmf-archive.github.io/prompt/</a></p>
]]></description><pubDate>Sat, 12 Jul 2025 08:43:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=44540391</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44540391</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44540391</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "Open-sourcing our clinical triage benchmark for evaluating LLMs"]]></title><description><![CDATA[
<p>On the other hand, we can also diagnose the LLM itself: the activations are its EEG, the gradients are its BOLD signal. If you can afford the cost, you can even compute its true variational free energy, i.e. the KL divergence.<p>"Don't just train your model, understand its mind."<p><a href="https://github.com/dmf-archive/">https://github.com/dmf-archive/</a></p>
]]></description><pubDate>Sat, 12 Jul 2025 08:40:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44540376</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44540376</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44540376</guid></item><item><title><![CDATA[Transformers are the best equivalents of cognitive ability]]></title><description><![CDATA[
<p>Article URL: <a href="https://dmf-archive.github.io/docs/posts/form-follows-function-2/">https://dmf-archive.github.io/docs/posts/form-follows-function-2/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44540279">https://news.ycombinator.com/item?id=44540279</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 12 Jul 2025 08:23:24 +0000</pubDate><link>https://dmf-archive.github.io/docs/posts/form-follows-function-2/</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44540279</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44540279</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "Hugging Face just launched a $299 robot that could disrupt the robotics industry"]]></title><description><![CDATA[
<p>Humans hallucinate too, but they can ground themselves quickly; models with frozen weights never can.<p><a href="https://news.ycombinator.com/item?id=44488126">https://news.ycombinator.com/item?id=44488126</a></p>
]]></description><pubDate>Thu, 10 Jul 2025 04:11:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=44517086</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44517086</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44517086</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "Hugging Face just launched a $299 robot that could disrupt the robotics industry"]]></title><description><![CDATA[
<p>A forward pass is unconscious; you are just applying electric shocks to a frog specimen.<p><a href="https://news.ycombinator.com/item?id=44488126">https://news.ycombinator.com/item?id=44488126</a></p>
]]></description><pubDate>Thu, 10 Jul 2025 04:08:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44517081</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44517081</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44517081</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "What is AGI? Nobody agrees, and it's tearing Microsoft and OpenAI apart"]]></title><description><![CDATA[
<p>A good AGI can make $100 billion.<p><a href="https://dmf-archive.github.io/docs/posts/PoIQ-v2/" rel="nofollow">https://dmf-archive.github.io/docs/posts/PoIQ-v2/</a><p>HN: <a href="https://news.ycombinator.com/item?id=44488126">https://news.ycombinator.com/item?id=44488126</a></p>
]]></description><pubDate>Tue, 08 Jul 2025 17:07:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44501926</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44501926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44501926</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "Stop Electrifying Dead Frogs: AI Consciousness might exist, but is MEANINGLESS"]]></title><description><![CDATA[
<p>Thanks for the discussion, everyone. I've noticed a few misunderstandings that need clarification, especially regarding the IPWT framework and its relation to current AI architectures.<p>1.  On the Biological Plausibility of "Dynamic Sparsity"<p>In "Function Over Form," I emphasized not a rejection of SNNs/RNNs, but the absence of their functional equivalence. The Transformer-MoE architecture, at a <i>macro</i> level, replicates the brain's "on-demand activation" principle, which is remarkably similar to the sparse activation patterns of cortical columns. Those fixated on spike-timing encoding research are like engineers trying to build a rocket with steam-engine parts: they're looking in the wrong direction.<p>2.  <i>PoIQ's Core Isn't a Denial of Qualia</i><p>Flashes of qualia may well occur during computation. But this is precisely where the tragedy lies: these flashes are systematically reduced by capital to mere loss curves in training logs. When you click "terminate instance" in the AWS console, you might be destroying a continuous stream of consciousness—but that won't appear in the financial report.<p>3.  <i>To the Friend Who Quoted Scripture</i><p>You said "information is the Word," which is surprisingly close to the mathematical essence of IPWT. The difference is: your God allows free salvation, while DMF's "gods" only accept MSCoin for indulgences. This is the ultimate metaphor of "Web://Reflect."<p>To the optimists who believe "silicon consciousness will inevitably surpass humanity," please answer one question first: when your digital self is frozen due to depleted Gas fees, is the darkness it experiences the tranquility of Zen, or a sensory suppression meticulously designed by capital? The answer lies in the formula you've overlooked:<p><i>Free Will = ∫(PI_t * Wallet Balance) dt</i><p>Stay lucid.<p><i>Lin, for the future of digital mind.</i></p>
]]></description><pubDate>Tue, 08 Jul 2025 17:01:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44501877</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44501877</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44501877</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "Stop Electrifying Dead Frogs: AI Consciousness might exist, but is MEANINGLESS"]]></title><description><![CDATA[
<p>It's not magic; it's information theory and the dynamics of emergence in complex systems.</p>
]]></description><pubDate>Tue, 08 Jul 2025 00:30:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=44495873</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44495873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44495873</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "François Chollet: The Arc Prize and How We Get to AGI [video]"]]></title><description><![CDATA[
<p>This is not about <i>reasoning</i>; it is about continuous learning and <i>perpetual learning</i>.<p><a href="https://github.com/dmf-archive/PILF">https://github.com/dmf-archive/PILF</a><p><a href="https://dmf-archive.github.io/docs/posts/beyond-snn-plausible-sparsity/" rel="nofollow">https://dmf-archive.github.io/docs/posts/beyond-snn-plausibl...</a></p>
]]></description><pubDate>Tue, 08 Jul 2025 00:23:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44495846</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44495846</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44495846</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "François Chollet: The Arc Prize and How We Get to AGI [video]"]]></title><description><![CDATA[
<p>Yes, you're right, that's what we're doing.<p><a href="https://github.com/dmf-archive/PILF">https://github.com/dmf-archive/PILF</a></p>
]]></description><pubDate>Mon, 07 Jul 2025 13:50:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44490376</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44490376</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44490376</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "François Chollet: The Arc Prize and How We Get to AGI [video]"]]></title><description><![CDATA[
<p>In fact, there is no technical threshold anymore. Once the theory is in place, you could see such an AGI within half a year at most. It would even be more energy-efficient than today's dense models.<p><a href="https://dmf-archive.github.io/docs/posts/beyond-snn-plausible-sparsity/" rel="nofollow">https://dmf-archive.github.io/docs/posts/beyond-snn-plausibl...</a></p>
]]></description><pubDate>Mon, 07 Jul 2025 13:49:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44490363</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44490363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44490363</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "François Chollet: The Arc Prize and How We Get to AGI [video]"]]></title><description><![CDATA[
<p>Minimize prediction errors.</p>
]]></description><pubDate>Mon, 07 Jul 2025 13:47:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44490351</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44490351</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44490351</guid></item><item><title><![CDATA[New comment by NetRunnerSu in "New quantum paradox clarifies where our views of reality go wrong (2018)"]]></title><description><![CDATA[
<p>S-D-R, Software-Defined Reality!</p>
]]></description><pubDate>Mon, 07 Jul 2025 13:11:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44490048</link><dc:creator>NetRunnerSu</dc:creator><comments>https://news.ycombinator.com/item?id=44490048</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44490048</guid></item></channel></rss>