<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Zondartul</title><link>https://news.ycombinator.com/user?id=Zondartul</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 01 May 2026 08:40:42 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Zondartul" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Zondartul in "Show HN: Axe – A 12MB binary that replaces your AI framework"]]></title><description><![CDATA[
<p>It is easier to trust the correctness and reliability of an LLM when you treat it as a glorified NLP function with a very narrow scope and limited responsibilities. That is to say, LLMs rarely mess up specific, low-level instructions; open-ended, long-horizon tasks are where they go wrong.</p>
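<p>A minimal sketch of that narrow-scope usage (call_llm is a hypothetical stand-in for any completion API, not a specific library): the model only labels sentiment, and any output outside the allowed set is rejected rather than trusted.</p><pre><code>
# Treating the LLM as a narrow NLP function: one small job,
# with the output validated instead of trusted.
# call_llm is a hypothetical stand-in for any completion API.

ALLOWED = {"positive", "negative", "neutral"}

def classify_sentiment(text: str, call_llm) -> str:
    prompt = (
        "Label the sentiment of the following text with exactly one word: "
        "positive, negative, or neutral.\n\n" + text
    )
    label = call_llm(prompt).strip().lower()
    if label not in ALLOWED:
        raise ValueError(f"unexpected model output: {label!r}")
    return label
</code></pre>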
]]></description><pubDate>Thu, 12 Mar 2026 18:35:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47355235</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=47355235</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47355235</guid></item><item><title><![CDATA[New comment by Zondartul in "Kilo Code: Speedrunning open source coding AI"]]></title><description><![CDATA[
<p>In my unqualified opinion, LLMs would do better at niche languages (or even at specific versions of mainstream languages) and at niche frameworks if they were better at consulting the documentation for that language or framework. For example, the user could give the LLM a link to the docs or an offline copy, and the LLM would prioritise the docs over its pretrained knowledge. Currently this is not feasible because 1. the limited context window is shared with the actual code, and 2. RAG is a one-way injection into the LLM; the LLM usually won't "ask for a specific docs page" even when it probably should.</p>
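<p>A toy sketch of the two-way loop described above (ask_llm and the DOCS dictionary are hypothetical stand-ins, not a real product's API): instead of one-shot RAG injection, the model may answer "FETCH page" to request a specific docs page, and the loop feeds that page back in.</p><pre><code>
# Two-way retrieval: the model can ask for a specific docs page
# by replying "FETCH page"; the loop injects it and retries.
# ask_llm is a hypothetical completion function; DOCS is an offline copy.

DOCS = {
    "routing": "Framework routing docs ...",
    "orm": "Framework ORM docs ...",
}

def answer_with_docs(question: str, ask_llm, max_rounds: int = 3) -> str:
    context = ""
    for _ in range(max_rounds):
        reply = ask_llm(
            f"Docs fetched so far:\n{context}\n\n"
            f"Question: {question}\n"
            "Answer directly, or reply 'FETCH page' to read a docs page "
            f"(available: {', '.join(DOCS)})."
        )
        if reply.startswith("FETCH "):
            page = reply.split(maxsplit=1)[1].strip()
            context += "\n" + DOCS.get(page, f"(no page named {page})")
        else:
            return reply
    return reply  # out of rounds; return the last reply as-is
</code></pre>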
]]></description><pubDate>Wed, 26 Mar 2025 20:38:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=43487029</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=43487029</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43487029</guid></item><item><title><![CDATA[New comment by Zondartul in "Why the weak nuclear force is short range"]]></title><description><![CDATA[
<p>At some point our understanding of fundamental reality will be limited not by how much the physicists have uncovered but by how many years of university it would take to explain it. In the end each of us only has one lifetime.</p>
]]></description><pubDate>Wed, 15 Jan 2025 17:41:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=42714262</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=42714262</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42714262</guid></item><item><title><![CDATA[New comment by Zondartul in "Cognitive load is what matters"]]></title><description><![CDATA[
<p>Comments that explain the intent, rather than the implementation, are the more useful kind. And when the intent doesn't match the actual code, that's a good hint: it might be why the code doesn't work.</p>
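<p>A minimal illustration of the difference, with made-up names:</p><pre><code>
i = 0

# Implementation comment (restates the code, adds nothing):
# increment i by one
i += 1

# Intent comment (explains why; if it stops matching the code,
# that mismatch is a likely bug):
# skip the header row so we only parse data records
i += 1
</code></pre>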
]]></description><pubDate>Thu, 26 Dec 2024 14:44:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=42515465</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=42515465</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42515465</guid></item><item><title><![CDATA[New comment by Zondartul in "The unbearable slowness of being: Why do we live at 10 bits/s?"]]></title><description><![CDATA[
<p>Ask stupid questions, receive stupid answers.</p>
]]></description><pubDate>Wed, 18 Dec 2024 15:50:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=42451439</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=42451439</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42451439</guid></item><item><title><![CDATA[New comment by Zondartul in "Making memcpy(NULL, NULL, 0) well-defined"]]></title><description><![CDATA[
<p>What does "speculative" mean in this case? I understand it as CPU-level speculative execution, i.e. what happens on a branch mis-prediction, but that shouldn't have any architecturally visible effects (or else we'd have segfaults all the time due to executing code that didn't really happen).</p>
]]></description><pubDate>Wed, 11 Dec 2024 17:52:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=42390578</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=42390578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42390578</guid></item><item><title><![CDATA[New comment by Zondartul in "NASA Investigates Laser-Beam Welding in a Vacuum for In-Space Manufacturing"]]></title><description><![CDATA[
<p>Normal welding needs heat to melt the metals. Cold welding happens without heat. Two metal parts will cold-weld along any smooth, touching faces once the air molecules that keep them separated are gone.</p>
]]></description><pubDate>Thu, 14 Nov 2024 11:08:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=42135074</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=42135074</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42135074</guid></item><item><title><![CDATA[New comment by Zondartul in "NASA Investigates Laser-Beam Welding in a Vacuum for In-Space Manufacturing"]]></title><description><![CDATA[
<p>Cold welding is the unintentional, spontaneous joining of two metal parts in vacuum. You don't want that to happen, especially if the parts are meant to move.<p>Normal welding is the intentional application of heat to partially melt two parts at the seam, so that they "mix" in a semi-liquid state and become one part when they solidify. Welding may or may not use a third material (filler metal) to aid the process.</p>
]]></description><pubDate>Thu, 14 Nov 2024 05:43:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=42133434</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=42133434</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42133434</guid></item><item><title><![CDATA[New comment by Zondartul in "The surprising effectiveness of test-time training for abstract reasoning [pdf]"]]></title><description><![CDATA[
<p>ARC problems are too hard for me. I'm no longer sure I'm generally intelligent.</p>
]]></description><pubDate>Tue, 12 Nov 2024 19:18:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=42118650</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=42118650</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42118650</guid></item><item><title><![CDATA[New comment by Zondartul in "Show HN: LlamaPReview – AI GitHub PR reviewer that learns your codebase"]]></title><description><![CDATA[
<p>By "learns" do you mean "just shove the entire codebase into the context window", or does actual training-on-my-data take place?</p>
]]></description><pubDate>Wed, 30 Oct 2024 19:13:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=41999040</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=41999040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41999040</guid></item><item><title><![CDATA[New comment by Zondartul in "The Copenhagen Book: general guideline on implementing auth in web applications"]]></title><description><![CDATA[
<p>To be fair, once someone has physical access to the machine, their gaining full access is just a matter of time and effort. So at that point it's security-through-too-much-effort-to-bother.</p>
]]></description><pubDate>Fri, 11 Oct 2024 04:49:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=41806271</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=41806271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41806271</guid></item><item><title><![CDATA[New comment by Zondartul in "It's Time to Stop Taking Sam Altman at His Word"]]></title><description><![CDATA[
<p>Future Neuralink collab? Grab the experience of qualia right from the brains of those who do the experiencing.</p>
]]></description><pubDate>Sat, 05 Oct 2024 19:05:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=41752126</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=41752126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41752126</guid></item><item><title><![CDATA[New comment by Zondartul in "It's Time to Stop Taking Sam Altman at His Word"]]></title><description><![CDATA[
<p>If we figure out AGI, that still doesn't mean a singularity. I'm going to speak as though we're on the brink of AI outthinking every human on earth (we are not), but bear with me; I want to make it clear we're not going jobless any time soon.<p>For starters, we still need the AI (LLMs for now) to be more efficient, i.e. not require a datacenter to train and deploy. Yes, I know there are tiny models you can run on your home PC, but that's comparing a bicycle to a jet.<p>Second, for an AGI to meaningfully improve itself, it has to be smarter than not just any one person, but the sum total of all the people it took to invent it. Until then, no single AI can replace our human tech sphere of activity.<p>As long as there are limits to how smart an AI can get, there are places where humans can contribute economically. If there is ever to be a singularity, it's going to be a slow one, and large human-run AI companies will be part of the process for many decades still.</p>
]]></description><pubDate>Sat, 05 Oct 2024 18:59:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=41752084</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=41752084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41752084</guid></item><item><title><![CDATA[New comment by Zondartul in "GPTs and Hallucination"]]></title><description><![CDATA[
<p>You can say "bullshit". LLMs bullshit all the time: they talk without regard to the truth or falsity of their statements. The term also doesn't presuppose that the truth is known, nor deny it, so it should satisfy both camps; unlike "hallucination", which implies that truth and fiction are separate.<p>I wonder if there is some sort of transition between recalling declarative facts (some of which have been shown to be decodable from activations) on one hand, and completing the sentence with the most fitting word on the other. The dream that "hallucination" can be eliminated requires that the two states be separable, yet it is not evident to me that these "facts" are at all accessible without a sentence to complete.</p>
]]></description><pubDate>Tue, 10 Sep 2024 18:35:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=41504062</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=41504062</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41504062</guid></item><item><title><![CDATA[New comment by Zondartul in "A Word, Please: Coffee-shop prompt stirs ChatGPT to brew up bland copy"]]></title><description><![CDATA[
<p>My hunch is that since LLMs are trained on a per-word (okay, per-token) basis, vacuous verbosity is overrepresented.<p>If you have one normal sentence and one overly verbose one, the latter will have more tokens and therefore more weight.</p>
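<p>A toy illustration of the weighting, assuming the usual setup where the training loss is summed per token: the verbose sentence simply contributes more loss terms, and thus more gradient signal.</p><pre><code>
# Toy example: with a per-token loss, a verbose sequence contributes
# more terms to the training objective than a terse one.

terse = ["The", "shop", "opens", "at", "nine", "."]
verbose = ["The", "coffee", "shop", ",", "nestled", "in", "the",
           "heart", "of", "town", ",", "opens", "its", "doors",
           "at", "nine", "."]

per_token_loss = 1.0  # pretend every token is equally hard

print(len(terse) * per_token_loss)    # 6.0 loss terms
print(len(verbose) * per_token_loss)  # 17.0 loss terms, ~3x the weight
</code></pre>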
]]></description><pubDate>Sun, 08 Sep 2024 18:48:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=41482245</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=41482245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41482245</guid></item><item><title><![CDATA[New comment by Zondartul in "How the Higgs field gives mass to elementary particles"]]></title><description><![CDATA[
<p>Rule #1 of talking about the aether is "don't call it aether". Nowadays it's "spacetime this" and "mass-energy tensor that" and "properties of vacuum something else"... and we still end up with empty space behaving like a funky fluid.</p>
]]></description><pubDate>Tue, 03 Sep 2024 18:22:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=41437597</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=41437597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41437597</guid></item><item><title><![CDATA[New comment by Zondartul in "Maker Skill Trees"]]></title><description><![CDATA[
<p>I applaud the effort, but these templated skill trees are just not very good, imo. I have only one issue, but it's a big one:<p>They're missing a tree structure, so there is no ordering of skills (learn the easy stuff before the hard stuff, because there is a learning curve to anything).<p>They might be good as prints to hang on your wall, but in their current state they're more "achievement lists" than anything resembling a tech tree.</p>
]]></description><pubDate>Wed, 28 Aug 2024 18:59:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=41382973</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=41382973</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41382973</guid></item><item><title><![CDATA[New comment by Zondartul in "AI solves International Math Olympiad problems at silver medal level"]]></title><description><![CDATA[
<p>As far as I understand, and I may be wrong here, the system is composed of two networks: Gemini and AlphaZero. Gemini, being an ordinary LLM with some fine-tunes, only does the translation from natural to formal language. Then AlphaZero solves the problem. AlphaZero, unburdened by natural language and only dealing with "playing a game in the proof space" (where the "moves" are commands to the Lean theorem prover), does not hallucinate in the same way an LLM does, because it is nothing like an LLM.</p>
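<p>A rough sketch of that division of labour (formalize and search_proof are hypothetical stand-ins, not DeepMind's actual interfaces):</p><pre><code>
# Two-stage pipeline as described above; both stages are stubbed.

def formalize(problem_nl: str) -> str:
    # LLM stage (Gemini in the comment): natural language -> Lean statement.
    # A real system would call a fine-tuned model here.
    return "theorem stub : True  -- formalized from: " + problem_nl

def search_proof(lean_statement: str) -> list:
    # AlphaZero-style stage: tree search over proof "moves" (Lean tactics),
    # with the theorem prover checking every step, so no free-text output.
    return ["trivial"]

def solve(problem_nl: str) -> list:
    statement = formalize(problem_nl)  # natural language -> formal statement
    return search_proof(statement)     # formal statement -> verified tactics
</code></pre>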
]]></description><pubDate>Fri, 26 Jul 2024 05:52:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=41076106</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=41076106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41076106</guid></item><item><title><![CDATA[New comment by Zondartul in "NASA's Curiosity rover discovers a surprise in a Martian rock"]]></title><description><![CDATA[
<p>It's cool how some minerals are just lying out in the open on Mars. On Earth they would have been washed away or buried under the soil.</p>
]]></description><pubDate>Fri, 19 Jul 2024 14:52:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=41007131</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=41007131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41007131</guid></item><item><title><![CDATA[New comment by Zondartul in "Disney's Internal Slack Breached? NullBulge Leaks 1.1 TiB of Data"]]></title><description><![CDATA[
<p>With how big and aggressive Disney is, I'd expect it to be under ongoing litigation 24/7/365.</p>
]]></description><pubDate>Sat, 13 Jul 2024 19:19:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=40956199</link><dc:creator>Zondartul</dc:creator><comments>https://news.ycombinator.com/item?id=40956199</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40956199</guid></item></channel></rss>