<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: d4rkn0d3z</title><link>https://news.ycombinator.com/user?id=d4rkn0d3z</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 16 May 2026 11:10:37 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=d4rkn0d3z" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by d4rkn0d3z in "AI: A Semiclassical Synthesis of Objective Collapse and Galactic Dynamics"]]></title><description><![CDATA[
<p>Put this together using AI.  Questions, comments, feedback, and hurled invective welcome.</p>
]]></description><pubDate>Wed, 17 Dec 2025 10:25:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46300246</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46300246</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46300246</guid></item><item><title><![CDATA[AI: A Semiclassical Synthesis of Objective Collapse and Galactic Dynamics]]></title><description><![CDATA[
<p>Article URL: <a href="https://drive.google.com/file/d/1yE3Yamp_qX46J1keVFvvqztV69QjNZI_/view?usp=drive_link">https://drive.google.com/file/d/1yE3Yamp_qX46J1keVFvvqztV69QjNZI_/view?usp=drive_link</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46300245">https://news.ycombinator.com/item?id=46300245</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 17 Dec 2025 10:25:53 +0000</pubDate><link>https://drive.google.com/file/d/1yE3Yamp_qX46J1keVFvvqztV69QjNZI_/view?usp=drive_link</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46300245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46300245</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "AI should explain itself (more)"]]></title><description><![CDATA[
<p>We need pedagogy to displace ignorance and fear.</p>
]]></description><pubDate>Mon, 15 Dec 2025 10:43:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46272756</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46272756</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46272756</guid></item><item><title><![CDATA[AI should explain itself (more)]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.noemamag.com/the-politics-of-superintelligence/">https://www.noemamag.com/the-politics-of-superintelligence/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46272755">https://news.ycombinator.com/item?id=46272755</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 15 Dec 2025 10:43:39 +0000</pubDate><link>https://www.noemamag.com/the-politics-of-superintelligence/</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46272755</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46272755</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "An Interesting Set of Artifacts"]]></title><description><![CDATA[
<p>Soon, very soon, there will be no way to discern the output of AI from that of a human, just as there is no way to determine, given only their sum, whether I added two numbers by hand on paper or pushed "+" on my calculator.  Would you refuse to read an article because someone used a calculator to obtain its results? What about numerical integrations? Would you refuse to read a book because a printing press made it?  Perhaps books should carry labels that say "WARNING: this book was printed by a machine".<p>What if, over the next few decades, science is creatively destroyed, until no science remains that isn't in some way produced using AI?  I'm quite critical when using these tools, but I'm not a Luddite.</p>
]]></description><pubDate>Sun, 14 Dec 2025 10:18:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46262063</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46262063</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46262063</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "An Interesting Set of Artifacts"]]></title><description><![CDATA[
<p>I thought the word "artifact" gave a clue; so much for subtlety. Is there a reflexivity to avoiding AI?</p>
]]></description><pubDate>Sun, 14 Dec 2025 01:27:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46260013</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46260013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46260013</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "An Interesting Set of Artifacts"]]></title><description><![CDATA[
<p>Isn't that in the text above?  Seems clear enough, no?  I mean, it wasn't as if I asked how the day was going; there was significant prompting.</p>
]]></description><pubDate>Sat, 13 Dec 2025 19:31:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46257184</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46257184</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46257184</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "An Interesting Set of Artifacts"]]></title><description><![CDATA[
<p>I had an interaction with ChatGPT that I thought was fun and interesting for three reasons:<p>1) There could be something novel about this, even if it's just the way it all hangs together.<p>2) If not, then it could be mundane but consistent output, which is encouraging with respect to previous interactions.<p>3) It could be wrong, but then it is quite convincingly so; I have not yet checked the details.<p>Do you agree? Tell me what you think.</p>
]]></description><pubDate>Sat, 13 Dec 2025 19:00:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46256965</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46256965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46256965</guid></item><item><title><![CDATA[An Interesting Set of Artifacts]]></title><description><![CDATA[
<p>Article URL: <a href="https://drive.google.com/drive/folders/18uXq5Leil2rAY5ICOfKGryRFyDFYMh80?usp=drive_link">https://drive.google.com/drive/folders/18uXq5Leil2rAY5ICOfKGryRFyDFYMh80?usp=drive_link</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46256964">https://news.ycombinator.com/item?id=46256964</a></p>
<p>Points: 1</p>
<p># Comments: 7</p>
]]></description><pubDate>Sat, 13 Dec 2025 19:00:03 +0000</pubDate><link>https://drive.google.com/drive/folders/18uXq5Leil2rAY5ICOfKGryRFyDFYMh80?usp=drive_link</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46256964</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46256964</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "How geometry is fundamental for chess"]]></title><description><![CDATA[
<p>Geometry is fundamental, period.</p>
]]></description><pubDate>Fri, 12 Dec 2025 08:16:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46241973</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46241973</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46241973</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "Horses: AI progress is steady. Human equivalence is sudden"]]></title><description><![CDATA[
<p>It might be better to think about what a horse is to a human: mostly, a horse is an energy slave.  The history of humanity is a story about how many energy slaves are available to the average human.<p>In times past, the only people on earth whose standard of living was raised to a level that allowed them to cast their gaze upon the stars were the kings and their courts, vassals, and noblemen.  As time has passed, we have learned to make technologies that provide enough energy slaves to the common man that everyone lives a life a king would have envied in times past.<p>So the question arises: does AI, or the pursuit of AGI, provide more or fewer energy slaves to the common man?</p>
]]></description><pubDate>Tue, 09 Dec 2025 08:17:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46202535</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46202535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46202535</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "Bag of words, have mercy on us"]]></title><description><![CDATA[
<p>A better analogy would be a virus. In some sense LLMs, and all other very sophisticated technologies, lean on our resources to replicate themselves.  With LLMs you actually do have a projection of intelligence in the language domain, even though it is rather corpse-like, as though you shot intelligence in the face and shoved its body in the direction of language, just so you could draw a chalk outline around it.<p>Despite all that, one can adopt the view that an LLM is a form of silicon-based life akin to a virus, and that we are its environmental hosts, exerting selective pressure and supplying much-needed energy.  Whether that life is intelligent or not is another issue, which is probably related to whether an LLM can tell that a cat cannot be, at the same time and in the same respect, not a cat.  The paths through the meaning manifold constructed by an LLM are not geodesic, and they are not reversible, while in human reason the correct path is lossless. An LLM literally "thinks" that up is a little bit down, and vice versa, by design.</p>
]]></description><pubDate>Mon, 08 Dec 2025 13:02:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46191715</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46191715</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46191715</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "Bag of words, have mercy on us"]]></title><description><![CDATA[
<p>An LLM creates a high-fidelity probabilistic model of human language.  The hope is to capture the input/output of the various hierarchical formal and semiformal systems of logic that transit from human to human, which we know as "intelligence".<p>Unfortunately, its corpus is bound to contain noise and nonsense that follows no formal reasoning system but contributes to the ill-advised idea that an AI should sound like a human to be considered intelligent.  Therefore it is not a bag of words but perhaps a bag of probabilities.  This is important because the fundamental problem is that an LLM is not able, by design, to correctly model the most fundamental precept of human reason, namely the law of non-contradiction. An LLM must, I repeat, must assign nonvanishing probability to both sides of a contradiction, and what's worse, the winning side loses: since long chains of reason are modelled probabilistically, the longer the chain, the less likely an LLM is to follow it. Moreover, whenever there is actual debate on an issue, such that the corpus is ambiguous, the LLM necessarily becomes chaotic on that issue.<p>I literally just had an AI prove the foregoing with some rigor, and in the very next prompt I asked it to check my logical reasoning for consistency, and it claimed it was able to do so (->|<-).</p>
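<p>The point about long chains can be sketched numerically. This is a toy illustration of my own, assuming (as a simplification) an independent per-step probability of staying on the correct path:</p>
<pre>

```python
# Toy sketch: if each step of a reasoning chain is followed correctly
# with probability p_step (assumed independent), the probability of
# completing the whole chain decays geometrically with its length.
def chain_probability(p_step: float, n_steps: int) -> float:
    """Probability of following every step of an n-step chain."""
    return p_step ** n_steps

for n in (1, 5, 20):
    print(n, chain_probability(0.9, n))
```

</pre>
<p>Even at 90% per step, a twenty-step chain is completed only about 12% of the time, which is the sense in which the winning side loses on long derivations.</p>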
]]></description><pubDate>Mon, 08 Dec 2025 10:26:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46190667</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46190667</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46190667</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "The AI wildfire is coming. it's going to be painful and healthy"]]></title><description><![CDATA[
<p>A bad thing with some positive side effects is not a good thing.  Wildfire is bad; too-frequent wildfires will turn forest into savannah.  I didn't see where this part of the analogy was discussed.</p>
]]></description><pubDate>Mon, 08 Dec 2025 08:31:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46189808</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46189808</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46189808</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "Idempotency keys for exactly-once processing"]]></title><description><![CDATA[
<p>Again, stopping the execution of the handler based on an ID is not idempotency, but rather a corrective measure for the fact that the handler is <i>not</i> idempotent.  Idempotency is a property that says the handler can run as many times as I like, diametrically opposed to the notion that it can run only once.</p>
]]></description><pubDate>Mon, 08 Dec 2025 08:07:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46189654</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46189654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46189654</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "Idempotency keys for exactly-once processing"]]></title><description><![CDATA[
<p>We are saying the same thing using different words.  I view this as a strategy, with a great deal of overhead, for dealing with a lack of idempotency in handlers.  So I guess I would call it a non-idempotency key, since only a handler that is not idempotent will need it.  I think the usual name strays too close to a contradiction in terms.</p>
]]></description><pubDate>Sun, 07 Dec 2025 10:09:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46180600</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46180600</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46180600</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "Most technical problems are people problems"]]></title><description><![CDATA[
<p>And most people problems are math problems.</p>
]]></description><pubDate>Sat, 06 Dec 2025 09:01:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46171796</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46171796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46171796</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "Idempotency keys for exactly-once processing"]]></title><description><![CDATA[
<p>I agree; this whole thread seems to turn the concept of idempotency on its head.  As far as I know, an idempotent operation is one that can be repeated without ill effect, rather than the opposite, which is a process that will cause errors if executed repeatedly.<p>The article doesn't propose anything especially different from Lamport clocks. What it suggests is a way to deal with non-idempotent message handlers.</p>
]]></description><pubDate>Sat, 06 Dec 2025 08:37:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46171681</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46171681</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46171681</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "Some models of reality are bolder than others"]]></title><description><![CDATA[
<p>Time can also be viewed as a length.</p>
]]></description><pubDate>Fri, 05 Dec 2025 20:31:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46166849</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46166849</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46166849</guid></item><item><title><![CDATA[New comment by d4rkn0d3z in "Some models of reality are bolder than others"]]></title><description><![CDATA[
<p>Yes of course, not only can we but it makes everything so much easier.</p>
]]></description><pubDate>Fri, 05 Dec 2025 20:28:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46166818</link><dc:creator>d4rkn0d3z</dc:creator><comments>https://news.ycombinator.com/item?id=46166818</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46166818</guid></item></channel></rss>