<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: TuringTest</title><link>https://news.ycombinator.com/user?id=TuringTest</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 16:46:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=TuringTest" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by TuringTest in "Agents need control flow, not more prompts"]]></title><description><![CDATA[
<p>I would just reverse the architecture of the whole system. Build a classic deterministic program, and use LLMs as heuristics that adapt the system to its environment: the functions you call in the 'if' and 'switch' statements to decide where the system should go.<p>I see this as the most robust way to build a predictable system that runs in a controlled way, taking advantage of probabilistic AIs while reducing the impact of their hallucinations.<p>LLMs simply can't be trusted to follow instructions in the general case, no matter how much you constrain them. The power of very large probabilistic models is that they basically solved the <i>frame problem</i> of classic AI: logical reasoning didn't work for general tasks because you can't encode all common-sense knowledge as axioms, and inference engines lost their way trying to solve large problems.<p>LLMs fix those handicaps: they contain huge amounts of real-world knowledge and can efficiently find the facts relevant to the problem at hand. Any autonomous system using them should exploit this strength.</p>
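A minimal sketch of this inversion (all names are hypothetical; `classify` stands in for the LLM call, stubbed with keyword matching here so the example runs offline):

```python
def classify(ticket: str, labels: list[str]) -> str:
    """Hypothetical LLM call: pick exactly one label for the ticket.
    A real system would prompt a model and validate that its answer
    is one of `labels`; the stub below just matches keywords."""
    for label in labels:
        if label in ticket.lower():
            return label
    return labels[-1]  # deterministic fallback, never free-form output

def handle(ticket: str) -> str:
    # The program, not the model, owns the control flow: the LLM only
    # answers the branch question, and can only answer from a fixed set.
    label = classify(ticket, ["refund", "bug", "other"])
    if label == "refund":
        return "routed to billing"
    elif label == "bug":
        return "routed to engineering"
    else:
        return "routed to triage"

print(handle("I found a bug in checkout"))  # routed to engineering
```

Even if the model hallucinates, the worst case is a wrong branch out of a known set, not an unpredictable program.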
]]></description><pubDate>Thu, 07 May 2026 19:30:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=48053732</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=48053732</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48053732</guid></item><item><title><![CDATA[New comment by TuringTest in "Isaac Asimov: The Last Question (1956)"]]></title><description><![CDATA[
<p>Yes, my point is that those three arguments may be compelling, but they assume that reality is correlated with the shape of their thoughts. What they have in common is that they all miss the insight that you need to actually test your assumptions to improve your certainty, and that's not feasible for theoretical all-powerful entities that can bend reality.</p>
]]></description><pubDate>Sat, 18 Apr 2026 11:35:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47815073</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=47815073</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47815073</guid></item><item><title><![CDATA[New comment by TuringTest in "Isaac Asimov: The Last Question (1956)"]]></title><description><![CDATA[
<p>I find Pascal's wager to be of the same nature as Aquinas' Five Ways to prove God, or accelerationists' claims about the inevitability of a Singularity: believing that your own rational argument can be the basis for proving a fact about reality merely because it feels internally consistent.<p>Needless to say, I don't find them at all convincing. This 'nothing' is much better than invoking unconvincing, unneeded supernatural entities.</p>
]]></description><pubDate>Fri, 17 Apr 2026 21:28:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47810772</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=47810772</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47810772</guid></item><item><title><![CDATA[New comment by TuringTest in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>No, I'm saying that they are not cause and effect but coevolution. Their agitprop could have such huge impact because of the conditions of workers in Tsarist Russia and the Republic of China respectively. It wouldn't have worked in a different society; so no, they didn't single-handedly create the conditions for their own power; there was a previous substrate they could build on.</p>
]]></description><pubDate>Wed, 15 Apr 2026 05:19:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47774974</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=47774974</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47774974</guid></item><item><title><![CDATA[New comment by TuringTest in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>You think Lenin and Mao didn't have an ideology behind them in their societies that supported them? Why did people follow their orders then, mind control?</p>
]]></description><pubDate>Tue, 14 Apr 2026 18:24:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47769322</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=47769322</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47769322</guid></item><item><title><![CDATA[New comment by TuringTest in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>> I would argue plenty of significant societal changes were caused by the behavior of relatively small number of people<p>Specific breaking points in history, yeah, maybe. But that's possible because they're <i>well connected</i> people near the center of the network.<p>Those breaking points are possible because those few people either share a viewpoint held by a large number of their peers or benefit from knowledge accumulated throughout their civilization. Think of how every dictator needs support from a huge following to gain power (and how easy it is to find another dictator to replace them if they die), or how often breakthrough discoveries are made by multiple people at the same time. There's always a last straw that breaks the camel's back, but a lone wolf hardly ever has a significant impact on society at large; they need a receptive audience. Humans are herd animals.<p>Following the metaphor, the butterfly effect is only possible because a storm was brewing in the first place; the butterfly's wings only decide where it will appear. Butterfly wings just don't have that much energy.<p>History is told from the perspective of kings, but kings can reign only within a society that believes in their divine right to rule.</p>
]]></description><pubDate>Tue, 14 Apr 2026 04:03:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47761095</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=47761095</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47761095</guid></item><item><title><![CDATA[New comment by TuringTest in "The rational conclusion of doomerism is violence"]]></title><description><![CDATA[
<p>Who gets to say that the demos is fundamentally flawed? Each in-group has its own opinions on what counts as a flaw.<p>Society evolves through epiphenomena caused by the behaviour of the majority; the fact that some minorities view that evolution as 'flawed' cannot change it, unless they're able to influence the majority to see it that way too.<p>Now, democracy is essentially a way for everybody to broadcast their views on society's flaws in non-violent ways. The alternative is that some groups broadcast their opinions in violent ways, and we have learned to see that situation as undesirable.</p>
]]></description><pubDate>Mon, 13 Apr 2026 17:59:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47755678</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=47755678</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47755678</guid></item><item><title><![CDATA[New comment by TuringTest in "Android’s new sideload settings will carry over to new devices"]]></title><description><![CDATA[
<p>The problem with that thought is that Google isn't creating a good solution; it's creating this specific one.</p>
]]></description><pubDate>Sun, 29 Mar 2026 07:17:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47561042</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=47561042</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47561042</guid></item><item><title><![CDATA[New comment by TuringTest in "HyperCard discovery: Neuromancer, Count Zero, Mona Lisa Overdrive (2022)"]]></title><description><![CDATA[
<p>We're finally getting there. The model of web notebooks looks a lot like HyperCard stacks in terms of usability; the only thing missing is someone packaging them in an easy-to-use distribution and sharing environment that does not depend on users installing their own web server.<p>And if that package includes a reasonable local LLM model, creating simple programs could become even easier for end users than it ever was with HyperCard.</p>
]]></description><pubDate>Tue, 10 Mar 2026 20:20:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47328324</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=47328324</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47328324</guid></item><item><title><![CDATA[New comment by TuringTest in "I don't know how you get here from “predict the next word”"]]></title><description><![CDATA[
<p>Isn't that the same as <i>compressing the whole book</i>, in a special differential format that compares how the text looks from any given point before and after?</p>
]]></description><pubDate>Thu, 26 Feb 2026 08:08:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47163289</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=47163289</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47163289</guid></item><item><title><![CDATA[New comment by TuringTest in "Coding agents have replaced every framework I used"]]></title><description><![CDATA[
<p>I can only speak for myself, but for me it's all about the syntax. I am terrible at recalling the exact names of all the functions in a library or the parameters of an API, which really slows me down when writing code. I've also explored all kinds of programming languages across different paradigms, which makes it hard to recall the exact syntax of operators (is comparison '=' or '==' in this language? Are comments // or /*? How many parameters does this function take, and in what order...) or control structures. But I'm good at high-level programming concepts, so it's easy to state what I want in technical language and let the LLM find the exact syntax and command names for me.<p>I guess if you specialise in maintaining a code base with a single language and a fixed set of libraries, it becomes easier to remember all the details, but for me it will always be less effort to just search for the names of whatever tools I want to include in a program at any point.</p>
]]></description><pubDate>Sat, 07 Feb 2026 15:04:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46924418</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=46924418</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46924418</guid></item><item><title><![CDATA[New comment by TuringTest in "Understanding Neural Network, Visually"]]></title><description><![CDATA[
<p>It is possible to try it, and some people do (high-speed trading is just that, plus taking advantage of the privileged information that speed provides to react before anyone else).<p>However, there are two fundamental problems with computational prediction. The first, obviously, is accuracy. A model is a compressed memorization of everything observed so far; a prediction from it just projects the observed patterns into the future. In a chaotic system, that only goes so far; the most regular, predictable patterns are obvious to everybody and give less return, and the chaotic system states where prediction would be most valuable are the least reliable. You cannot build a perfect oracle that would fix that.<p>The second problem is more insidious. Even if you were able to build a perfect oracle, acting on its predictions would become part of the system itself. That would change the outcomes, making the system behave differently from the data the model was trained on, and thus making the model less reliable. If several people do it at the same time, there's no way to retrain the model to take the new behaviour into account.<p>There's the possibility (but not a guarantee) of reaching a fixed point, where a Nash equilibrium appears and the system settles into a stable cycle, but that's not likely in a changing environment where everybody tries to outdo everyone else.</p>
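The second problem can be shown with a toy simulation (entirely invented dynamics, purely to illustrate the feedback): once enough traders act on a learned pattern, their own actions erase it.

```python
# A price mean-reverts toward 100: that's the learnable pattern.
# Each trader who front-runs the predicted reversion moves the
# price early, eroding the very pattern the model learned.
def next_price(last: float, traders_using_model: int) -> float:
    drift = (100.0 - last) * 0.5                   # the pattern
    front_running = traders_using_model * drift * 0.2
    return last + drift - front_running

def simulate(traders: int, steps: int = 5) -> float:
    price = 90.0
    for _ in range(steps):
        price = next_price(price, traders)
    return price

calm = simulate(traders=0)     # nobody exploits it: converges toward 100
crowded = simulate(traders=5)  # fully arbitraged: the pattern vanishes
print(calm, crowded)           # 99.6875 90.0
```

With no traders the reversion plays out as trained; with five, the front-running exactly cancels the drift and the "oracle" has nothing left to predict.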
]]></description><pubDate>Sat, 07 Feb 2026 11:49:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46923106</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=46923106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46923106</guid></item><item><title><![CDATA[New comment by TuringTest in "What Is Ruliology?"]]></title><description><![CDATA[
<p>The word he's looking for is "formal system".<p>For some reason he doesn't like doing mathematical proofs, so he shuns the practice of doing them, and invented a new word to describe that way of using formal systems.<p><a href="https://en.wikipedia.org/wiki/Formal_system" rel="nofollow">https://en.wikipedia.org/wiki/Formal_system</a></p>
]]></description><pubDate>Sat, 07 Feb 2026 11:03:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46922882</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=46922882</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46922882</guid></item><item><title><![CDATA[New comment by TuringTest in "Peerweb: Decentralized website hosting via WebTorrent"]]></title><description><![CDATA[
<p>You never lived through the '90s</p>
]]></description><pubDate>Sat, 31 Jan 2026 00:12:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46831787</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=46831787</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46831787</guid></item><item><title><![CDATA[New comment by TuringTest in "Doing the thing is doing the thing"]]></title><description><![CDATA[
<p>But both are doing the winning thing, which is more valuable than just the battle thing. Unless you do it just for fun and don't mind the result.</p>
]]></description><pubDate>Tue, 27 Jan 2026 21:50:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46787461</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=46787461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46787461</guid></item><item><title><![CDATA[New comment by TuringTest in "Doing the thing is doing the thing"]]></title><description><![CDATA[
<p>Sometimes that's because they're making it worthwhile, by connecting the thing with those who will benefit from it and explaining how to use it, which is as valuable as doing the thing.<p>I.e. by making sure that they're doing the <i>right thing</i>.</p>
]]></description><pubDate>Tue, 27 Jan 2026 21:48:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46787442</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=46787442</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46787442</guid></item><item><title><![CDATA[New comment by TuringTest in "There is an AI code review bubble"]]></title><description><![CDATA[
<p><i>>A human rubber-stamping code being validated by a super intelligent machine is the equivalent of a human sitting silently in the driver's seat of a self-driving car, "supervising".</i><p>So, absolutely necessary and essential?<p>When the unavoidable strange situation happens that didn't appear during training and requires judgement based on ethics or logical reasoning, you need a human in charge to get the machine out of trouble.</p>
]]></description><pubDate>Mon, 26 Jan 2026 19:10:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46770118</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=46770118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46770118</guid></item><item><title><![CDATA[New comment by TuringTest in "List of individual trees"]]></title><description><![CDATA[
<p>Wikipedia is, and has always been, a wiki; reverting bad or controversial edits has been expected from day one.<p>Wikipedia has also developed an editorial line of its own, so it's normal that edits going against that line will be questioned; if that happens to you, you're expected to collaborate on the talk pages, explain the intent of your changes, and possibly get recommendations on how to tweak them so that they stick.<p>It also happens that most contributions by first-timers are indistinguishable from vandalism or spam; those are so obvious that an automated bot can recognize and revert them without human supervision, with a very high success rate.<p>However, if those first contributions are genuinely useful to the encyclopedia, such as adding high-quality references for an unverified claim, correcting typos, or removing obvious vandalism that slipped through the cracks, it's much more likely that the edits will stay; go ahead and try <i>that</i> experiment and tell us how it went.</p>
]]></description><pubDate>Fri, 16 Jan 2026 09:15:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46644596</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=46644596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46644596</guid></item><item><title><![CDATA[New comment by TuringTest in "25 Years of Wikipedia"]]></title><description><![CDATA[
<p>You're right, I mistyped it.</p>
]]></description><pubDate>Thu, 15 Jan 2026 18:12:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46636667</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=46636667</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46636667</guid></item><item><title><![CDATA[New comment by TuringTest in "25 Years of Wikipedia"]]></title><description><![CDATA[
<p><i>> The main issue with neutral people is that we do not know in which camp they are.</i><p>And that's a good thing, 'cause it means they're living up to their standards.</p>
]]></description><pubDate>Thu, 15 Jan 2026 17:36:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46636194</link><dc:creator>TuringTest</dc:creator><comments>https://news.ycombinator.com/item?id=46636194</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46636194</guid></item></channel></rss>