<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: YeGoblynQueenne</title><link>https://news.ycombinator.com/user?id=YeGoblynQueenne</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 20:55:22 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=YeGoblynQueenne" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by YeGoblynQueenne in "The threat is comfortable drift toward not understanding what you're doing"]]></title><description><![CDATA[
<p>>> The supervisor still needs to know what the answer should look like, still needs to know which checks to demand, still needs to have the instinct that something is off before they can articulate why. That instinct doesn't come from a subscription. It comes from years of failing at exactly the kind of work that people keep calling grunt work.<p>i.e. science.</p>
]]></description><pubDate>Sun, 05 Apr 2026 23:01:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47654815</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47654815</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47654815</guid></item><item><title><![CDATA[How Iran Should End the War]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.foreignaffairs.com/middle-east/how-iran-should-end-war-javad-zarif">https://www.foreignaffairs.com/middle-east/how-iran-should-end-war-javad-zarif</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47643276">https://news.ycombinator.com/item?id=47643276</a></p>
<p>Points: 11</p>
<p># Comments: 7</p>
]]></description><pubDate>Sat, 04 Apr 2026 20:56:12 +0000</pubDate><link>https://www.foreignaffairs.com/middle-east/how-iran-should-end-war-javad-zarif</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47643276</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47643276</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Artemis II's toilet is a moon mission milestone"]]></title><description><![CDATA[
<p>Hold it in for three days. Then you're ready to go in a flash.</p>
]]></description><pubDate>Fri, 03 Apr 2026 00:19:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47621927</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47621927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47621927</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Why the US Navy won't blast the Iranians and 'open' Strait of Hormuz"]]></title><description><![CDATA[
<p>The US no longer uses its army for defense. Nobody in their immediate region dares attack them; they're too powerful ("Godzilla", in the words of John Mearsheimer). All the wars that the US has fought since WWII have nothing to do with defense. Just look at the Wikipedia article on "power projection":<p><a href="https://en.wikipedia.org/wiki/Power_projection" rel="nofollow">https://en.wikipedia.org/wiki/Power_projection</a><p>The lead image is ... a US aircraft carrier (the USS Nimitz). That's what the US uses its military power for: to influence events in lands far, far away from its territory.<p>But, now, tell me which one of the many wars that the US has fought in after WWII did <i>not</i> end in disaster. Afghanistan? Iraq? Korea?<p>There was a meme doing the rounds the other day: "Name a character who can defeat Captain America". The answer being "Captain Vietnam". The US has faced humiliating defeat after humiliating defeat while bringing death and destruction and immeasurable misery to millions around the world.<p><i>That</i> is what HN users seem to have an "anti" sentiment for. If you watch the news you'll be able to tell that this goes far beyond HN. The whole of US society seems to be extremely tired of those "forever wars", those senseless excursions to faraway lands, that not only do not secure US interests but turn world opinion more and more against the US. Even the US' closest allies now fear the US: <i>vide</i> Greenland. Anyone with more than a video game or comic book understanding of how the real world works would do well to be concerned.<p>Edit: also from the EU, btw. Greek but living in the UK.</p>
]]></description><pubDate>Tue, 31 Mar 2026 11:54:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47586031</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47586031</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47586031</guid></item><item><title><![CDATA[Court of appeal says it cannot rule on which identical twin fathered a child]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.theguardian.com/society/2026/mar/30/court-of-appeal-says-it-cannot-rule-on-which-identical-twin-fathered-a-child">https://www.theguardian.com/society/2026/mar/30/court-of-appeal-says-it-cannot-rule-on-which-identical-twin-fathered-a-child</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47585147">https://news.ycombinator.com/item?id=47585147</a></p>
<p>Points: 7</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 31 Mar 2026 10:16:11 +0000</pubDate><link>https://www.theguardian.com/society/2026/mar/30/court-of-appeal-says-it-cannot-rule-on-which-identical-twin-fathered-a-child</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47585147</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47585147</guid></item><item><title><![CDATA[Uber Backs Bills to Make It Harder to Sue Them for Crashes]]></title><description><![CDATA[
<p>Article URL: <a href="https://jacobin.com/2026/03/uber-crashes-lawsuit-california-robotaxis/">https://jacobin.com/2026/03/uber-crashes-lawsuit-california-robotaxis/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47571031">https://news.ycombinator.com/item?id=47571031</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 30 Mar 2026 06:23:06 +0000</pubDate><link>https://jacobin.com/2026/03/uber-crashes-lawsuit-california-robotaxis/</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47571031</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47571031</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Missile defense is NP-complete"]]></title><description><![CDATA[
<p>Well spotted, my bad, too late now.</p>
]]></description><pubDate>Sat, 28 Mar 2026 10:29:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47553305</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47553305</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47553305</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Missile defense is NP-complete"]]></title><description><![CDATA[
<p>Shahed drones have a maximum range of 2,500 km [bbc_1]. The distance from e.g. Isfahan to Tel Aviv is ~1592 km [google]. Shaheds can reach Israel from Iran.<p>As to them all being intercepted, in the 12-day war that seemed to be the plan, i.e. force Israel to waste interceptors on cheap drones [bbc_2]. That seems to have changed in the current conflict.<p>_______________<p>[bbc_1] <i>With a maximum range of 2,500km it could fly from Tehran to Athens.</i><p>[bbc_2] <i>When Iran attacked Israel with hundreds of drones in 2024, the UK was reported to have used RAF fighter jets to shoot some down with missiles that are estimated to cost around £200,000 each.</i><p>Both excerpts from:<p><a href="https://www.bbc.co.uk/news/resources/idt-b3a272f0-3e10-4f95-9cd1-b34ab8ad033c" rel="nofollow">https://www.bbc.co.uk/news/resources/idt-b3a272f0-3e10-4f95-...</a><p>[google] <a href="https://www.google.co.uk/maps/dir/Isfahan,+Isfahan+Province,+Iran//@31.658003,39.431095,2532919m/data=!3m1!1e3!4m9!4m8!1m5!1m1!1s0x3fbc35fe8c326799:0x7ab57816ef5837f5!2m2!1d51.6659656!2d32.6538966!1m0!3e3?entry=ttu&g_ep=EgoyMDI2MDMyMi4wIKXMDSoASAFQAw%3D%3D" rel="nofollow">https://www.google.co.uk/maps/dir/Isfahan,+Isfahan+Province,...</a></p>
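<p>As a quick sanity check on the distance figure, a minimal haversine sketch (Isfahan's coordinates are from the map link above; Tel Aviv's are approximate city-centre values, not an exact site):</p>
<pre><code>from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance on a sphere of radius ~6371 km.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Isfahan (32.654 N, 51.666 E) to Tel Aviv (32.085 N, 34.782 E).
print(round(haversine_km(32.654, 51.666, 32.085, 34.782)))
# Prints 1585: close to the ~1592 km figure, well under 2,500 km.
</code></pre>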
]]></description><pubDate>Tue, 24 Mar 2026 23:40:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47511169</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47511169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47511169</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Executing programs inside transformers with exponentially faster inference"]]></title><description><![CDATA[
<p>Well, for example a computer can't make me an omelette. There are tons of examples like that, pretty much everything humans "can do" with our bodies, that computers can't: not just because they don't have bodies, but because even when we give them bodies we can't program them to do the things we want them to. LLMs don't help at all here. They can easily fake knowing what to do but the (not few) attempts people have made to connect LLMs to a robot to get the LLM to drive the robot like a little AI brain have ... not really worked out? I guess? Not even self-driving cars use LLMs.<p>Speaking of self-driving cars' AIs, while they have plenty of machine learning components, e.g. for vision, SLAM, and so on, they are largely hand-coded, rule-based systems. Just like the good old days of GOFAI.<p>>> The whole notion of "we need to know what intelligence is exactly to reproduce it" is completely and utterly wrong.<p>Can you explain why it's completely wrong?</p>
]]></description><pubDate>Tue, 17 Mar 2026 21:09:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47418352</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47418352</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47418352</guid></item><item><title><![CDATA[The Fake Images of a Real Strike on a School]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.theatlantic.com/ideas/2026/03/ai-imagery-iran-war/686347/">https://www.theatlantic.com/ideas/2026/03/ai-imagery-iran-war/686347/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47414795">https://news.ycombinator.com/item?id=47414795</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 17 Mar 2026 16:20:40 +0000</pubDate><link>https://www.theatlantic.com/ideas/2026/03/ai-imagery-iran-war/686347/</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47414795</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47414795</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Palestinian boy, 12, describes how Israeli forces killed his family in car"]]></title><description><![CDATA[
<p>That's a legitimate question and it has no good answer. Not just Sudan. There is an ongoing genocide in Myanmar, against the Rohingya. There is an ongoing genocide against the Uyghurs in China. None of those get nearly the amount of coverage the genocide in Gaza gets, or, now, the war in Iran and Lebanon.<p>I have no idea why. I have recently started to grow a bit paranoid and wonder whether I am being manipulated by the media I consume. That would not be a huge surprise; I'm willing to bet most people are influenced by some of the things they read online.<p>Anyway, this is an interesting question that has to be answered: why only Gaza, and not the other genocides?</p>
]]></description><pubDate>Tue, 17 Mar 2026 01:17:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47407398</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47407398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47407398</guid></item><item><title><![CDATA[AI error jails innocent grandmother for months in Fargo fraud case]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.inforum.com/news/fargo/ai-error-jails-innocent-grandmother-for-months-in-fargo-case">https://www.inforum.com/news/fargo/ai-error-jails-innocent-grandmother-for-months-in-fargo-case</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47407224">https://news.ycombinator.com/item?id=47407224</a></p>
<p>Points: 6</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 17 Mar 2026 00:53:45 +0000</pubDate><link>https://www.inforum.com/news/fargo/ai-error-jails-innocent-grandmother-for-months-in-fargo-case</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47407224</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47407224</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Executing programs inside transformers with exponentially faster inference"]]></title><description><![CDATA[
<p>:waves:</p>
]]></description><pubDate>Mon, 16 Mar 2026 11:37:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47397683</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47397683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47397683</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Executing programs inside transformers with exponentially faster inference"]]></title><description><![CDATA[
<p>Well, neural nets do what neural nets do best (not ML in general, which is a broader field), so if a lot of funding is going to neural nets then we'll see a lot of progress on the stuff neural nets are best suited for. No surprise. If Google et al. were spending billions on symbolic AI maybe we'd see equally spectacular results there too. Maybe not. But we won't know because they don't.<p>There's no sense in which symbolic AI is at the end of its life, and if you pay close attention you'll see that LLMs are trying to do all the things that symbolic AI is good at: major examples being reasoning, and planning from world models.<p>And, as nextos says in the sibling comment, most of the recent successes of LLMs in tasks that go beyond language generation, e.g. solving math olympiad problems, are the result of combining LLMs with symbolic verifiers.<p>>> While ML is cracking open entirely new fields - and might go all the way to AGI, the way it's going now.<p>I don't agree. Everything that neural nets do today, speech recognition, object identification in images, machine translation, language generation, program synthesis, game playing, protein folding, research automation, I mean every single thing really, is a task that comes from the depths of AI history. There's a big discussion to be had about why those tasks are "AI" tasks in the first place and what they have to do with "intelligence" in the broader sense (e.g. cats are intelligent but they can't generate any sort of text) but this discussion is constantly postponed as we all breathlessly run up the hill that neural nets are climbing. When we get to the top and find it was the wrong hill to climb, maybe we'll have that discussion at last, or maybe the entire industry, academia in tow, will run after the Next Big Thing in AI™ all over again. But cracking open new fields? Nah. Not really.<p>AGI is not going to happen any time soon though. We have no idea what we're doing in terms of reproducing intelligence, that much is clear.</p>
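<p>To make the "LLMs plus symbolic verifiers" point concrete, a minimal sketch of the generate-and-verify loop; the generator below is a hard-coded, hypothetical stand-in for a model's guesses (a real system would sample candidates from an LLM):</p>
<pre><code># An unreliable generator proposes answers; an exact, symbolic
# checker accepts only the provably correct ones.

def propose_candidates():
    # Hypothetical stand-in for a model guessing integer
    # solutions of x^2 + y^2 = 365.
    return [(10, 16), (13, 14), (2, 19), (7, 18)]

def verify(x, y):
    # Exact check; no statistics involved.
    return x * x + y * y == 365

print([c for c in propose_candidates() if verify(*c)])
# Prints [(13, 14), (2, 19)]: the wrong guesses are filtered out.
</code></pre>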
]]></description><pubDate>Mon, 16 Mar 2026 11:32:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47397651</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47397651</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47397651</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Lost Doctor Who episodes found"]]></title><description><![CDATA[
<p>>> Written by the creator of the Daleks, Terry Nation, and Dennis Spooner, the serial starred Hartnell and Purves alongside an early appearance by Nicholas Courtney as Bret Vyon, Adrienne Hill as Katarina, and Kevin Stoney as Mavic Chen.<p>I thought the creator of the Daleks was Davros?</p>
]]></description><pubDate>Sat, 14 Mar 2026 14:35:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47377131</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47377131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47377131</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "John Carmack about open source and anti-AI activists"]]></title><description><![CDATA[
<p>I have to ask, are there really "anti-AI activists"? Like, are there people marching against AI, attacking data centers, spray-painting "AI OUT" on computers, and so on? Or is it just an exaggeration by Carmack?</p>
]]></description><pubDate>Sat, 14 Mar 2026 14:32:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47377097</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47377097</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47377097</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "John Carmack about open source and anti-AI activists"]]></title><description><![CDATA[
<p>This is a conversation forum, so it's natural for people to ask questions of each other. Sure, we could, in principle, ask Google or ChatGPT for everything, but then why have an online conversation at all?</p>
]]></description><pubDate>Sat, 14 Mar 2026 14:30:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47377079</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47377079</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47377079</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Executing programs inside transformers with exponentially faster inference"]]></title><description><![CDATA[
<p>Full disclosure: all my published work is on symbolic machine learning (a.k.a. Inductive Logic Programming) :O<p>I think you're conflating various different things as "neurosymbolic AI". There is a NeSy symposium and I happen to have met many of the people there, and they are not GOFAI ideologues; rather, they recognise the obvious limitations of neural nets (i.e. they're crap at deduction, though great at induction) and they look for ways to address them. Most of that crowd also has a predominantly statistical ML/neural-nets background, with symbolic AI as an afterthought.<p>I don't think I've ever heard anyone say that "ML is not real AI" and I mainly move in symbolic AI circles. I would check my sources, if I were you.<p>Anyway, honestly, this is 2026, there is no sensible reason to be polarised about symbolic vs. statistical AI (or whatever distinction anyone wants to make). An analogy I like to make is as follows: a jetliner is a flying machine, a helicopter is a flying machine. Each has its advantages and disadvantages, but a flying machine is something too useful to give up on any one kind for ideological reasons. The practical benefits overwhelmingly make up for any ideological concerns (e.g. "jets bad" or "propellers bad").<p>And just to be clear, symbolic AI is still in rude health: automated theorem proving, planning and scheduling, program verification and model checking, constraint satisfaction, discrete optimisation, SAT solving, all those are fields where symbolic approaches are dominant, and where neural nets have not made significant inroads in many decades; nor are they likely to, not any more than symbolic approaches are likely to make any inroads in e.g. machine vision, or speech recognition. And that's just fine: lots of tools, lots of problems solved.</p>
]]></description><pubDate>Sat, 14 Mar 2026 14:01:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47376783</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47376783</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47376783</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Executing programs inside transformers with exponentially faster inference"]]></title><description><![CDATA[
<p>Yeah, a "100% correct" Sudoku solver fully trained by gradient descent from examples? That sure would be something entirely new.<p>To answer dwa3592, it's always possible to set the weights of a neural net by hand, albeit extremely fiddly and normally only done "on paper". This is e.g. how the Turing-completeness of RNNs was shown back in the '90s:<p><i>On the computational power of neural nets</i><p><a href="https://binds.cs.umass.edu/papers/1992_Siegelmann_COLT.pdf" rel="nofollow">https://binds.cs.umass.edu/papers/1992_Siegelmann_COLT.pdf</a></p>
]]></description><pubDate>Sat, 14 Mar 2026 13:57:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47376742</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47376742</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47376742</guid></item><item><title><![CDATA[New comment by YeGoblynQueenne in "Executing programs inside transformers with exponentially faster inference"]]></title><description><![CDATA[
<p>That's more or less what I got, also, but it's hard to tell. What a very annoying article, in its vagueness.</p>
]]></description><pubDate>Sat, 14 Mar 2026 13:54:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47376707</link><dc:creator>YeGoblynQueenne</dc:creator><comments>https://news.ycombinator.com/item?id=47376707</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47376707</guid></item></channel></rss>