<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: js8</title><link>https://news.ycombinator.com/user?id=js8</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 22:05:52 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=js8" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by js8 in "US and Iran agree to provisional ceasefire"]]></title><description><![CDATA[
<p>Yes, I agree, except it's not irrelevant whether they built a functional nuke or not, because this is used as a justification for war. (Not to mention that, as a justification for war, "they could have built a nuke" is even more barbaric than "they have built a nuke".)<p>Still, that doesn't counter the fact that they didn't actually make a nuclear bomb out of the material, nor the fact that their highest moral authority banned them from doing so, so it doesn't do anything to disprove that culturally they are more civilized (in that respect).<p>(Maybe an example from a corporation would clarify this better - the fact that there is a group of people in it doing things unethically doesn't mean that the company as a whole condones this behavior, even if the structure - how the corporation or capitalist society is constructed - might lead some people to do it internally, off the books. But once it is known to the CEO - the highest moral authority in a corporation - he must tell them to stop if he is not to be implicated in it.)<p>It's frankly just moving the goalposts in an attempt not to accept your own barbarism. Is your culture OK with using nuclear weapons, even in self-defense? If yes, how dare you judge?</p>
]]></description><pubDate>Wed, 08 Apr 2026 07:03:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47686384</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47686384</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47686384</guid></item><item><title><![CDATA[New comment by js8 in "US and Iran agree to provisional ceasefire"]]></title><description><![CDATA[
<p>Read the last two lines of that interview. Khamenei interpreted Islam as forbidding even building the bomb, and he is the moral authority on this, like it or not.<p>Japan could also have built a nuclear bomb, but chose not to. They decided that out of nothing other than their moral beliefs.<p>You simply don't want to accept that other cultures can be (in some respects, and even regardless of what individuals think on average - that's probably similar for large enough groups) more ethical than your own.</p>
]]></description><pubDate>Wed, 08 Apr 2026 05:27:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685682</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47685682</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685682</guid></item><item><title><![CDATA[New comment by js8 in "US and Iran agree to provisional ceasefire"]]></title><description><![CDATA[
<p>> It exists to exchange a future nuclear war with Iran with a conventional war today.<p>That's just ridiculous. Nobody can predict the future, so trading an uncertain war in the future for a certain war today is completely irrational. (And for the same reason, the war today is unlikely to be easier than the war tomorrow.)<p>Besides, Iran has avoided having a nuclear weapon because it causes too many civilian casualties, and that's against their beliefs. In this, they're more civilized than Americans (and Europeans), even though this might be considered an irrational view by barbarians like you.<p>I think you're just coping with the fact that this war was utterly pointless, destructive for almost everyone in the world, and a poor attempt to increase power by a small group of people.</p>
]]></description><pubDate>Wed, 08 Apr 2026 04:49:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685400</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47685400</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685400</guid></item><item><title><![CDATA[New comment by js8 in "Writing Lisp is AI resistant and I'm sad"]]></title><description><![CDATA[
<p>Personally, I think we're using LLMs wrong for programming. Computer programs are solutions to a given constraint logic problem (the specs).<p>We should be using LLMs to translate from (fuzzy) human specifications to formal specifications (potentially resolving contradictions), and then solving the resulting logic problem with a proper reasoning algorithm. That would also guarantee correctness.<p>LLMs are a "worse is better" kind of solution.</p>
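<p>As a minimal sketch of the second half of that pipeline (the toy constraints are hypothetical, standing in for what an LLM might extract from a fuzzy spec like "two positive numbers that sum to 10, the first at least twice the second"), an off-the-shelf solver like Z3 can then either find a provably correct witness or report that the spec is contradictory:<pre><code># Solving a formalized spec with the Z3 SMT solver (pip install z3-solver).
from z3 import Int, Solver, sat

x, y = Int('x'), Int('y')
s = Solver()
s.add(x + y == 10)  # the numbers sum to 10
s.add(x >= 2 * y)   # the first is at least twice the second
s.add(y > 0)        # both are positive (x > 0 follows)

if s.check() == sat:
    print(s.model())  # e.g. [x = 7, y = 3] - satisfies every constraint
else:
    print("unsatisfiable - the spec itself is contradictory")
</code></pre>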
]]></description><pubDate>Sun, 05 Apr 2026 05:49:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47646470</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47646470</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47646470</guid></item><item><title><![CDATA[New comment by js8 in "Writing Lisp is AI resistant and I'm sad"]]></title><description><![CDATA[
<p>There is interesting ongoing research at <a href="https://dnhkng.github.io/posts/sapir-whorf/" rel="nofollow">https://dnhkng.github.io/posts/sapir-whorf/</a> showing that LLMs think in a language-agnostic way. (It will probably get posted to HN after it is finished.)</p>
]]></description><pubDate>Sun, 05 Apr 2026 05:35:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47646403</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47646403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47646403</guid></item><item><title><![CDATA[New comment by js8 in "Oracle slashes 30k jobs"]]></title><description><![CDATA[
<p>Ah, there's your fallacy - you seem to think that when someone has a legal right, that also means they have the freedom (in the practical sense) to exercise that right.</p>
]]></description><pubDate>Wed, 01 Apr 2026 06:31:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47597579</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47597579</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47597579</guid></item><item><title><![CDATA[New comment by js8 in "Oracle slashes 30k jobs"]]></title><description><![CDATA[
<p>Actually, they kinda do - for example, worker cooperatives. They're not common and have some issues (different from those claimed by propaganda), but they do exist. (If we understand "marxist" as being somewhat in favor of worker emancipation instead of alienation. Marx was an eclectic guy and can be interpreted in different ways.)</p>
]]></description><pubDate>Wed, 01 Apr 2026 06:24:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47597522</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47597522</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47597522</guid></item><item><title><![CDATA[New comment by js8 in "Oracle slashes 30k jobs"]]></title><description><![CDATA[
<p>No, that's another sort of misconception, also expressed in another comment by WalterBright, which conveniently ignores the reality of most jobs.<p>It glosses over the fact that employers exercise control over the social relations required for production (of anything larger than what can be built by a self-employed person). This happens by virtue of owning all the crucial means of production involved. And at the point where you need to coordinate the work of several people, it ceases to be a system of contractors who freely determine their working conditions, and becomes a collective that has a common goal.<p>So no, it's not the case in the U.S. - in no economy of the world is the majority of production organized into everyone being a little independent contractor who brings (or rents) their own equipment. That would be horribly inefficient (not to mention that people don't want it either, by and large).<p>There is a clear rebuttal to this: how can an employer own the social relations (required for production), like managerial relationships, when they ostensibly only own the factory equipment? Well, it's like when you own an apartment - you technically only own the four walls, but practically you also enjoy the privacy that comes with it. In a similar way, capitalists owning a factory don't just rent equipment to a bunch of workers; they can dictate the whole social superstructure of production, including the redistribution of earnings.</p>
]]></description><pubDate>Wed, 01 Apr 2026 06:17:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47597465</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47597465</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47597465</guid></item><item><title><![CDATA[New comment by js8 in "Oracle slashes 30k jobs"]]></title><description><![CDATA[
<p>Yes, that's what people tell themselves to deal with it psychologically. That it's just a job, not a community, and you'd better not make friends in the workplace (despite spending the majority of your life there). And that when you're unemployed, life just goes on, as if it doesn't mean much.<p>Like when a traumatised kid, never loved by their parents, concludes that life is harsh and love doesn't exist, so better be tough.</p>
]]></description><pubDate>Tue, 31 Mar 2026 16:07:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47589483</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47589483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47589483</guid></item><item><title><![CDATA[New comment by js8 in "Oracle slashes 30k jobs"]]></title><description><![CDATA[
<p>Actually, it is. You have been blinded by capitalism into considering it ethical.<p>Tribes usually treat their members as family. While kicking someone out of a tribe can happen, it's considered a harsh punishment.<p>In a tribe, when hard times come, people usually redistribute. That's a normal, human way of dealing with the situation. Not a layoff.<p>The other aspect is economic crises. When a central bank decides to increase interest rates, it decreases lending to new investments in favor of lower inflation. This can lead to layoffs, instead of having inflation inflicted on everyone (especially the rich with huge savings). So that decision essentially means some random people get kicked out of economic (and societal) participation in order to prevent more redistribution of existing wealth.<p>If you think about it, yes, layoffs are deeply immoral. But we can understand why they happen in capitalism, as a sort of big tragedy of the commons.</p>
]]></description><pubDate>Tue, 31 Mar 2026 15:38:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47588979</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47588979</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47588979</guid></item><item><title><![CDATA[New comment by js8 in "Mathematical methods and human thought in the age of AI"]]></title><description><![CDATA[
<p>No it wasn't. Look at Joseph Stiglitz (Globalization and Its Discontents) and Ha-Joon Chang (Bad Samaritans, Kicking Away the Ladder) for counter-examples.</p>
]]></description><pubDate>Mon, 30 Mar 2026 14:17:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47574703</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47574703</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47574703</guid></item><item><title><![CDATA[New comment by js8 in "Hypothesis, Antithesis, synthesis"]]></title><description><![CDATA[
<p>The AI would build a proof of correctness, which would then be verified in a proof checker (not AI).</p>
]]></description><pubDate>Tue, 24 Mar 2026 18:55:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47507451</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47507451</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47507451</guid></item><item><title><![CDATA[New comment by js8 in "Hypothesis, Antithesis, synthesis"]]></title><description><![CDATA[
<p>> Databases and web browsers are too complicated to build a full-fidelity mathematical model for.<p>I disagree - thanks to the Curry-Howard isomorphism, the full-fidelity mathematical model of a database or web browser is its binary itself.<p>We could have compilers provide theorems (with proofs) of the correctness of the translation from source to machine code, and library functions could provide useful theorems about resource use.<p>Then, if the AI can reason about the behavior of the source code, it can also build the required proof of correctness along with it.</p>
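<p>To make that concrete, here is a minimal sketch in Lean 4 (an illustrative toy, not something any production compiler emits today): a program paired with a machine-checked theorem about its behavior, where - per Curry-Howard - the proof is itself just another term the checker can verify:<pre><code>-- A program plus a machine-checked theorem about it (Lean 4).
def double (n : Nat) : Nat := n + n

-- Under Curry-Howard, this proof is itself a program whose type
-- is the proposition it establishes.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega  -- linear-arithmetic decision procedure closes the goal
</code></pre>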
]]></description><pubDate>Tue, 24 Mar 2026 18:53:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47507435</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47507435</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47507435</guid></item><item><title><![CDATA[New comment by js8 in "Hypothesis, Antithesis, synthesis"]]></title><description><![CDATA[
<p>> There's no doubt, I think, testing will remain important and possibly become more important with more AI use, and so better testing is helpful, PBT included.<p>Given the Curry-Howard isomorphism, couldn't we ask the AI to directly prove the property of the binary executable under the assumptions of the HW model, instead of running PBTs?<p>By no means do I want to dismiss PBTs - but it seems that this could be both faster and more reliable.</p>
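<p>For contrast, here is what the PBT side of that tradeoff looks like - a minimal sketch with the Hypothesis library (the toy round-trip property and functions are made up for illustration). It only samples the input space, where a proof would cover all of it:<pre><code># A property-based test with Hypothesis (pip install hypothesis).
from hypothesis import given, strategies as st

def encode(s: str) -> list[tuple[str, int]]:
    """Toy run-length encoder: collapse runs of equal characters."""
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in pairs)

@given(st.text())
def test_decode_inverts_encode(s: str) -> None:
    # Checked only on sampled inputs; a proof would cover every string.
    assert decode(encode(s)) == s
</code></pre>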
]]></description><pubDate>Tue, 24 Mar 2026 17:12:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47505945</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47505945</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47505945</guid></item><item><title><![CDATA[New comment by js8 in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>The big flagship AI models aren't just LLMs anymore, though. They are also trained with RL to respond better to user requests. Reading a lot of text is just one technique they employ to build their model of the world.<p>I think there are three different types of gaps, each with different remedies:<p>1. A definition problem - if I say "airplane", what do I mean? Probably something like a jumbo jet or a Cessna, less likely an SR-71. This is something that we can never perfectly agree on, and AI will always be limited to the best definition available to it. And if there is not enough training data or an agreed definition for a particular (specialized) term, AI can just get it wrong (a nice example is the "Vihart" concept from above, which got mixed up with the "Seven red lines" sketch). So this is always going to be painful to correct, because it depends on each individual concept, regardless of the machine learning technology used. The frame problem is related to this: the question of what hidden assumptions I am making when I say something.<p>2. The limits of reasoning with neural networks. What is really happening, IMHO, is that the AI models learn rules of "informal" logical reasoning by observing humans doing it. Informal logic learned through observation will always have logical gaps, simply because logical lapses occur in the training data. We could probably formalize this logic by defining some nice set of modal and fuzzy operators, but no one has been able to put it together yet. Then most, if not all, reasoning problems would reduce to solving a constraint problem; and even if we managed to quantize those and convert them to SAT, it would still be NP-complete and as such potentially require large amounts of computation. AI models, even when they reason (and apply learned logical rules), don't do that large amount of computation in a formal way. So there are two tradeoffs - one is that AIs learned these rules informally and so they have gaps, and the other is that it is desirable in practice to time-limit the amount of reasoning the AI will give to a given problem, which will lead to incomplete logical calculations. This gap is potentially fixable by using more formal logic (and that's what happens when you run the AI program through tests, type checking, etc.), with the mentioned tradeoffs.<p>3. Going back to the "AI as an error-correcting code" analogy: if the input you give to the AI (for example, a fragment of logical reasoning) is too noisy (or contradictory), then it will just not respond as you expect (for example, it will correct the reasoning fragment in a way you didn't expect it to). This is similar to when an error-correcting code is faced with an input that is too noisy and outside its ability to correct - it will just choose a different word as the correction. (A sketch of the analogy is below.) In AI models, this is compounded by the fact that nobody really understands the manifold of points that the AI considers to be correct ideas (these are the code words in the error-correcting-code analogy). In any case, this is again an unsolvable gap - AI will never be a magical mind reader - although it can be mitigated by giving the AI more context about what problem you are really trying to solve (the downside is that this will be more intrusive to your life).<p>I think these things, especially point 2, will improve over time. They already have improved to the point that AI is very much usable in practice and can be a huge time saver.</p>
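<p>To illustrate point 3, a minimal sketch of nearest-codeword decoding (toy codewords, not a claim about how models actually work): within the correction radius the intended word is recovered, but a too-noisy input silently snaps to a different codeword - the analogue of the AI "correcting" your reasoning into something you didn't intend:<pre><code># Toy nearest-codeword decoding by Hamming distance.
CODEWORDS = ["0000000", "1110000", "0001111", "1111111"]

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def decode(received: str) -> str:
    # Snap to whichever codeword is closest - even if it's the wrong one.
    return min(CODEWORDS, key=lambda w: hamming(received, w))

print(decode("1100000"))  # one flip on 1110000 -> recovered correctly
print(decode("0011000"))  # three flips on 1110000 -> snaps to 0000000
</code></pre>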
]]></description><pubDate>Tue, 24 Mar 2026 05:43:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47498975</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47498975</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47498975</guid></item><item><title><![CDATA[New comment by js8 in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>I only have experience with Claude Code. If it goes on a spree, the task you are giving it is too big, IMHO.<p>It's not a SAT solver (yet) and will have trouble precisely handling arbitrarily large problems. So you have to lead it a bit, sometimes.</p>
]]></description><pubDate>Sun, 22 Mar 2026 23:39:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47483551</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47483551</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47483551</guid></item><item><title><![CDATA[New comment by js8 in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>Feel free to ask Claude about any other contradictory request. I use Claude Code and it often asks clarifying questions when it is unsure how to implement something, or autocorrects my request if something I am asking for is wrong (like a typo in a filename). Of course sometimes it misunderstands; then you have to be more specific and/or divide the work into smaller pieces. Try it if you haven't.</p>
]]></description><pubDate>Sun, 22 Mar 2026 23:32:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47483500</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47483500</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47483500</guid></item><item><title><![CDATA[New comment by js8 in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>> AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.<p>Of course! But that's what makes them so powerful. In 99% of cases that's what you want - something conventional.<p>The AI can come up with novel things if it has agency and can learn on its own (using e.g. RL). But we don't want that in most use cases, because it's unpredictable; we want a tool instead.<p>It's not true that this lack of creativity implies a lack of intelligence or critical thinking. AI clearly can reason and be critical, if asked to do so.<p>Conceptually, the breakthrough of AI systems (especially in coding, but it's to some extent true in other disciplines) is that they have an ability to take a fuzzy and potentially conflicting idea, and clean up the contradictions by producing a working, albeit conventional, implementation, by finding less contradictory pieces from the training data. The strength lies in the intuition of which contradictions to remove. (You can think of it as an error-correcting code for human thoughts.)<p>For example, if I ask the AI to "draw seven red lines, perpendicular, in blue ink, some of them transparent", it can find some solution that removes the contradictions from these constraints, or ask clarifying questions about the domain, so it could decide which contradictory statements to drop. (A mechanized version of this is sketched at the end of this comment.)<p>I actually put it to Claude and it gave a beautiful answer:<p>"I appreciate the creativity, but I'm afraid this request contains a few geometric (and chromatic) impossibilities: [..]<p>So, to faithfully fulfill this request, I would have to draw zero lines — which is roughly the only honest answer.<p>This is, of course, a nod to the classic comedy sketch by Vihart / the "Seven Red Lines" bit, where a consultant hilariously agrees to deliver exactly this impossible specification. The joke is a perfect satire of how clients sometimes request things that are logically or physically nonsensical, and how people sometimes just... agree to do it anyway.<p>Would you like me to draw something actually drawable instead? "<p>This clearly shows that AI can think critically and reason.</p>
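<p>A minimal sketch of that "drop the contradictions" step, using the Z3 solver (the propositional encoding of the sketch's requirements is invented here purely for illustration): tracked assumptions let the solver name a conflicting subset of the spec, which is exactly what you'd hand back to the user to clarify.<pre><code># Asking a solver to name the clashing requirements (unsat core).
from z3 import Bool, Solver, Not, And, unsat

red = Bool("lines_are_red")
blue = Bool("lines_are_blue_ink")

s = Solver()
s.assert_and_track(red, "spec_red_lines")
s.assert_and_track(blue, "spec_blue_ink")
s.assert_and_track(Not(And(red, blue)), "physics_one_color")

if s.check() == unsat:
    print(s.unsat_core())  # the subset of requirements to drop or clarify
</code></pre>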
]]></description><pubDate>Sun, 22 Mar 2026 21:29:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47482363</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47482363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47482363</guid></item><item><title><![CDATA[New comment by js8 in "Some things just take time"]]></title><description><![CDATA[
<p>I am not asking anybody to be an expert in both (although I am sure such people exist, however rare); I am saying people should ideally have some skill in both. Also, people can collaborate, and learn new skills.<p>If you're bottlenecked by waiting for the users of your product to give feedback, you clearly need to spend more time learning how to be a user yourself. Or hire people with some domain skill who can also code.</p>
]]></description><pubDate>Sun, 22 Mar 2026 21:00:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47482093</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47482093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47482093</guid></item><item><title><![CDATA[New comment by js8 in "I'm OK being left behind, thanks"]]></title><description><![CDATA[
<p>Technical Supervision of the Investor is a thing, for a reason. The fact that the IT industry doesn't have it is ridiculous.</p>
]]></description><pubDate>Sun, 22 Mar 2026 13:33:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47477388</link><dc:creator>js8</dc:creator><comments>https://news.ycombinator.com/item?id=47477388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47477388</guid></item></channel></rss>