<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dinfinity</title><link>https://news.ycombinator.com/user?id=dinfinity</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 04:07:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dinfinity" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dinfinity in "AI could be the end of the digital wave, not the next big thing"]]></title><description><![CDATA[
<p>You largely ignored what I said and displayed exactly the fallacious behavior I was pointing out. Again, Y2K was not a problem <i>because</i> people 'freaked out' (took the problem seriously). Similarly, AI will <i>only</i> not be a problem thanks to people who spend time and effort mitigating its issues, not thanks to people like you pretending that because nothing went seriously wrong in the past, nothing automatically will this time (because you "just don't see the basis for it").</p>
]]></description><pubDate>Tue, 14 Apr 2026 11:48:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47764377</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47764377</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47764377</guid></item><item><title><![CDATA[New comment by dinfinity in "AI could be the end of the digital wave, not the next big thing"]]></title><description><![CDATA[
<p>> I try to not be fatalistic. As I was trying to argue, it's historically inaccurate and it doesn't actually change the outcome.<p>This is false. Being fatalistic and 'panicking' can definitely influence and thus change the outcome. Your logic is similar to what is (incorrectly) used to dismiss the Y2K problem, for instance: looking back it <i>seems</i> like there was no need to panic, but that is only because a lot of people recognized the urgency, worked their asses off and succeeded in preventing shit from going horribly wrong.<p>See: <a href="https://en.wikipedia.org/wiki/Preparedness_paradox" rel="nofollow">https://en.wikipedia.org/wiki/Preparedness_paradox</a><p>Your handwaving is doing harm by lulling people into a false sense of security. Your initial comment amounts to "Ah, it'll be fine, don't worry about it. We'll adapt, we always have.", even though you provide absolutely no arguments specific to this enormous force of insanely rapid change in an already incredibly unstable, fragile world. We might adapt, but it will require serious thought rather than handwaving and leaning back; even then it might come with massive societal upheaval and a lot of suffering.</p>
]]></description><pubDate>Mon, 13 Apr 2026 18:56:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47756397</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47756397</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47756397</guid></item><item><title><![CDATA[New comment by dinfinity in "The AI Layoff Trap"]]></title><description><![CDATA[
<p>I agree, but that was not the point of contention.<p>I did not downvote you. I argued why your model of what the future will look like is wrong. That point still stands.</p>
]]></description><pubDate>Mon, 13 Apr 2026 16:52:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47754817</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47754817</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47754817</guid></item><item><title><![CDATA[New comment by dinfinity in "AI could be the end of the digital wave, not the next big thing"]]></title><description><![CDATA[
<p>We are the horses, though.<p>At some point those became almost fully obsolete in a productive economic sense (they're just fancy toys now, basically). No 'raising the ambition' is ever going to change that. They are what they are and they can do what they can do.<p>I don't know about you, but if the something in "we'll find something to do" is becoming a toy for AI or very rich people, I'm not exactly hopeful about the future.</p>
]]></description><pubDate>Mon, 13 Apr 2026 16:45:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47754711</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47754711</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47754711</guid></item><item><title><![CDATA[New comment by dinfinity in "The AI Layoff Trap"]]></title><description><![CDATA[
<p>Past performance is not a reliable indicator of future results.<p>Your logic equates to "there will always be jobs for humans to do", which is naive. Remember that we're counting <i>down</i> the number of things we do better than inorganic stuff. At <i>some</i> point our bodies (admittedly impressive when compared to other animals) will be surpassed in enough aspects that there isn't anything left where we can provide enough value to live off.</p>
]]></description><pubDate>Mon, 13 Apr 2026 15:57:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47753950</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47753950</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47753950</guid></item><item><title><![CDATA[New comment by dinfinity in "Bird brains (2023)"]]></title><description><![CDATA[
<p>> If you think being a guide dog doesn't require intelligence, you're delusional.<p>I see you dropped "sniffing out drugs" as a task requiring intelligence, that's a start.<p>> It can use it's innate intelligence to sense danger<p>So sensing danger requires intelligence? Bacteria can sense danger.<p>> navigate around obstacles it's never seen before<p>Not intelligence, but dexterity. Only if it has to solve a puzzle does intelligence come into play. And dogs suck ass at solving puzzles. Some birds are somewhat decent at it, but still very far removed from what an LLM can do.<p>> communicate with other humans through barking<p>Yeah, Timmy fell down a well, right? Perfect example of 'intelligence' and something you'd prefer a dog over an LLM /s<p>> I can't tell if you're trolling at this point. Llms are also trained and therefore are based on repetition and conditioning.<p>That is a fair point, but remember that your training examples were "sniffing for drugs" and "being a guide dog", both of which are very much in-distribution training (guide dogs only do a very specific very small set of things and require a lot of training to even be able to do those).<p>But for the sake of argument, let's say that there are some tasks requiring intelligence where you would prefer a dog over an LLM. Answer me this:
Roughly what <i>percentage</i> of distinct tasks requiring intelligence would you prefer to have a dog over an LLM? For each task, imagine that failure to complete the task will cause serious harm to your loved ones, so the stakes are high.</p>
]]></description><pubDate>Thu, 02 Apr 2026 19:07:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47618797</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47618797</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47618797</guid></item><item><title><![CDATA[New comment by dinfinity in "Bird brains (2023)"]]></title><description><![CDATA[
<p>Sniffing for illegal drugs requires wit? Right.<p>And 'trained' clearly means it is not something based in intelligence, but in repetition and conditioning.<p>Answer the actual question.</p>
]]></description><pubDate>Tue, 31 Mar 2026 20:16:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47592867</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47592867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47592867</guid></item><item><title><![CDATA[New comment by dinfinity in "Bird brains (2023)"]]></title><description><![CDATA[
<p>So are animals trivially executing a complex program or are they 'analyzing' a complex problem?<p>LLMs can (more often) successfully find solutions for <i>far</i> more complex problems than animals can. So where does that leave us?</p>
]]></description><pubDate>Tue, 31 Mar 2026 15:38:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47588985</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47588985</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47588985</guid></item><item><title><![CDATA[New comment by dinfinity in "Bird brains (2023)"]]></title><description><![CDATA[
<p>Complexity does not require intelligence. Modern computers (even without AI) and technological systems do incredibly complex things and I'm quite sure you would not call those systems (again, without AI) intelligent.</p>
]]></description><pubDate>Tue, 31 Mar 2026 08:46:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47584433</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47584433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47584433</guid></item><item><title><![CDATA[New comment by dinfinity in "Bird brains (2023)"]]></title><description><![CDATA[
<p>Neither of those is based in intelligence, but rather in dexterity, agility and sensing capabilities. Try again, and this time please read the question carefully and answer in good faith rather than trying to (unsuccessfully) look for a loophole.</p>
]]></description><pubDate>Tue, 31 Mar 2026 08:44:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47584412</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47584412</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47584412</guid></item><item><title><![CDATA[New comment by dinfinity in "Bird brains (2023)"]]></title><description><![CDATA[
<p>> If you're in tune with animals and spend time around a parrot, it's obvious there is a lot going on in their minds.<p>Not saying there isn't and somewhat off-topic, but if you apply this to LLMs those are <i>much</i>, <i>much</i> 'smarter' than <i>all</i> the animals people like to call intelligent (or something similar). If you disagree, please tell me for which task requiring intelligence you'd rather have an animal's wit than that of an LLM.<p>I really do feel we should be taking the current state of affairs as a starting point to recalibrate what counts as smart or worth 'protecting', whether it's our beloved animal friends or something inorganic. Simultaneously believing "birds are super smart" and "LLMs are just stochastic parrots" seems absurd.</p>
]]></description><pubDate>Mon, 30 Mar 2026 19:30:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47578662</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47578662</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47578662</guid></item><item><title><![CDATA[New comment by dinfinity in "Mouser: An open source alternative to Logi-Plus mouse software"]]></title><description><![CDATA[
<p>Keychron keyboards are absolutely amazing. And very affordable for what they are.</p>
]]></description><pubDate>Sat, 14 Mar 2026 17:08:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47378749</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47378749</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47378749</guid></item><item><title><![CDATA[New comment by dinfinity in "We should revisit literate programming in the agent era"]]></title><description><![CDATA[
<p>You did not engage with my main arguments. You should still do so.<p>1. Redundancy: "The code is what it does. The comments should contain what it's supposed to do. [...] You don't have to know anything else to see that something is wrong here." and specifically the concrete trivial (but effective) example.<p>2. "My take on developers arguing for self-documenting code is that they are undisciplined or do not use their tools well. The arguments against copious inline comments are "but people don't update them" and "I can see less of the code"."<p>> Respectfully, if someone wrote code like this, I wouldn't want to work with them. I mean next step is "I copy paste code [...]<p>This is a nonsensical slippery-slope fallacy. In no way does that behavior follow from placing many comments in code. It also says nothing about the clearly demonstrated value of redundancy.<p>> I have been navigating code for 20 years and in good codebases, comments are rare and describe something "surprising".<p>Your definition of good here is circular. No argument on <i>why</i> they are good codebases. Did you measure how easy they were to maintain? How easy it was to onboard new developers? How many bugs they contained? Note also that correlation != causation: it might very well be that the good codebases you encountered were solo projects by highly capable, motivated developers and the comment-rich ones were complicated multi-developer projects with lots of developer churn.<p>> My problem with "literate programming" [...] is that I find it hard to trust developers who genuinely cannot understand unsurprising code without comments.<p>This is gatekeeping code by making it <i>less</i> understandable and essentially an admission that code with comments is <i>easier</i> to understand. I see the logic of this, but it is solving a problem in the wrong place. Developer competence should not be ascertained by intentionally making the code worse.</p>
]]></description><pubDate>Tue, 10 Mar 2026 11:58:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47322036</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47322036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47322036</guid></item><item><title><![CDATA[New comment by dinfinity in "We should revisit literate programming in the agent era"]]></title><description><![CDATA[
<p>The code is what it does. The comments should contain what it's supposed to do.<p>Even if you give them equal roles, self-documenting code versus commented code is like having data on one disk versus having data in a RAID array.<p>Remember: Redundancy is a feature. Mismatches are information. Consider this:<p>// Calculate the sum of one and one<p>sum = 1 + 2;<p>You don't have to know anything else to see that something is wrong here. It could be that the comment is outdated, which has no direct effects and is easily solved. It could be that this is a bug in the code. In any case it is information and a great starting point for looking into a possible problem (with a simple git blame). Again, without needing <i>any</i> context, knowledge of the project or external documentation.<p>My take on developers arguing for self-documenting code is that they are undisciplined or do not use their tools well. The arguments against copious inline comments are "but people don't update them" and "I can see less of the code".</p>
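The mismatch idea above can be made concrete in a minimal, hypothetical Python sketch (the function name and values here are illustrative, not from any real codebase):

```python
# A self-contained sketch of "redundancy is a feature": the comment
# states intent, the code states behavior, and comparing the two
# exposes the mismatch without any other context.

def add_one_and_one():
    # Calculate the sum of one and one
    return 1 + 2  # the code disagrees with the comment above it

intended = 1 + 1            # what the comment promises
actual = add_one_and_one()  # what the code actually does
print(actual != intended)   # True: the mismatch is detectable in isolation
```

Either the comment is stale or the code is buggy; both cases are useful starting points for a git blame.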
]]></description><pubDate>Mon, 09 Mar 2026 20:13:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47314813</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47314813</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47314813</guid></item><item><title><![CDATA[New comment by dinfinity in "Why it takes you and an elephant the same amount of time to poop (2017)"]]></title><description><![CDATA[
<p>"Nearly odorless"?<p>I call bullshit.</p>
]]></description><pubDate>Sat, 07 Mar 2026 17:19:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47289511</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=47289511</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47289511</guid></item><item><title><![CDATA[New comment by dinfinity in "TikTok's 'addictive design' found to be illegal in Europe"]]></title><description><![CDATA[
<p>> I'm not saying "let the producers run free". Intervening there is fine as long as we keep front of mind and mouth that people need to take their responsibility and that we need to do everything to help them to do so.</p>
]]></description><pubDate>Fri, 06 Feb 2026 23:01:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46919365</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=46919365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46919365</guid></item><item><title><![CDATA[New comment by dinfinity in "TikTok's 'addictive design' found to be illegal in Europe"]]></title><description><![CDATA[
<p>Exactly. It's not that the producers or distributors (of food, content, etc.) are <i>not</i> malicious/amoral/evil/greedy. It's that the <i>real</i> solution lies in fixing the vulnerabilities in the consumers.<p>You don't say to a heroin addict that they wouldn't have any problems if those pesky heroin dealers didn't make heroin so damn addictive. You realize that it's gonna take internal change (mental/cultural/social overrides to the biological weaknesses) in that person to reliably fix it (and ensure they don't shift to some other addiction).<p>I'm not saying "let the producers run free". Intervening there is fine as long as we keep front of mind and mouth that people need to take their responsibility and that we need to do everything to help them to do so.</p>
]]></description><pubDate>Fri, 06 Feb 2026 15:39:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46914143</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=46914143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46914143</guid></item><item><title><![CDATA[New comment by dinfinity in "White House Posts Altered Photo Showing Arrested Minnesota Protester Crying"]]></title><description><![CDATA[
<p>> This is where algorithmic, ad-fueled social media leads a republic.<p>Only if we keep repeating things like this.<p>People have agency and there are many people who are not led by or actively abusing social media. You don't tell a heroin addict it's not their fault, that the presence and malice of dealers made their fate inevitable.</p>
]]></description><pubDate>Fri, 23 Jan 2026 18:35:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46735990</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=46735990</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46735990</guid></item><item><title><![CDATA[New comment by dinfinity in "Rob Pike goes nuclear over GenAI"]]></title><description><![CDATA[
<p>> It is always the eternal tomorrow with AI.<p>ChatGPT is only 3 years old. Having LLMs create <i>grand</i> novel things and synthesize knowledge <i>autonomously</i> is still very rare.<p>I would argue that 2025 has been the year in which the entire world started to make that happen. Many devs now have workflows where small novel things are created by LLMs. Google, OpenAI and the other large AI shops have been working on LLM-based AI researchers that synthesize knowledge this year.<p>Your phrasing seems overly pessimistic and premature.</p>
]]></description><pubDate>Fri, 26 Dec 2025 13:01:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46391607</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=46391607</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46391607</guid></item><item><title><![CDATA[New comment by dinfinity in "Google's year in review: areas with research breakthroughs in 2025"]]></title><description><![CDATA[
<p>The GP's comment is an advertisement for a subscription-based, closed-source application that gets access to your credentials.</p>
]]></description><pubDate>Fri, 26 Dec 2025 11:12:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46391072</link><dc:creator>dinfinity</dc:creator><comments>https://news.ycombinator.com/item?id=46391072</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46391072</guid></item></channel></rss>