<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dexterlagan</title><link>https://news.ycombinator.com/user?id=dexterlagan</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 14:12:01 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dexterlagan" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dexterlagan in "Why are we still using Markdown?"]]></title><description><![CDATA[
<p>Bonus points for MD being readable even when it's not parsed. More bonus points for Sublime Text displaying it as plain text that <i>still</i> looks great. Good enough++</p>
]]></description><pubDate>Sat, 04 Apr 2026 09:09:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47637318</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=47637318</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47637318</guid></item><item><title><![CDATA[New comment by dexterlagan in "Can I run AI locally?"]]></title><description><![CDATA[
<p>Same as the top comment: I've spent a lot of time on local models. IMHO, qwen3.5 is the first model that is actually usable for serious work - and I've tried them all. The 35B 3B is <i>very</i> smart. It understands things no other local model I've ever used does; it's that good. The 9B runs on my slow Mac, and it's also very 'smart'. I can say with confidence that 2026 is the year of the local model, at last.</p>
]]></description><pubDate>Sun, 15 Mar 2026 08:09:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47385301</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=47385301</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47385301</guid></item><item><title><![CDATA[New comment by dexterlagan in "Allow me to get to know you, mistakes and all"]]></title><description><![CDATA[
<p>I used to use LLMs to 'clean up' my own writing, and in the end I agree with the author here: it doesn't really help. The reader comes away with an impression of 'too perfect', and with a diminished sense of value, of honesty. I think we would benefit from a standardized way of signaling text and content that is exclusively human. Say, some sort of logo that says 'genuine', 'untouched by the hand of AI'. I'll be thinking about a way to do this.</p>
]]></description><pubDate>Sun, 15 Mar 2026 08:02:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47385256</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=47385256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47385256</guid></item><item><title><![CDATA[New comment by dexterlagan in "AI can 10x developers in creating tech debt"]]></title><description><![CDATA[
<p>The tech debt this title speaks of only applies if humans have to deal with it. Tech debt is an assumption made on the grounds that humans are still programming and AI does not evolve. It's the opposite of reality.</p>
]]></description><pubDate>Sat, 24 Jan 2026 11:29:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46742713</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=46742713</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46742713</guid></item><item><title><![CDATA[New comment by dexterlagan in "Nvidia Stock Crash Prediction"]]></title><description><![CDATA[
<p>There is one thing everybody forgets when making such predictions: companies don't stand still. Nvidia and every other tech business is constantly exploring new options, taking over competitors, buying startups with novel technologies, etc. Nvidia is no slouch in that regard, and their recent quasi-acquisition of Groq is just one example. So when making predictions, we're looking at a moving target, not a system set in stone. If the people at the helm are smart (and they are), you can expect lots of action and ups and downs - especially in the AI sphere.<p>My personal opinion, having witnessed firsthand nearly 40 years of tech evolution, is that <i>this</i> AI revolution is different. We're at the very beginning of a true paradigm shift: the commoditization of intelligence. If that's not enough to make people think twice before betting against it, I don't know what is. And it's not just computing that is going to change. <i>Everything</i> is about to change, for better or worse.</p>
]]></description><pubDate>Tue, 20 Jan 2026 18:14:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46695606</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=46695606</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46695606</guid></item><item><title><![CDATA[New comment by dexterlagan in "Ideas are cheap, execution is cheaper"]]></title><description><![CDATA[
<p>Execution is cheap? How about you try building a video game, and not 3 obvious and worthless automations I could have made as a quick fix at lunchtime.</p>
]]></description><pubDate>Thu, 15 Jan 2026 14:58:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46633433</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=46633433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46633433</guid></item><item><title><![CDATA[New comment by dexterlagan in "Show HN: I used Claude Code to discover connections between 100 books"]]></title><description><![CDATA[
<p>I had the same idea. I think this is very useful. As it is, it does look like a proof-of-concept, and that's OK. I'd develop this into a book recommendation site and simply link to the books on Amazon or your preferred book source. Collect cash on referrals. Good stuff!</p>
]]></description><pubDate>Sun, 11 Jan 2026 10:31:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46574298</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=46574298</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46574298</guid></item><item><title><![CDATA[New comment by dexterlagan in "GPT-5.2"]]></title><description><![CDATA[
<p>My attempt: <a href="https://www.cleverthinkingsoftware.com/truth-or-extinction/" rel="nofollow">https://www.cleverthinkingsoftware.com/truth-or-extinction/</a></p>
]]></description><pubDate>Fri, 12 Dec 2025 14:21:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46244382</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=46244382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46244382</guid></item><item><title><![CDATA[New comment by dexterlagan in "Comprehension debt: A ticking time bomb of LLM-generated code"]]></title><description><![CDATA[
<p>We've been through many technological revolutions in computing alone over the past 50 years. The rate of progress of LLMs, and of AI in general, over the past 2 years makes me think this worry may be unwarranted, akin to premature optimization. It also seems rooted in a slightly out-of-date, human-centric understanding of the tech/complexity-debt problem. I don't really buy it. Yes, complexity will increase as a result of LLM use. Yes, eventually code will be hard to understand. That's a given, but there's no turning back. Let that sink in: AI will never be as limited as it is today. It can only get better. We will <i>never</i> go back to a pre-LLM world, unless we obliterate all technology in some catastrophe. Today we can already grok nearly any codebase of any complexity, get models to write fantastic documentation, and explain the finer points to nearly anybody. Next year we might not even need to generate any docs: the model built into the codebase will answer any question about it, and will semi-autonomously conduct feature upgrades or more.<p>Staying realistic, we can say with some confidence that within the next 6-12 months, local, open-source models will match their bigger cloud cousins in coding ability, or get very close. Within the next year or two, we will quite probably see GPT6 and Sonnet 5.0 come out, dwarfing all the models that came before. With this, there is a high probability that any comprehension or technical debt accumulated over the past year or more will be rendered completely irrelevant.<p>The benefits of any development done until then, even sloppy, should more than make up for the downsides of tech debt or any overly high complexity. Even if I'm dead wrong and we hit a ceiling in LLMs' ability to grok huge or complex codebases, it is unlikely to appear within the next few months. Additionally, behind closed doors the progress being made is nothing short of astounding. Recent research at Stanford might quite simply change all of these naysayers' minds.</p>
]]></description><pubDate>Tue, 30 Sep 2025 15:48:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45427040</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=45427040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45427040</guid></item><item><title><![CDATA[New comment by dexterlagan in "How to Debug Chez Scheme Programs (2002)"]]></title><description><![CDATA[
<p>Racket has a very nice built-in debugger in its DrRacket editor, with thread visuals and all. Too bad nobody uses DrRacket, or Racket, anymore. Admittedly, even with the best debugger, finding the cause of runtime errors has always been a pain. Hence everybody's moving towards statically compiled, strongly typed languages.</p>
]]></description><pubDate>Thu, 18 Sep 2025 11:12:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45288201</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=45288201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45288201</guid></item><item><title><![CDATA[New comment by dexterlagan in "[dead]"]]></title><description><![CDATA[
<p>I’ve had enough of misinformation. It’s killing our civilization. So I decided to do something about it.</p>
]]></description><pubDate>Mon, 15 Sep 2025 14:15:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45250024</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=45250024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45250024</guid></item><item><title><![CDATA[New comment by dexterlagan in "AI coding made me faster, but I can't code to music anymore"]]></title><description><![CDATA[
<p>This resonates with me, a lot. A few months ago I wrote about my initial thoughts here: <a href="https://www.cleverthinkingsoftware.com/programmers-will-be-replaced-by-people-with-ideas/" rel="nofollow">https://www.cleverthinkingsoftware.com/programmers-will-be-r...</a>
Things have changed quite a bit since then, but I'm glad they changed for the better. Or so it seems.</p>
]]></description><pubDate>Wed, 27 Aug 2025 07:23:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45036476</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=45036476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45036476</guid></item><item><title><![CDATA[New comment by dexterlagan in "From GPT-4 to GPT-5: Measuring progress through MedHELM [pdf]"]]></title><description><![CDATA[
<p>Agreed, but in the case of the lie detector, it seems it's a matter of interpretation. In the case of LLMs, what is it? Is it a matter of saying "It's a next-word calculator that uses stats, matrices and vectors to predict output" instead of "Reasoning simulation made using a neural network"? Is there a better name? I'd say it's "A static neural network that outputs a stream of words after having consumed textual input, and that can be used to simulate, with a high level of accuracy, the internal monologue of a person who would be thinking about and reasoning on the input". Whatever it is, it's not reasoning, but it's not a parrot either.</p>
]]></description><pubDate>Fri, 22 Aug 2025 11:32:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44983226</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=44983226</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44983226</guid></item><item><title><![CDATA[New comment by dexterlagan in "Claude Code is all you need"]]></title><description><![CDATA[
<p>I can imagine there are plenty of use cases, but I could not find one for myself. Can you give an example?</p>
]]></description><pubDate>Tue, 12 Aug 2025 09:16:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44874080</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=44874080</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44874080</guid></item><item><title><![CDATA[New comment by dexterlagan in "I tried every todo app and ended up with a .txt file"]]></title><description><![CDATA[
<p>This is a Xojo project, you can load the source code and all the UI objects by opening the binary project in Xojo (free).</p>
]]></description><pubDate>Tue, 12 Aug 2025 09:05:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44874014</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=44874014</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44874014</guid></item><item><title><![CDATA[New comment by dexterlagan in "I tried every todo app and ended up with a .txt file"]]></title><description><![CDATA[
<p>I made my own. I needed a calendar that showed every todo item per day, and a text editor to edit the tasks just like in a todo.txt. I used it all day, every day, for over 15 years. I still have it installed on nearly all my Win systems, just because it opens instantly and has priorities and colors. I also used it to produce reports for work, so I eventually added an HTML export option to paste directly into an email.<p><a href="https://github.com/DexterLagan/todo-master" rel="nofollow">https://github.com/DexterLagan/todo-master</a></p>
]]></description><pubDate>Mon, 11 Aug 2025 14:23:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44864432</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=44864432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44864432</guid></item><item><title><![CDATA[New comment by dexterlagan in "Running GPT-OSS-120B at 500 tokens per second on Nvidia GPUs"]]></title><description><![CDATA[
<p>I have a similar setup but with 32 GB of RAM. Do you partly offload the model to RAM? Do you use LMStudio or other to achieve this? Thanks!</p>
]]></description><pubDate>Fri, 08 Aug 2025 10:00:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=44835269</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=44835269</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44835269</guid></item><item><title><![CDATA[New comment by dexterlagan in "Sumo – Simulation of Urban Mobility"]]></title><description><![CDATA[
<p>Not to be confused with Suno - Simulation of Musical Ability :)</p>
]]></description><pubDate>Fri, 01 Aug 2025 10:39:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=44755073</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=44755073</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44755073</guid></item><item><title><![CDATA[New comment by dexterlagan in "I watched Gemini CLI hallucinate and delete my files"]]></title><description><![CDATA[
<p>I ended up adding a prompt to all my projects that forbids all these annoying repetitive apologies. Best thing I've ever done to Claude. Now he's blunt, efficient and SUCCINCT.</p>
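<p>For reference, the sort of snippet I mean looks something like this (illustrative wording only, not my exact file):</p>

```markdown
# Project interaction rules (illustrative sketch)
- Never apologize or add filler ("You're absolutely right!", "Sorry for the confusion").
- Be blunt: state the answer or the fix first; explain only if asked.
- Be succinct: no summaries of the work just done unless requested.
```

<p>Dropping a block like this into each project's instructions file is all it took.</p>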
]]></description><pubDate>Wed, 23 Jul 2025 10:37:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44657646</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=44657646</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44657646</guid></item><item><title><![CDATA[New comment by dexterlagan in "Ask HN: Using AI daily but not seeing productivity gains – is it just me?"]]></title><description><![CDATA[
<p>I feel ya, but there's a better way. I've been writing detailed specs to direct LLMs, and that's what changed everything for me. I wrote about it at length: <a href="https://www.cleverthinkingsoftware.com/spec-first-development-the-missing-manual-for-building-with-ai/" rel="nofollow">https://www.cleverthinkingsoftware.com/spec-first-developmen...</a></p>
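<p>To give an idea of what 'detailed' means here, a spec skeleton might look like this (an illustrative outline, not a real project):</p>

```markdown
# Spec: <feature name>
## Goal
One paragraph: what it does and for whom.
## Inputs / Outputs
Exact data shapes, file locations, error cases.
## Constraints
Language, allowed libraries, performance limits, style rules.
## Acceptance tests
Concrete examples: given input X, the program must produce Y.
```

<p>The acceptance-tests section is the part that does most of the work: it gives the LLM something falsifiable to aim at.</p>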
]]></description><pubDate>Mon, 23 Jun 2025 07:27:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44353314</link><dc:creator>dexterlagan</dc:creator><comments>https://news.ycombinator.com/item?id=44353314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44353314</guid></item></channel></rss>