<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: crabbone</title><link>https://news.ycombinator.com/user?id=crabbone</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 13 May 2026 14:20:36 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=crabbone" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by crabbone in "Why senior developers fail to communicate their expertise"]]></title><description><![CDATA[
<p>Another part of the equation is practice.<p>Long before the discussion of the morality of AI went mainstream, I ran into a problem with making what appeared to be ethical choices in automation, and then went on a journey of trying to figure this whole ethics thing out (took courses in university, read some books...)<p>I made an unexpected discovery reading Jonathan Haidt's... either The Righteous Mind or The Happiness Hypothesis.  He claimed that practicing ethics, as is common in religious societies, is an integral and important part of being a good person.  Secular societies, meanwhile, often disregard this aspect and imagine ethics to be something you learn exclusively by reading books or engaging in similar activities that are purely descriptive, with no practice whatsoever.<p>I believe it is the same with expertise.  Part of it is gained through practice, and that part is unskippable.  Practice will also usually require more time than the meta-discussion of the subject.<p>To oversimplify: a novice programmer who listened to every story told by a senior, memorized and internalized them all, but still can't touch-type will be worse at the everyday tasks of their occupation.  It's not enough to know touch-typing exists; one must practice it and become good at it in order to benefit from it.  There are, of course, more, less obvious skills that need practice, where meta-knowledge simply can't be used as a substitute.  
There are cues we learn to pick up by reading product documentation that tell us whether the product will work as advertised, whether the manufacturer will be honest or fair with us, whether the company making the product will go out of business soon or try a bait-and-switch, etc.<p>When children learn to do addition, it's not enough to describe the method to them (start counting from the first summand, count on as many times as the second summand; the last count is the result); they must actually go through dozens of examples before they can reliably put the method to use.  The same property carries over to a lot of other activities, even though we like to think of ourselves as being able to perform a task as soon as we understand its mechanism.</p>
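The counting-on method described for children can be written down in a few lines (a toy illustration, not anything from an actual curriculum):

```python
def add_by_counting(a: int, b: int) -> int:
    """Add non-negative integers the way the method describes:
    start from the first summand, count on once per unit of the second;
    the last count is the result."""
    result = a
    for _ in range(b):
        result += 1  # one "count" per unit of the second summand
    return result
```

Knowing this three-line description is not the same as being fluent in it: the repetitions are still needed before the method becomes reliable.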
]]></description><pubDate>Wed, 13 May 2026 09:42:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=48119762</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=48119762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48119762</guid></item><item><title><![CDATA[New comment by crabbone in "The vi family"]]></title><description><![CDATA[
<p>If the author is reading this: would be nice to have a "family tree" diagram.</p>
]]></description><pubDate>Wed, 13 May 2026 09:11:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=48119561</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=48119561</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48119561</guid></item><item><title><![CDATA[New comment by crabbone in "The bottleneck was never the code"]]></title><description><![CDATA[
<p>> look at say Claude sonnet 3.x. It’s an entire world away in like a year<p>In the area I work in, I find them to be of very little value, both then and now... I see no real difference.  They help in marginal tasks.  E.g. they catch typos, or they help new programmers explore an existing codebase faster.<p>So far, I haven't used a single line of code generated by AI, even though I've seen thousands.  Some of them served to draw attention to a problem, but none solved one successfully.  It was all pretty lame.<p>I see no reason to believe it's going to get better.  Waving hands more forcefully isn't helping; there's no argument behind the promise that "it will get better".  No reason to believe it will...<p>But, more importantly, the AI is applied at a level where the really important things don't happen.  It's automating boilerplate work.  It doesn't make decisions about the important parts.  In the example above, the AI is not capable of choosing the better strategy: use pyproject.toml, or write code to build Python packages?  It's not the kind of decision it's called on to make, and nobody sensible would trust it with such a decision, because there isn't a clear right or wrong answer; only the future will prove one or the other to be the right call.</p>
]]></description><pubDate>Wed, 06 May 2026 21:21:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=48042028</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=48042028</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48042028</guid></item><item><title><![CDATA[New comment by crabbone in "The bottleneck was never the code"]]></title><description><![CDATA[
<p>> What better practices do you mean?<p>I literally listed examples above... Code reviews weren't the norm until some time around 2010 or so.  Then programmers realized that reviews help improve code quality, and, eventually, this became so popular that today virtually everyone does it.<p>Anyway, I'll give an example from something I've personally experienced and contributed to, which isn't as massive a thing as code reviews, but is in the same general category.<p>Long ago, Git didn't have the --force-with-lease option.  Few people used `git rebase` because of that (the only way to publish rebased history was to push with --force, which could destroy someone else's work).  In the company I worked at at the time, we extended Git with what was later implemented as --force-with-lease.  Our motivation was the need for linear history and some other, stricter requirements on the repository history (such as: every commit must compile; retroactive modifications in response to tests added later; etc.)<p>This is an example of how a process that until then was either prone to accidental loss of a programmer's work, or resulted in poorly organized history, was improved by inventing a new ability.  It's also an example of something AI doesn't do, because, at its core, it's a program that tries to replicate the best existing tools and practices.  It won't imagine a new Git feature, because it has no idea what that feature could possibly be; its authors don't know either.<p>> opus 4.7 is all you need, but how can you argue with the fact that adoption since 4.5 has been an inflection point?<p>What did it invent?</p>
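For readers who haven't used it, the workflow described above looks roughly like this in today's Git (an illustrative sketch; branch names are made up):

```shell
# Rewrite local history, then push it only if the remote branch still
# points where we last saw it.  Unlike a plain --force, this refuses
# the push rather than silently clobbering a colleague's newer commits.
git fetch origin
git rebase origin/main my-feature
git push --force-with-lease origin my-feature
```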
]]></description><pubDate>Wed, 06 May 2026 21:12:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=48041925</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=48041925</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48041925</guid></item><item><title><![CDATA[New comment by crabbone in "The bottleneck was never the code"]]></title><description><![CDATA[
<p>No, it's not more AI.  The solution is designing, and sticking to, a development process that is more resilient to errors than the one currently in place.  This isn't a novel idea: code reviews weren't always part of the process, and neither was version control, nor bug tracking, etc.<p>The way AI is set up today, it's trying to replicate the (hopefully) good existing practices, possibly faster.  The real change comes from inventing better practices (something AI isn't capable of, at least not the kind of AI being sold to programmers today).</p>
]]></description><pubDate>Wed, 06 May 2026 19:24:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=48040508</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=48040508</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48040508</guid></item><item><title><![CDATA[New comment by crabbone in "The bottleneck was never the code"]]></title><description><![CDATA[
<p>> systemic tech debt is now addressable at scale with LLMs.<p>Is there any reason to believe this?  So far I've only seen evidence of the contrary.<p>My experience with AI coding aids is that they, generally:<p>1. Don't have an opinion.<p>2. Are trained on code written using practices that increase technical debt.<p>3. Lack the greater perspective, being more focused on the concrete, superficial, and immediate.<p>I think I need to elaborate on the first point and explain how it's relevant to the question.  I'll start with an example.  We have an AI reviewer, and recently we migrated a bunch of the company's repositories from Bitbucket to GitLab.  This also prompted a bunch of CI changes.  Some Python projects I'm involved with, but don't have much authority over, switched to complicated builds involving pyproject.toml (often including dynamic generation of this cursed file) as well as integration with a bunch of novel (but poor-quality) Python infrastructure tools used for building distributable Python artifacts.<p>In the projects where I do have authority, I removed most of the third-party integration.  None of them use pyproject.toml, setup.cfg, or any similar configuration for a third-party build tool.  The project code contains bespoke code to build the artifacts.<p>These two approaches are clearly at odds.  A living, breathing person would believe one or the other to be the right approach.  The AI reviewer had no problem with this situation.  It made some pedantic comments about style and some fantastical, impossible error cases, but completely ignored the fact that, moving forward, these two approaches are bound to collide.  While it appears to have an opinion about the style of quotation marks, it doesn't care at all about strategic decisions.<p>My <i>guess</i> as to why this is the case is that such situations are genuinely rarely addressed in code review.  
Most productive PRs, from which an AI could learn, are built around small, well-defined features in a pre-agreed-upon context.  The context is never discussed in PRs because it's impractical (it would usually require too large a change, so the developers don't even bring up the issue).<p>And this is where the real, glacier-sized deposits of tech debt live: in the issues developers are afraid to mention, understanding that they will never be given the authority and resources to deal with them.</p>
]]></description><pubDate>Wed, 06 May 2026 19:15:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=48040391</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=48040391</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48040391</guid></item><item><title><![CDATA[New comment by crabbone in "Does Employment Slow Cognitive Decline? Evidence from Labor Market Shocks"]]></title><description><![CDATA[
<p>Absolutely!<p>But also: with age more and more doors are closed to you.  Many hobbies become inaccessible.  You may end up with a bunch of choices that all just sound outright depressing.  Losing a job is losing one more choice, restricting yourself to the possibly more boring options that you can still physically pull off.<p>It's just not fun being old.</p>
]]></description><pubDate>Mon, 04 May 2026 19:53:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=48014074</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=48014074</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48014074</guid></item><item><title><![CDATA[New comment by crabbone in "PyInfra 3.8.0"]]></title><description><![CDATA[
<p>> It's amazing to me that we've spent decades with programming languages and environments which can accurately guess what you're about to type next, which have enormous expressiveness<p>You've almost guessed the problem.  Too much expressiveness is a bad thing.  This is a problem I encounter a lot more often than I'd like.  It's very often much <i>easier</i> to build something more generic than what the user actually needs, and then testing it becomes a nightmare.<p>To make this more concrete, here's a case I'm working on right now.  Our company provides customers with a tool to manage large amounts of compute resources (in the HPC domain).  It's possible to run the product on-prem, in different clouds, or in a combination of both.  Typically, the management component comes in via PXE boot and unfolds from there.  A customer wanted integration with a particular cloud provider that doesn't support this management style, nor can it provide a spare disk to be used for management, nor any other way our management component was prepared to boot.<p>The solution was to use a netboot image that would pre-partition the disk and use the first N partitions to store the management component, as well as the boot and ESP / bios_grub partitions, etc.  It had to be incorporated into the existing solution that handles partitioning and mounting all the resources available to a VM, including managing RAIDs, LVM, DM, and so on.<p>The developers implemented it as a GPT partition name with a pre-defined value that instructs our code to ignore the partitions found before the "special" partition and lets the user carry on as usual, pretending that the first fraction of the disk (used by netboot plus the management component) simply doesn't exist.<p>This solved the immediate problem for the user who wanted the ability, but created thousands of problems for QA: what happens if there's a RAID that uses the "hidden" partitions?  
What happens if the user accidentally creates a second /boot partition?  What happens if the user wants whole-disk encryption?  And so on.  It would've been so much better if these questions didn't exist in the first place than to try to answer them, given the "simple" solution the developers came up with.<p>If you've programmed for even a year, I'm sure you've been in this situation at least a few times already.  This is exceedingly common.<p>* * *<p>There's enormous value in being able to restrict the possible ways a program can run.  Most GUI projects? -- They don't need infinite loops!  Allowing them just makes programs unnecessarily hard to verify.  But it's "easy" to have a single loop language element that can be made infinite if necessary.  Configuration languages exclude whole classes of errors simply by making them impossible to express.<p>However, I have to agree that YAML, specifically, is a piss-poor configuration language.  It has way too many problems that overshadow the benefits it offers.  We, collectively, decided to use it because everyone else decided to use it, making it popular... and languages are "natural monopolies".  So, one could certainly do better by ditching YAML, if they can afford to go unpopular.  But ditching the idea of a configuration language is throwing the baby out with the bathwater.</p>
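The "special partition name" scheme described above can be sketched abstractly; every name here is invented for illustration and is not the actual product's:

```python
# Invented sentinel value; the real product uses its own pre-defined name.
SENTINEL = "MGMT.RESERVED.END"

def visible_partitions(partitions):
    """Hide everything up to and including the sentinel-named partition.

    `partitions` is a list of (gpt_name, device) pairs in on-disk order.
    This is exactly what creates the QA questions above: a RAID member,
    a second /boot, or an encrypted volume may still reference the
    hidden region, and this function has no way to know.
    """
    for i, (name, _dev) in enumerate(partitions):
        if name == SENTINEL:
            return partitions[i + 1:]
    return partitions  # no sentinel: the whole disk is visible
```

The generic mechanism is easy to write; enumerating and testing everything it accidentally permits is the hard part.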
]]></description><pubDate>Mon, 04 May 2026 19:34:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=48013835</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=48013835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48013835</guid></item><item><title><![CDATA[New comment by crabbone in "Executable installer will stop being released with Python 3.16"]]></title><description><![CDATA[
<p>> Just to be clear, Python is doing this because they want to.<p>No, they don't.  The story goes more or less like this: ten to fifteen years ago, the Python community started to change.  The change was heavily influenced by the inflow of newcomers.  It was a story very similar to the "eternal September": too many new people entered the field.  Old-timers didn't feel comfortable in the new environment and started to leave.  The exodus was accelerated by the newcomers being in Python for the clout rather than for what the language had to offer, and therefore becoming ever more hostile to the old-timers.<p>There was also Microsoft, embedding its people in positions of power within the community, using all the usual techniques: creating "rules of conduct", then blaming the old-timers for violating them, then kicking them out.  Then they changed the platform for discussion of the language, which allowed the people who enacted this "bloodless coup" to silence dissent, to the point where they can now pretend dissent doesn't exist.<p>So, "want to" is... very contentious here.  The people Microsoft put in charge of Python? -- Yeah, they probably want this, as it ensures tighter integration with proprietary tools and less freedom for people using the language.  It's great!  The people who follow those installed by Microsoft? -- Yeah, they want this in the sense that they don't know any better.  They were told it's a good thing, something they should want, and so they don't know better than to want it.<p>But suppose the people who have to deal with this change were intelligent enough to understand the consequences and see the general direction things are going? -- They wouldn't want it.  This is the kind of "want" you have with sweet carbonated drinks.  Do people want them? -- Absolutely!  Should they want them? -- Well, probably not...<p>So, there are different "shades" of "want".  
In some superficial way, Python programmers on MS Windows do want this and similar changes.  But in a more fundamental way they don't, even though, had you run a poll, you'd get a different response.</p>
]]></description><pubDate>Sun, 03 May 2026 18:12:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47999727</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47999727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47999727</guid></item><item><title><![CDATA[New comment by crabbone in "Shai-Hulud Themed Malware Found in the PyTorch Lightning AI Training Library"]]></title><description><![CDATA[
<p>Oh, that's a sore spot with me, but I'm glad you asked!<p>For the purpose of full disclosure: I have a personal and professional grudge with PyPA, which also touches on how pip is managed, among other packaging issues.  It's not the side you want to be on, so be warned!<p>So, without further ado: I write my own code to generate the deployed artifact.  In my case, I take all the wheels installed in my environment, extract them, and merge them into a single wheel.  The process also usually involves removing a bunch of junk from the packages packaged this way.  You'd be surprised how much nonsense people put in their distributed packages... their unit tests, documentation in HTML / PDF format, __pycache__ files (together with the sources)... the list goes on.<p>But it works because I curate what's being installed.  I don't trust pip to install exactly what I need, and nothing else.  I run it in a separate environment, where I examine the packages that were installed as dependencies and figure out why each of them was installed (you'd be surprised how often you don't need them!).  Then I make a list of the dependencies I actually need, with exact versions and checksums, and use a Python or shell script to download and install them in the actual development environment.<p>This isn't a good idea when you have many short-lived projects, but in my case the typical project lifespan is measured in decades, and there aren't that many of them, so I can expend the extra effort required.<p>Unfortunately, I don't think there's a way to automate the process.  The key point is that there's a human who sifts through the dependencies and figures out what to do with them.  Partially automate it, maybe... but I can't think of a way to turn this into a program I could give someone.</p>
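The verification step of that workflow, checking each downloaded artifact against a curated checksum before letting it near the real environment, can be sketched in a few lines (names invented; pip's hash-checking mode, `--require-hashes`, performs an equivalent check during install):

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded wheel/sdist against a pinned sha256 hex digest.

    Intended for a curated list of (filename, version, checksum) entries
    assembled by a human after reviewing the dependency tree.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large wheels don't have to fit in memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```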
]]></description><pubDate>Fri, 01 May 2026 20:06:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47979573</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47979573</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47979573</guid></item><item><title><![CDATA[New comment by crabbone in "Shai-Hulud Themed Malware Found in the PyTorch Lightning AI Training Library"]]></title><description><![CDATA[
<p>I'm referring to it as an anchor on the timeline.  Not saying it's a Python issue.</p>
]]></description><pubDate>Fri, 01 May 2026 19:54:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47979418</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47979418</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47979418</guid></item><item><title><![CDATA[New comment by crabbone in "Shai-Hulud Themed Malware Found in the PyTorch Lightning AI Training Library"]]></title><description><![CDATA[
<p>Well, the install code can leave behind code that will be executed on the production machine... Being in a container doesn't really help.  While this is a separate problem from the Python ecosystem, people put a lot more faith in the isolation offered by containers than they should.  Also, it's often very tempting to poke holes in that isolation, because it's difficult, sometimes outright impossible, to get things done otherwise.</p>
]]></description><pubDate>Fri, 01 May 2026 19:52:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47979386</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47979386</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47979386</guid></item><item><title><![CDATA[New comment by crabbone in "Running Adobe's 1991 PostScript Interpreter in the Browser"]]></title><description><![CDATA[
<p>PostScript was the first language I ever used <i>professionally</i>! :P<p>At the time, I worked for a printing house in Kyiv that specialized in accidental printing (screen printing, flexo-, tampo-, etc., i.e. mostly printing on weird curved surfaces, not paper).  Triad (full-color) screen printing was all the rage (early-to-mid 90s).  Part of the process of generating the films, which were later used to irradiate the polymer layer covering the screen mold, was bound to bootleg Scitex machines the IDF used for printing maps.  While we had the machines, we didn't have a proper driver that could take a color image, separate it into channels, and instruct the machines to produce the films.  So I'd produce PS files from, e.g., Photoshop (also bootleg...) and then edit them by hand to match the requirements of the Scitex machines.<p>I wasn't a programmer by training, and doing all this absolutely felt like magic.  Something I will never experience with computers again :'(</p>
]]></description><pubDate>Fri, 01 May 2026 19:40:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47979246</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47979246</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47979246</guid></item><item><title><![CDATA[New comment by crabbone in "AI uses less water than the public thinks"]]></title><description><![CDATA[
<p>Yes and no.  We shouldn't compare datacenter water usage to residential water usage.  We should compare it to industrial water usage, because that is what it is.  A question like "how does datacenter water cooling compare to concrete-factory water cooling?" makes some sense from an engineering perspective, as you are comparing oranges to oranges, to a degree.<p>Residential water usage is too different in too many ways to be meaningfully compared to industrial usage.  The scale is different, the wastewater treatment is different, the infrastructure cost is different, the water quality standards are different...</p>
]]></description><pubDate>Fri, 01 May 2026 19:25:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47979059</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47979059</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47979059</guid></item><item><title><![CDATA[New comment by crabbone in "A 1960s art school experiment that redefined creativity"]]></title><description><![CDATA[
<p>In my days in art academy, the running joke was that<p><pre><code>    If you were accepted into the painting faculty, you were an artist,
    If you were accepted into graphics faculty, you were color-blind,
    If you were accepted into sculpture, you were blind,
    If you were accepted into art history, you couldn't be taught to draw.
</code></pre>
While a little cruel... (I was in graphics), the general idea was that art theory, art history, and especially psychology studies around art are absolute rubbish.  These people seem to get into their line of work because they failed as artists (they don't understand, and can't produce, art).<p>Likewise, in this article, the approach to defining creative thinking is... so simplistic, and the test so irrelevant...<p>To give some background as to why a student might choose one approach or the other: if a student wasn't told why they needed to draw a still life, they probably didn't care much for the outcome either.  Artists rarely know why they prefer one composition over another, especially in academic studies like... still life.  To an artist, the selection of objects for a still life is really arbitrary, and their arrangement is arbitrary -- it makes no difference.  To make an <i>interesting</i> still life, one would have to find something in it that would interest other artists.  For example: how can one show the different textures of objects of the same nominal color using color?  Or: would a technique that models volume through the thickness and intensity of contours work on mostly round objects?  And so on.<p>Later, the article tries to assess an artist's accomplishments in ways artists would frown upon.  The number of exhibitions?  Sales in prestigious galleries?  Yeah... as a student I spent some time working in the lab of Kadishman (the guy who draws the same sheep over and over, and then sells it for insane $$).  The "master" doesn't even draw the sheep anymore.  It's all Shinkar / Bezalel students who do it :D  And, honestly, the sheep is one of the biggest frauds I've personally witnessed in this profession (there are, of course, things like the diamond skull from Damien Hirst, which are more expensive because of the materials used, but I haven't had a chance to behold that miracle with my own eyes).</p>
]]></description><pubDate>Thu, 30 Apr 2026 21:55:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47968737</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47968737</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47968737</guid></item><item><title><![CDATA[New comment by crabbone in "Shai-Hulud Themed Malware Found in the PyTorch Lightning AI Training Library"]]></title><description><![CDATA[
<p>> Looking back ten years to `left-pad`, are there more successful attacks now than ever?<p>I can't vouch for the number of attacks, but, since we are talking about Python: nothing has substantially changed since the time of `left-pad`.  The same bad things that enabled supply chain attacks in Python ten years ago are in place today.  However, it looks like there are more projects, and they are more interconnected than before, so it's likely that there are either more supply chain attacks, or that they are more damaging, or both.<p>Here's my anecdotal experience with Python's packaging tools.  For a while, I was maintaining a package to parse the libconfuse configuration language.  It started as a Python 2.7 project, but at the time there was already some version of Python 3 available, so it was written in a way that was supposed to be future-proof.<p>I didn't need to change the code of the project in the last ten or so years, but roughly once a year something would break in setup.py.  Usually because PyPA decided to remove a thing that wasn't bothering anyone.<p>When Python 3.13 came out, like clockwork, setup.py broke.  I rolled up my sleeves and removed the dependency on setuptools; instead, I wrote some Python code that generates a wheel from the project's sources.  I didn't look up the specification of the RECORD file in the dist-info directory, and assumed that sha256().hexdigest() would generate the checksums in the desired format.  And that's how I shipped my packages...<p>Some time later, the company added an AI reviewer to the company's repos, and it discovered that instead of hexdigest() the checksums have to be base64-encoded, with the padding removed...<p>Now, to the punchline: nobody cared.  The incorrectly generated packages installed perfectly fine, without warnings.  Nobody checks the checksums.<p>More so: nobody checks that during `pip install`, or the fancier `uv pip install`, the packages aren't built locally (i.e. 
nobody cares that package installation can result in arbitrary code execution).  It's not just common, it's almost universal to run `pip install` on production machines as a means of deploying a Python program.  How do I know this? -- The company I work for ships its Python client as a... source package.  Not intentionally.  We are just lazy.  But nobody cares.</p>
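For the curious, the format the reviewer was pointing at: per the wheel RECORD specification, the hash field is the urlsafe-base64 encoding of the raw sha256 digest with the trailing "=" padding stripped, not the hex digest. A minimal sketch:

```python
import base64
import hashlib

def record_hash(data: bytes) -> str:
    """Hash entry for a wheel's RECORD file: urlsafe-base64 of the raw
    sha256 digest, '=' padding removed -- not sha256(data).hexdigest()."""
    digest = hashlib.sha256(data).digest()
    b64 = base64.urlsafe_b64encode(digest).rstrip(b"=")
    return "sha256=" + b64.decode("ascii")
```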
]]></description><pubDate>Thu, 30 Apr 2026 20:09:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47967584</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47967584</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47967584</guid></item><item><title><![CDATA[New comment by crabbone in "Why I still reach for Lisp and Scheme instead of Haskell"]]></title><description><![CDATA[
<p>This is not what "subjective" means.  You can't argue something is subjective just because many people disagree with an opinion.<p>When someone argues subjectivity (in a negative sense), they need to show that the opinion <i>does not rely on facts</i>, that it's based on... nothing (feelings).<p>I offered a very easy way to numerically assess the negative impact of the poor design choices made by Haskell's designers.  It's not about what I "feel" about the language: in Java, you write a three-word program, and you usually get a unique interpretation.  In Haskell, you write a three-word program, and you get nine possible interpretations.  It's impossible for a human to examine nine interpretations simultaneously and figure out which of them are valid and might fit the context.  So, reading a Haskell program takes longer and requires more effort than reading a Java program.<p>Of course, Haskell programmers find ways to adapt to their misfortune.  They try to avoid pathological cases (e.g. writing four-word programs, let alone five!), and they memorize a lot of acronyms and non-typographical symbols that they later use to prune the search for a possible meaning of a program.  They invent conventions on top of the bare language design that constrain the search space of possible programs, to make their task easier.<p>It's entirely possible that, after layers of conventions and a long time spent memorizing various acronyms and symbols, Haskell programmers catch up to the speed of programmers in other languages: after all, the superficial difficulties with the language might seem a small price to pay for access to the riches that lie beyond the surface.  The grammar rules alone cannot account for the entire performance of the programmers who choose to write in the language.<p>This situation is very similar to the "universal" (claimed, but not in practice) mathematical language, which is extremely difficult to read, write, edit, typeset... 
yet the tradition of using it prevails, and the overwhelming majority of mathematicians use and prefer the "universal" mathematical language even though much saner alternatives exist.</p>
]]></description><pubDate>Thu, 30 Apr 2026 09:48:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47960221</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47960221</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47960221</guid></item><item><title><![CDATA[New comment by crabbone in "Why I still reach for Lisp and Scheme instead of Haskell"]]></title><description><![CDATA[
<p>> I couldn't disagree more<p>[proceeds to agree on all points]<p>Not even sure what to tell you... Have more introspection?</p>
]]></description><pubDate>Thu, 30 Apr 2026 07:41:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47959444</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47959444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47959444</guid></item><item><title><![CDATA[New comment by crabbone in "Why I still reach for Lisp and Scheme instead of Haskell"]]></title><description><![CDATA[
<p>> Are you mixing tabs and spaces? Maybe an example here would help.<p>This is not what "rules" means.  Rules aren't about what <i>I</i> do.  Rules are about what <i>the language</i> treats as legal or illegal.  I don't write in Haskell at all because I don't like it and have no use for it, but Haskell's rules don't change because of that: they are still mind-bogglingly complex when it comes to telling the programmer whether the next line is indented by the right amount.  None of that complexity is necessary, and all of it could've been avoided if the language used statement delimiters.<p>> No, this is important, so that default strings don't to have to be something crummy.<p>My argument is that to get a little accidental convenience you sacrificed a huge amount of routine convenience.  The mental load of having to distrust a string when you see one is just not worth the accidental convenience of writing a prepared statement and making it appear as if it were a string.  In other words, you are the guy who traded a donkey for three beans, but the beans didn't sprout into a huge ladder that took you to the giant's castle.  You just made a very watery soup, and that was that.<p>> Again, an example would be helpful.<p>Look up the example I gave in the adjacent reply.<p>> I thought lazy execution was widely agreed to be the worst part of Haskell.<p>It's good because it's unique: when it fits the purpose, it's useful for that purpose and nigh irreplaceable.  It's worth having for the sake of research, to understand how languages can be designed and what tools or techniques can be discovered on this path.  This is said from the perspective that Haskell is not an end product, but rather research studying how languages can work and what concepts they can develop.</p>
]]></description><pubDate>Thu, 30 Apr 2026 07:35:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47959401</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47959401</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47959401</guid></item><item><title><![CDATA[New comment by crabbone in "Why I still reach for Lisp and Scheme instead of Haskell"]]></title><description><![CDATA[
<p>I wish there were some sort of single metric that would allow measuring languages against each other and thus determining the best one.  Unfortunately, there are multiple variables, and the relationships between them are unclear.  But, going totally with my gut feeling, some examples of good languages (in terms of ease of reading) include:<p>* Prolog (and, by extension, Erlang).<p>* Pascal.<p>* Java 5 and earlier (and Go, as it's almost Java's twin).<p>These languages somehow manage to hit the sweet spot of enough regularity and enough diversity, with few unexpected syntax constructs (e.g. Pascal and Java have the "dangling else" problem, but it's manageable compared to the problems introduced by optional statement delimiters in, say, Go or JavaScript).  In every case, a programmer must program defensively against these sorts of language "pathologies".<p>To give some examples of questionable or outright bad design decisions:<p>* In Common Lisp (and Scheme, as well as a number of similar languages) there's the problem of identifying which open parenthesis will be closed by the closing parenthesis you are about to type.  Programmers must invent tools and techniques to manage this.<p>* In C++, there is (or, at least, for a long time there was) a laughable rookie "whoopsie" when it comes to ">>" in templates vs the infix operator.  And the "solution" offered by the language designers makes you think they were just... lazy (add a space).<p>Here are also examples of some (perhaps accidentally) good decisions:<p>* Kebab-case in many languages of the Lisp family.  In Latin script, a hyphen at the mid-height of lower-case letters is a better choice than, e.g., the underscore (which is reputed to be "not a typographic character").  Same reason why, e.g., 
in traditional Hebrew, hyphens sit at the height of a capital letter (Hebrew doesn't have lower-case letters, and the shapes of its letters are better suited to hyphens at the top rather than the middle).<p>* Clojure, as well as Racket (afaik, deliberately), introduced more kinds of parenthesis-like delimiters to make it easier to guess which expression is being terminated by the delimiter currently being typed.<p>* * *<p>Note that this is a "superficial" metric, because languages are also valuable for the concepts they are able to express, both in terms of program logic and of the program's application to the hardware it manages; for the ability to process, modify, generate, and analyze the language automatically; for the ability to constrain the language to a desired subset of all available operations... Incorporating all of these into a single metric seems like mission impossible :)</p>
]]></description><pubDate>Thu, 30 Apr 2026 07:21:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47959309</link><dc:creator>crabbone</dc:creator><comments>https://news.ycombinator.com/item?id=47959309</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47959309</guid></item></channel></rss>