<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: diarrhea</title><link>https://news.ycombinator.com/user?id=diarrhea</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 01:44:53 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=diarrhea" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by diarrhea in "Dropping Cloudflare for Bunny.net"]]></title><description><![CDATA[
<p>Just this month Google shipped what I understand to be hard spending limits in AI Studio/Gemini/whatever it's called this week. I had existing billing <i>alerts</i> (the best you could do before, IIUC), but set these new hard limits up immediately. Feels good!</p>
]]></description><pubDate>Tue, 07 Apr 2026 17:37:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47678722</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=47678722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47678722</guid></item><item><title><![CDATA[New comment by diarrhea in "Cloudflare targets 2029 for full post-quantum security"]]></title><description><![CDATA[
<p><a href="https://news.ycombinator.com/item?id=47677483">https://news.ycombinator.com/item?id=47677483</a></p>
]]></description><pubDate>Tue, 07 Apr 2026 17:21:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47678537</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=47678537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47678537</guid></item><item><title><![CDATA[New comment by diarrhea in "The cult of vibe coding is dogfooding run amok"]]></title><description><![CDATA[
<p>Interesting, though I disagree on basically all points...<p>> No Silver Bullet<p>As an industry, we do not know how to measure productivity. AI coding also does not <i>increase reliability</i>, with how things are going. Same with simplicity: it's the opposite; we're adding obscene complexity in the name of shipping features (which is not <i>productivity</i>).<p>In <i>some</i> areas I can see how AI doubles "productivity" (whatever that means!), but I do not see a 10x on the horizon.<p>> Kernighan's Law<p>Still holds! AI is amazing at debugging, but the vast majority of existing code is still human-written, so AI has an easy time debugging it, as indeed AI can be "twice as smart" as those human authors (in reality it's more like "twice as persistent/patient/knowledgeable/good at tool use/...").<p>Debugging fully AI-generated code with the same AI will fall into the same trap, subject to this law.<p>(As an aside, I do wonder how things will go once we move from "use AI to <i>understand</i> human-generated content" to "use AI to understand AI-generated content"; it will probably work worse.)<p>> just ask AI to rewrite the code<p>This is a terrible idea, unless perhaps there is an existing, exhaustive test harness. I'm sure people will go for this option, but I am convinced it will usually be the wrong approach (as it is today).<p>> Dijkstra on the foolishness of programming in natural language<p>So why are we not seeing repos of <i>just</i> natural language? Just raw prompt Markdown files, generating computer code on the fly, perhaps in any programming language we desire? For the sake of argument, assume LLMs could regenerate everything <i>instantly</i> at will.<p>For two reasons. First, the prompts would need to rise to a level of precision indistinguishable from a formal specification. Second, complexity really does become "exponentially harder": inaccuracies inherent to natural language would compound.
We still <i>need</i> to persist results in formal languages. They remain the ultimate arbiter. We're now just (much) better at generating large amounts of them.<p>> Lehman’s Law<p>This reminds me of a recent article [0]. Let AI run loose without genuine effort to curtail complexity and (with current tools and models) the project will need to be thrown out before long. It is a self-defeating strategy.<p>I think of this as the Peter principle applied to AI: it will happily keep generating more and more output until it is "promoted" past its competence, at which point an LLM plus tooling can no longer make sense of its own prior outputs. Advancements such as longer context windows just inflate the numbers (more understanding, but also more generating, ...).<p>The question is, will the market care? If software today goes wrong in 3% of cases, and with widespread AI use it goes wrong in, say, 7%, will people care? Or will we just keep chugging along, happy with all the new, more featureful, but more faulty software? After all, we know about the Peter principle, yet it's unavoidable and we're happy to keep going.<p>> Jevons Paradox<p>My understanding is the exact opposite: we might well see a further proliferation of information technologies, into remaining sectors that have not yet been (economically) accessible.<p>0: <a href="https://lalitm.com/post/building-syntaqlite-ai/" rel="nofollow">https://lalitm.com/post/building-syntaqlite-ai/</a></p>
]]></description><pubDate>Mon, 06 Apr 2026 20:32:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47666627</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=47666627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47666627</guid></item><item><title><![CDATA[New comment by diarrhea in "Axios compromised on NPM – Malicious versions drop remote access trojan"]]></title><description><![CDATA[
<p>uv supports it, <a href="https://docs.astral.sh/uv/reference/settings/#exclude-newer" rel="nofollow">https://docs.astral.sh/uv/reference/settings/#exclude-newer</a></p>
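<p>For illustration, per the linked settings reference the cutoff goes under `[tool.uv]` in pyproject.toml (the date below is made up):</p>

```toml
[tool.uv]
# Ignore any package versions published after this timestamp,
# e.g. to sit out the first days after a supply-chain incident.
exclude-newer = "2026-03-24T00:00:00Z"
```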
]]></description><pubDate>Tue, 31 Mar 2026 16:18:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47589690</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=47589690</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47589690</guid></item><item><title><![CDATA[New comment by diarrhea in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>This take was accurate about 2 years ago, up until perhaps one year ago. Current capabilities far exceed what you are outlining, for example using Claude Opus models in a harness such as Claude Code or OpenCode.</p>
]]></description><pubDate>Mon, 23 Mar 2026 20:04:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47494395</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=47494395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47494395</guid></item><item><title><![CDATA[New comment by diarrhea in "Cloudflare flags archive.today as "C&C/Botnet"; no longer resolves via 1.1.1.2"]]></title><description><![CDATA[
<p>I use Unbound (a recursive resolver), with AdGuard Home in front (it just forwards to Unbound). Unbound could do ad-blocking itself as well, but it's more cumbersome there than in AGH, so I use two tools for the time being.<p>The upside is that no single entity receives all your queries. The downside is that there's no encryption (IIRC the root servers do not support it), so your ISP sees your queries (but does not <i>receive</i> them).</p>
]]></description><pubDate>Sun, 22 Mar 2026 11:57:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47476596</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=47476596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47476596</guid></item><item><title><![CDATA[New comment by diarrhea in "What canceled my Go context?"]]></title><description><![CDATA[
<p>I have the same question; I am confused by the premise of this article. Now you're recording everything twice?</p>
]]></description><pubDate>Sun, 08 Mar 2026 20:09:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47300879</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=47300879</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47300879</guid></item><item><title><![CDATA[New comment by diarrhea in "What canceled my Go context?"]]></title><description><![CDATA[
<p>Agree. I am not sure I understand the premise of the article. You're now recording encountered errors twice, which can look like<p><pre><code>    cancel(fmt.Errorf(
        "order %s: payment failed: %w", orderID, err,
    ))
    return fmt.Errorf("order %s: payment failed: %w", orderID, err)
</code></pre>
Not only that, isn't this a "lie"? You're cancelling the context explicitly, but that's not necessary, is it? At the moment the above call fails, the called-into functions might not have cancelled the context, and cleanup running later on might then refuse to run on this eagerly cancelled context. There is no need to cancel this eagerly.<p>Perhaps I'm not seeing the problem being solved, but a bog-standard `return err` with "lazy" context cancellation (in a top-level `defer cancel()`), or eager cancellation (in a leaf I/O goroutine), seems to carry the same functionality. Stacking both with near-identical information seems redundant.</p>
]]></description><pubDate>Sun, 08 Mar 2026 20:07:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47300858</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=47300858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47300858</guid></item><item><title><![CDATA[New comment by diarrhea in "Nobody ever got fired for using a struct"]]></title><description><![CDATA[
<p>The Feldera folks speak from <i>lived experience</i> when they say 100+ column tables are common <i>in their customer base</i>. They speak from lived experience when they say there's no correlation <i>in their customer base</i>.<p>Feldera provides a service. They did not design these schemas. Their customers did, and probably over such long time periods that those schemas cannot be called <i>designed</i> anymore -- they just <i>happened</i>.<p>IIUC Feldera works primarily in OLAP, where I have no trouble believing these schemas are common. At my $JOB they are, because that shape works well for the type of data we process. Some OLAP DBs might not even support JOINs.<p>The Feldera folks are simply reporting on their experience, and people are saying they're... <i>wrong</i>?</p>
]]></description><pubDate>Fri, 06 Mar 2026 09:35:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47272875</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=47272875</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47272875</guid></item><item><title><![CDATA[New comment by diarrhea in "Guix System First Impressions as a Nix User"]]></title><description><![CDATA[
<p>I have been using nixos-rebuild with --target-host and it has been totally fine.<p>The only thing I have not solved is password-protected sudo on the target host. I deploy using a dedicated user, which has passwordless sudo set up. Seems like a necessary evil.</p>
]]></description><pubDate>Sat, 31 Jan 2026 20:25:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46840450</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=46840450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46840450</guid></item><item><title><![CDATA[New comment by diarrhea in "When AI 'builds a browser,' check the repo before believing the hype"]]></title><description><![CDATA[
<p>In thermodynamics, you ultimately need to input work to remove entropy from a system (e.g. by cooling it, at the cost of heating the surroundings). Humans do the same for software.<p>I am an avid user of LLMs but I have not seen them remove entropy, not even once. They only add. It’s all on the verge of tech debt, and it takes substantial human effort to keep entropy increases in check. Anyone can add 100 lines, but it takes genuine skill to do it in 10 (and I don’t mean code golf).<p>And to truly remove entropy (cut useless tests, cut useless features, DRY up, find genuine abstractions, talk to the PM to avoid building more crap, …) you still need humans. LLM-built systems eventually collapse under their own chaos.<p>I think your analogy is quite fitting!</p>
]]></description><pubDate>Mon, 26 Jan 2026 22:13:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46772316</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=46772316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46772316</guid></item><item><title><![CDATA[New comment by diarrhea in "What came first: the CNAME or the A record?"]]></title><description><![CDATA[
<p>Randomly fail or (increasingly) delay a random subset of all requests.</p>
]]></description><pubDate>Mon, 19 Jan 2026 20:14:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46683905</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=46683905</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46683905</guid></item><item><title><![CDATA[New comment by diarrhea in "You Need to Ditch VS Code"]]></title><description><![CDATA[
<p>Interesting attack on my person!<p>The things I want to get done on my computer are so much richer than moving files back and forth in a terminal all day. Simple things should be simple. Tools should enable us. Moving files is a means to an end and these commands having so many sharp edges makes me unhappy indeed.<p>But yes, of course I am an IDE toddler and cannot even tie my shoes without help from an LLM, thanks for reminding me.</p>
]]></description><pubDate>Fri, 02 Jan 2026 13:52:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46464724</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=46464724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46464724</guid></item><item><title><![CDATA[New comment by diarrhea in "You Need to Ditch VS Code"]]></title><description><![CDATA[
<p>I have trouble gauging the effects of `*`, aka "will I cp/mv <i>the dir</i> or <i>all contents of the dir</i>?" Then shell expansion gets in the way, like expanding it all ("argument list too long"). I try to use rsync when possible, where `.` components in paths are significant, as are trailing slashes... and don't forget `-a`! Then I forget whether those same things apply to cp/mv. Then I cp one directory too deep and need to undo the mess -- no CTRL+Z!<p>The widespread use of `rm -rf some-ostensibly-empty-dir/` is also super dangerous, when `rm -r` or even `rmdir` would suffice, but you need to remember those exist. I find it strange there's no widely available (where applicable) `rm-to-trash`, which would be wildly safer and is table stakes for desktop work otherwise.<p>Then there's `dd`...<p>I use terminals a lot, but a GUI approach (potentially with feedback pre-operation) plus undo/redo functionality is just so much easier for file system work. As dumb as drag-and-drop with a mouse is, a single typo can't wreck your day.</p>
]]></description><pubDate>Tue, 30 Dec 2025 11:49:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46432364</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=46432364</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46432364</guid></item><item><title><![CDATA[New comment by diarrhea in "You Need to Ditch VS Code"]]></title><description><![CDATA[
<p>I've had a <i>color scheme</i> plugin yanked from my IDE a while back, after it went malicious (Material Theme). It's just a bunch of hex codes; how is that even possible? Baffling and disappointing indeed.</p>
]]></description><pubDate>Tue, 30 Dec 2025 11:32:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46432232</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=46432232</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46432232</guid></item><item><title><![CDATA[New comment by diarrhea in "You Need to Ditch VS Code"]]></title><description><![CDATA[
<p>> git on the command line is the power tool. The VS Code plugin is the training wheels version.<p>I don't disagree with your underlying point, but git is perhaps the worst example. It is a <i>horrible</i> tool in several ways. Powerful yes, but it pays too large a price in UX.<p>I've only ever used git through the CLI as well, but having switched to jujutsu (also CLI) I am not going back. It is quite eye-opening how much simpler git <i>should</i> be, for the average user (I realize "average user" is doing some heavy-lifting here -- git covers an enormous number of diverse use cases).<p>What jujutsu-CLI is for me (version control UX that just works) might be VSCode's GUI git integration for other people, or magit, or GitButler, or whatever other GUI or TUI.<p>Who cares about training wheels? If the real deal is a unicycle with one pedal and no saddle, I will keep using training wheels, thank you very much.</p>
]]></description><pubDate>Tue, 30 Dec 2025 11:27:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46432189</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=46432189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46432189</guid></item><item><title><![CDATA[New comment by diarrhea in "You Need to Ditch VS Code"]]></title><description><![CDATA[
<p>All IDEs, including VSCode, have fuzzy-finding for files, and fuzzy-finding for symbols as well. Between those two, I never find myself using the file tree, except when it's genuinely the best tool for the job: "what other files are in this directory?", file tree manipulation (which IDEs recognize you doing, adjusting imports for you!), etc.<p>I do notice that while this pattern is very fast, I lose the mental map of a code base. Coworkers might take longer to open any individual file but have a much better idea of the repo layout as a whole, which makes them effective in other ways.</p>
]]></description><pubDate>Tue, 30 Dec 2025 11:19:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46432106</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=46432106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46432106</guid></item><item><title><![CDATA[New comment by diarrhea in "You Need to Ditch VS Code"]]></title><description><![CDATA[
<p>No thank you...<p>One of the primary things shells are supposed to excel at is file system navigation and manipulation, but the experience is horrible. I can never get `cp`, `rsync`, `mv`, `find` right without looking them up, despite trying to learn. It's way too easy to mess up irreversibly and destructively.<p>One example is flattening a directory. You accidentally git-cloned one level too deep, into `some/dir/dir/`, and want to flatten the contents into `some/dir/`, deleting the then-empty `some/dir/dir/`. Trivial and reversible in any file manager, needlessly difficult in a shell. I get it wrong all the time.<p>Similarly, iterating over a list of files (important yet trivial). `find` is arcane in its invocation and differs between Unix flavors; you will want `xargs`, but `{}` substitution is also very error-prone. And don't forget `-print0`! And don't you even dare `for f in *.pdf`; it will blow up in more ways than you can count. Also prepare to juggle quotes and escapes and pray you get them right. Further, bash defaults are insane (pray you don't forget `set -euo pipefail`).<p>How can we be failed by our tools so much, for these use cases? Why aren't these things trivial and <i>safe</i>?</p>
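<p>For what it's worth, a sketch of the flatten that also catches dotfiles (hypothetical paths; assumes GNU coreutils/findutils for `mv -t`):</p>

```shell
set -euo pipefail

# Demo setup: a clone that landed one level too deep.
mkdir -p some/dir/dir
touch some/dir/dir/a.txt some/dir/dir/.hidden

# Move every direct child of the inner dir, hidden files included.
# -mindepth 1 -maxdepth 1 matches direct children only, sidestepping
# the question of whether * expands dotfiles.
find some/dir/dir -mindepth 1 -maxdepth 1 -exec mv -t some/dir {} +

rmdir some/dir/dir  # fails loudly if anything was left behind
```

<p>It still assumes no name collisions between the two levels; plain `mv` clobbers silently, so add `-n` or `-i` to guard against that.</p>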
]]></description><pubDate>Tue, 30 Dec 2025 11:08:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46432020</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=46432020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46432020</guid></item><item><title><![CDATA[New comment by diarrhea in "Dagger: Define software delivery workflows and dev environments"]]></title><description><![CDATA[
<p>> just another kind of complex?<p>To some extent, yes. If all you have is two GitHub Actions YAML files you are not going to reap massive benefits.<p>I'm a big fan of CUE myself. The benefits compound as you need to output more and more artifacts (= YAML config). Think of several k8s manifests, several GitHub Actions files, e.g. for building across several combinations of OSes, settings, etc.<p>CUE strikes a really nice balance between being primarily data description and not a Turing-complete language (e.g. cdk8s can get arbitrarily complex and abstract), reducing boilerplate (having you spell out the common bits <i>once</i> only, and each non-common bit <i>once</i> only) and being type-safe (validation at build/export time, with native import of Go types, JSON Schema and more).<p>They recently added an LSP, which helps close the gap to other ecosystems. For example, cdk8s being TS means it naturally has fantastic IDE support, an area where CUE has been lacking. CUE error messages can also be very verbose and unhelpful.<p>At work, we generate a couple thousand lines of k8s YAML from ~0.1x of that in CUE. The CUE is commented liberally, with validation imported from native k8s types and sprinkled in where needed otherwise (e.g. we know for our application the FOO setting needs to be between 5 and 10). The generated YAML is clean, without any indentation and quoting worries. We also generate YAML-in-YAML, i.e. our application takes YAML config, which itself sits in an outer k8s YAML ConfigMap. YAML-in-YAML is normally an enormous pain and easy to get wrong. In CUE it's just `yaml.Marshal`.<p>You get a lot of benefit for a comparatively simple mental model: all your CUE files form just one large document, and for export to YAML it's merged. Any conflicting values and any missing values fail the export. That's it. The mental model of e.g. cdk8s is massively more complex and has unbounded potential for abstraction footguns (being TypeScript).
Not to mention CUE is Go and ships as a single binary; the CUE v0.15.0 you use today will still compile and work 10 years from now.<p>You can start very simple, with CUE looking not unlike JSON, and add CUE-specific bits from there. You can always rip out the CUE and keep just the generated YAML, or replace CUE with e.g. cdk8s. It's not a one-way door.<p>The cherry on top is CUE scripts/tasks. In our case we use a CUE script to split the one large document (tens of thousands of lines) into separate files, according to some criteria. This is all defined in CUE as well, meaning I can write ~40 lines of CUE (this has a bit of a learning curve) instead of ~200 lines of cursed, buggy bash.</p>
]]></description><pubDate>Sun, 14 Dec 2025 16:11:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46264082</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=46264082</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46264082</guid></item><item><title><![CDATA[New comment by diarrhea in "Anthropic acquires Bun"]]></title><description><![CDATA[
<p>The OS is my GC. It's why I segfault liberally.</p>
]]></description><pubDate>Fri, 05 Dec 2025 21:18:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46167453</link><dc:creator>diarrhea</dc:creator><comments>https://news.ycombinator.com/item?id=46167453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46167453</guid></item></channel></rss>