<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: didericis</title><link>https://news.ycombinator.com/user?id=didericis</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 09:05:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=didericis" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by didericis in "Technical, cognitive, and intent debt"]]></title><description><![CDATA[
<p>When you press the 2 button, the plus button, the 2 button and the equals button, you are translating your question into bits and operations which are logically guaranteed to yield bits that represent your answer.<p>When you think through what will happen as a result of deterministic code, you are also thinking through what the bits will do, albeit at a higher level of abstraction.<p>When you ask an LLM to do something, you have no guarantee that the intent you provide is accurately translated, and you have no guarantee you’ll get the result you want. If you want your answer to 2+2 to <i>always</i> be 4, you shouldn’t use a non-deterministic LLM. To get that guarantee, the bit manipulation a machine does needs to be logically equivalent to the way you evaluate the question.<p>That doesn’t mean you can’t minimize intent distortion or cognitive debt while using LLMs, or that you can’t think through the logic of whatever problem you’re dealing with in the same structured way a formal language forces you to. But one of my pet peeves is comparing LLMs to compilers. The non-determinism of LLMs and their lack of logical rigidity make them fundamentally different.</p>
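<p>To make the contrast concrete, here’s a minimal sketch in TypeScript (the <i>completePrompt</i> client is hypothetical, purely for illustration):<pre><code>// Deterministic: language semantics guarantee the mapping from input to
// output. This returns 4 every time, on every conforming runtime.
function add(a: number, b: number): number {
  return a + b;
}
add(2, 2); // always 4

// Non-deterministic: a hypothetical LLM client, not a real API. Repeated
// calls with the same prompt may return different strings, and nothing
// guarantees the string even contains "4".
declare function completePrompt(prompt: string, temperature: number): string;
const answer = completePrompt("What is 2 + 2?", 0.7); // maybe "4", maybe prose, maybe wrong
</code></pre>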
]]></description><pubDate>Thu, 23 Apr 2026 11:29:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47874448</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=47874448</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47874448</guid></item><item><title><![CDATA[New comment by didericis in "Technical, cognitive, and intent debt"]]></title><description><![CDATA[
<p>> you didn't think through how to manipulate the bits on the hardware, you just allowed the interpreter to do it<p>If you are thinking through deterministic code, you are thinking through the manipulation of bits in hardware. You are just doing it in a language which is easier for humans to understand.<p>There is a direct mapping of intent.</p>
]]></description><pubDate>Wed, 22 Apr 2026 17:33:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47866677</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=47866677</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47866677</guid></item><item><title><![CDATA[New comment by didericis in "50 years of proof assistants"]]></title><description><![CDATA[
<p>> business rules engines, complex event processing, and related technologies are still marginal in the industry for reasons I don't completely understand<p>Translating between the complex, implicit intent embedded in colloquial language and software, and the formal languages used in proof assistants, is usually very time-consuming and difficult.<p>By the time you’ve formalized the rules, the context in which the rules made sense will have changed/a lot of the formalization will be outdated. Plus, time and money spent on formalizing rules is time and money not spent on core business needs.</p>
]]></description><pubDate>Sat, 13 Dec 2025 15:06:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46255026</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=46255026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46255026</guid></item><item><title><![CDATA[New comment by didericis in "Show HN: Why write code if the LLM can just do the thing? (web app experiment)"]]></title><description><![CDATA[
<p>> a process which is certainly non deterministic<p>The specific events that follow when asking a taxi driver where to go may not be exactly repeatable, but reality enforces physical determinism that is not explicitly understood by probabilistic token predictors. If you drive into a wall you <i>will</i> obey deterministic laws of momentum. If you drive off a cliff you <i>will</i> obey deterministic laws of gravity. These are certainties, not high probabilities. A physical taxi cannot have a catastrophic instant change in implementation and have its wheels or engine disappear when it stops to pick you up. A human taxi driver cannot instantly swap their physical taxi for a submarine; they cannot swap New York with Paris; they cannot pass through buildings… the real world has a physically determined option-space that symbolic token predictors don’t understand yet.<p>And the reason humans are good at interpreting human intent correctly is not just that we’ve had billions of years of training with direct access to physical reality, but that we all share the same basic structure of inbuilt assumptions and “training history”. When interacting with a machine, so many of those basic unstated shared assumptions are absent, which is why it takes more effort to explicitly articulate what it is exactly that you want.<p>We’re getting much better at getting machines to infer intent from plain English, but even if we created a machine which could perfectly interpret our intentions, that still doesn’t solve the issue of needing to explain what you want in enough detail to actually get it for most tasks. Moving from point A to point B is a pretty simple task to describe. Many tasks aren’t like that, and the complexity comes as much from explaining what it is you want as it does from the implementation.</p>
]]></description><pubDate>Sun, 02 Nov 2025 15:34:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45791048</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45791048</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45791048</guid></item><item><title><![CDATA[New comment by didericis in "Show HN: Why write code if the LLM can just do the thing? (web app experiment)"]]></title><description><![CDATA[
<p>> Well the need is to arrive where you are going.<p>In order to get to your destination, you need to explain where you want to go. Whatever you call that “imperative language”, in order to actually get the thing you want, you have to explain it. That’s an unavoidable aspect of interacting with <i>anything</i> that responds to commands, computer or not.<p>If the AI misunderstands those instructions and takes you to a slightly different <i>place</i> than you want to go, that’s a huge problem. But it’s bound to happen if you’re writing machine instructions in a natural language like English and in an environment where the same instructions aren’t consistently or deterministically interpreted. It’s even <i>more</i> likely if the destination or task is particularly difficult/complex to explain at the desired level of detail.<p>There’s a certain irreducible level of complexity involved in directing and translating a user’s intent into machine output simply and reliably that people keep trying to “solve”, but the issue keeps reasserting itself generation after generation. COBOL was “plain English” over half a century ago, and people assumed it would make interacting with computers like giving instructions to another employee.<p>The primary difficulty is not the language used to articulate intent; the primary difficulty is articulating intent.</p>
]]></description><pubDate>Sun, 02 Nov 2025 05:26:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45788022</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45788022</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45788022</guid></item><item><title><![CDATA[New comment by didericis in "We need a clearer framework for AI-assisted contributions to open source"]]></title><description><![CDATA[
<p>> we can generate and regenerate code from specs<p>We can (unreliably) write more code in natural English now. At its core it’s the same thing: detailed instructions telling the computer what it should do.</p>
]]></description><pubDate>Tue, 28 Oct 2025 12:51:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45732190</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45732190</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45732190</guid></item><item><title><![CDATA[New comment by didericis in ""Vibe code hell" has replaced "tutorial hell" in coding education"]]></title><description><![CDATA[
<p>> Except it doesn't. It's much less to verify the tests.<p>This is only true when there is less information in those tests. You can argue that the extra information you see in the implementation doesn't matter as long as it does what the tests say, but the amount of uncertainty depends on the amount of information omitted in the tests. There's a threshold over which the effort of avoiding uncertainty becomes the same as the effort involved in just writing the code. Whether or not you think that's important depends on the problem you're working on and your tolerance for error and uncertainty, and there's no hard and fast rule for that. But if you want to approach 100% correctness, you need to attempt to specify your intentions 100% precisely. The fact that humans make mistakes and miscommunicate their intentions does not change the basic point that a human needs to communicate their intention for a machine to fulfill that intention. The more precise the communication, the more work that's involved, regardless of whether you're verifying that precision after something generates it or generating it yourself.<p>> I can get the same level of uncertainty in far less time with an LLM. That's what makes it great.<p>I have a low tolerance for uncertainty in software, so I usually can't reach a level I find acceptable with an LLM. Fallible people who understand the intentions and current function of a codebase have a capacity that a statistical amalgamation of tokens trained on fallible people's output simply does not have. People may not use their capacity to verify alignment between intention and execution well, but they have it.<p>Again, I'm not denying that there are plenty of problems where the level of uncertainty involved in AI-generated code is acceptable. I just think it's fundamentally true that extra precision requires extra work/there's simply no way to avoid that.</p>
]]></description><pubDate>Sat, 11 Oct 2025 19:06:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45551759</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45551759</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45551759</guid></item><item><title><![CDATA[New comment by didericis in ""Vibe code hell" has replaced "tutorial hell" in coding education"]]></title><description><![CDATA[
<p>> When I'm writing the code myself, it's basically a ton of "plumbing" of loops and ifs and keeping track of counters and making sure I'm not making off-by-one errors and not making punctuation mistakes and all the rest. It actually takes quite a lot of brain energy and time to get that all perfect.<p>All of that "plumbing" affects behavior. My argument is that all of the brain energy used when checking that behavior is necessary in order to check that behavior. Do you have a test for an off-by-one error? Do you have a test to make sure your counter behaves correctly when there are multiple components on the same page? Do you have a test to make sure errors don't cause the component to crash? Do you have a test to ensure non-UTF-8 text or binary data in a text input throws a validation error? Etc etc. If you're checking <i>all</i> the details for correct behavior, the effort involved converges to roughly the same thing.<p>If you're not checking all of that plumbing, you don't know whether or not the behavior is correct. And the level of abstraction used when working with agents and LLMs is <i>not</i> the same as when working with a higher-level language, because <i>LLMs make no guarantees about the correspondence between input and output</i>. Compilers and programming languages are meticulously designed to ensure that output is <i>exactly what is specified</i>. There are bugs and edge cases in compilers and quirks based on different hardware, so it's not always 100% perfect, but it's 99.9999% perfect.<p>When you use an LLM, <i>you have no guarantees about what it's doing</i>, and in a way that's <i>categorically different from not knowing what a compiler does</i>. Very few people know <i>all</i> of the steps that break down `console.log("hello world")` into the electrical signals that get sent to the pixels on a screen on a modern OS using modern hardware given the complexity of the stack, but they <i>do</i> know with as close as is humanly possible to 100% certainty that a correctly configured environment will result in that statement outputting the text "hello world" to a console. They <i>do not need to know the implementation because the contract is deterministic and well defined</i>. Prompts are <i>neither deterministic nor well defined</i>, so if you want to verify an LLM is doing what you want it to do, you have to check what it's doing in detail.<p>Your basic argument here is that you can save a lot of time by trusting the LLM will faithfully wire the code as you want, and that you can write tests to sanity-check behavior and verify that. That's a valid argument, <i>if</i> you're OK with tolerating a certain level of uncertainty about behavior that you haven't meticulously checked or tested. The more you want to meticulously check behavior, the more effort it takes, and the more it converges to the effort involved in just writing the code normally.</p>
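<p>As a sketch of what checking that plumbing costs, assume a hypothetical <i>parseCount</i> helper (TypeScript; names invented for illustration). Every behavior you care about needs its own explicit check:<pre><code>import { strict as assert } from "node:assert";

// Hypothetical helper under test: parses a non-negative count from user input.
function parseCount(input: string): number {
  const n = Number.parseInt(input, 10);
  if (Number.isNaN(n)) throw new Error("not a number");
  if (n >= 0) return n;
  throw new Error("must be non-negative");
}

assert.equal(parseCount("10"), 10);      // happy path
assert.equal(parseCount("0"), 0);        // boundary: off-by-one territory
assert.throws(() => parseCount("-1"));   // rejects negatives
assert.throws(() => parseCount("abc"));  // rejects non-numeric input
// ...and every check you omit is behavior the tests say nothing about.
</code></pre>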
]]></description><pubDate>Sat, 11 Oct 2025 14:56:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=45549643</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45549643</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45549643</guid></item><item><title><![CDATA[New comment by didericis in ""Vibe code hell" has replaced "tutorial hell" in coding education"]]></title><description><![CDATA[
<p>I’ve been saying this for years now: you can’t avoid communicating what you want a computer to do. The specific requirements <i>have</i> to be stated somewhere.<p>Inferring intent from plain English prompts and context is a powerful way for computers to <i>guess</i> what you want from underspecified requirements, but the problem of defining what you want <i>specifically</i> always requires you to convey some irreducible amount of information. Whether it’s code, highly specific plain English, or detailed tests, if you care about correctness they all basically converge to the same thing and the same amount of work.</p>
]]></description><pubDate>Fri, 10 Oct 2025 18:30:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=45542202</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45542202</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45542202</guid></item><item><title><![CDATA[New comment by didericis in "Suspicionless ChatControl must be taboo in a state governed by the rule of law"]]></title><description><![CDATA[
<p>> Such laws cannot be enforced. Enforcement can only be arbitrary.<p>I am against criminalizing cryptography and largely agree that enforcement is infeasible given the extent of proliferation and the ease of replicating it/I'm playing devil's advocate here:<p>Laws banning math related to manufacturing nuclear weapons can be and have been enforced. It's important to take legal threats like ChatControl seriously and not just dismiss them as absurd/unenforceable overreach, even if that's likely true.</p>
]]></description><pubDate>Wed, 08 Oct 2025 18:54:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45519368</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45519368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45519368</guid></item><item><title><![CDATA[New comment by didericis in "Typst: A Possible LaTeX Replacement"]]></title><description><![CDATA[
<p>I agree the standard for mathematical notation is LaTeX, but it’s only needed for fairly limited parts of a document. It makes more sense to me as something you’d use in snippets like `$\sum_{n=1}^{10}n$` than as something that should control the whole document.<p>Markdown and MathJax are imo way more web-friendly than a full LaTeX document and avoid distracting/unnecessary aspects of LaTeX.<p>As for publisher support, that’s what frustrates me most: HTML was specifically designed for academics at CERN to <i>publish and link documents</i>. Instead of using an HTML-friendly format like Markdown, publishers demand a format designed for printing content to a physical piece of paper. So now HTML is used for almost everything <i>except</i> academic papers, and academic papers are all PDFs submitted to journal or preprint servers with no links.</p>
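<p>For example, a Markdown document that reaches for math markup only where notation is needed might look like this (assuming MathJax is configured to accept the usual $...$/$$...$$ delimiters):<pre><code># Sums of consecutive integers

The sum $\sum_{n=1}^{10} n = 55$ is an instance of the identity

$$\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$$

Everything else stays plain, linkable, web-friendly text.
</code></pre>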
]]></description><pubDate>Sun, 28 Sep 2025 05:33:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45402004</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45402004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45402004</guid></item><item><title><![CDATA[New comment by didericis in "Xeres: Uncensorable Peer-to-Peer Communications"]]></title><description><![CDATA[
<p>I’ve talked to their devs/met them in person and trust them. Most of their stack is public/all the primitives they use are available and well documented (see <a href="https://github.com/holepunchto" rel="nofollow">https://github.com/holepunchto</a> and <a href="https://docs.pears.com/" rel="nofollow">https://docs.pears.com/</a>). I’ve used that stack and verified it does what is advertised, and I believe they’re planning a full open source release of the parts that aren’t already public.</p>
]]></description><pubDate>Sun, 28 Sep 2025 05:03:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45401901</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45401901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45401901</guid></item><item><title><![CDATA[New comment by didericis in "Xeres: Uncensorable Peer-to-Peer Communications"]]></title><description><![CDATA[
<p>Looks interesting. I've been using <a href="https://keet.io/" rel="nofollow">https://keet.io/</a> for a good while now, which has similar motivations.</p>
]]></description><pubDate>Sat, 27 Sep 2025 23:26:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45400217</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45400217</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45400217</guid></item><item><title><![CDATA[New comment by didericis in "Typst: A Possible LaTeX Replacement"]]></title><description><![CDATA[
<p>I'm probably ignorant of specific issues that make more advanced typesetting for journal submissions necessary, but I don't understand why some academic flavor of Markdown isn't the standard. I'd advocate for that before either LaTeX or Typst.<p>I absolutely get the importance of typesetting for people who publish physical books/magazines/etc, but when it comes to research I don't see the value of typesetting anything. Journals or print publishers should be responsible for typesetting submissions to fit their style/paper size/etc, and researchers should just be responsible for delivering their research in a format that's simpler and more content-focused.</p>
]]></description><pubDate>Sat, 27 Sep 2025 22:54:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45400031</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45400031</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45400031</guid></item><item><title><![CDATA[New comment by didericis in "A Postmark backdoor that’s downloading emails"]]></title><description><![CDATA[
<p>> the best thing to do is to inform.<p>While also not using them yourself/actively trying to find and strip them out of workflows you have control over.</p>
]]></description><pubDate>Sat, 27 Sep 2025 16:38:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45397313</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45397313</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45397313</guid></item><item><title><![CDATA[New comment by didericis in "Pairing with Claude Code to rebuild my startup's website"]]></title><description><![CDATA[
<p>I just started using aider and recommend it: <a href="https://aider.chat/" rel="nofollow">https://aider.chat/</a><p>It indexes files in your repo, but you can control which specific files to include when prompting and keep it very limited/controlled.</p>
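<p>For example (file paths here are placeholders), you can scope it from the start and adjust mid-session:<pre><code># Launch aider with only the files it's allowed to edit
aider src/app.py src/models.py

# In-chat commands adjust the editable set as you go:
#   /add src/utils.py
#   /drop src/models.py
</code></pre>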
]]></description><pubDate>Fri, 26 Sep 2025 16:41:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45388457</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45388457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45388457</guid></item><item><title><![CDATA[New comment by didericis in "AI’s coding evolution hinges on collaboration and trust"]]></title><description><![CDATA[
<p>> I can't imagine the vendor lock in of that.... You have the source, but it is in such a state that no human can maintain it?<p>It’s much worse than that.<p>What happens when the erroneous output caused by model blind spots gets fed back into the model?<p>Those blind spots get reinforced.<p>Doesn’t matter how small that error rate is (and it’s not small). The errors will compound.<p>Vendor lock-in won’t matter because it will simply stop working/become totally unrecoverable.</p>
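<p>A back-of-the-envelope sketch (a toy independence model, not a measured rate): if each regeneration cycle preserves correctness with probability p, fidelity after n cycles is p^n, which collapses even for tiny error rates:<pre><code>// Toy model: assumed per-cycle fidelity p, independent across cycles.
const p = 0.99;
for (const n of [1, 10, 100, 500]) {
  console.log(n, (p ** n).toFixed(3)); // 1 0.990, 10 0.904, 100 0.366, 500 0.007
}
</code></pre>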
]]></description><pubDate>Fri, 29 Aug 2025 22:29:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45070121</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45070121</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45070121</guid></item><item><title><![CDATA[New comment by didericis in "Some thoughts on LLMs and software development"]]></title><description><![CDATA[
<p>> The more practical question is though, does that matter?<p>I think it matters quite a lot.<p>Specifically for knowledge preservation and education.</p>
]]></description><pubDate>Thu, 28 Aug 2025 21:16:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45057133</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45057133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45057133</guid></item><item><title><![CDATA[New comment by didericis in "Some thoughts on LLMs and software development"]]></title><description><![CDATA[
<p>Part of what got me into software was this: no matter how complex or impressive the operation, with enough time and determination, you could trace each step and learn how a tap on a joystick led to the specific pixels on a screen changing.<p>There’s a beautiful invitation to learn and contribute baked into a world where each command is fully deterministic and spec’d out.<p>Yes, there have always been poorly documented black boxes, but I thought the goal was to minimize those.<p>People don’t understand how much is going to be lost if that goal is abandoned.</p>
]]></description><pubDate>Thu, 28 Aug 2025 20:09:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=45056511</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=45056511</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45056511</guid></item><item><title><![CDATA[New comment by didericis in "AI promised efficiency. Instead, it's making us work harder"]]></title><description><![CDATA[
<p>It works by translating language abstractions to code.<p>The probability of plain English being correctly translated to code depends on existing code and documented abstractions describing lower-level functionality.</p>
]]></description><pubDate>Mon, 04 Aug 2025 23:41:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44792637</link><dc:creator>didericis</dc:creator><comments>https://news.ycombinator.com/item?id=44792637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44792637</guid></item></channel></rss>