<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: wizzwizz4</title><link>https://news.ycombinator.com/user?id=wizzwizz4</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 11 Apr 2026 12:52:24 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=wizzwizz4" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by wizzwizz4 in "Ads in ChatGPT"]]></title><description><![CDATA[
<p>The classified ads section in a newspaper is valuable, and you can discard it. (If you meant ads stuffed around articles: yes, that annoys me, but I'm also not familiar enough with the papers that do that to name one.)</p>
]]></description><pubDate>Fri, 10 Apr 2026 14:50:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47719018</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47719018</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47719018</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "Ads in ChatGPT"]]></title><description><![CDATA[
<p>He'd already written about it: <a href="https://xkcd.com/632/" rel="nofollow">https://xkcd.com/632/</a></p>
]]></description><pubDate>Fri, 10 Apr 2026 14:47:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47718973</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47718973</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47718973</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "Škoda DuoBell: A bicycle bell that penetrates noise-cancelling headphones"]]></title><description><![CDATA[
<p>"Next to nothing in inconvenience" is the perception <i>now</i>. It certainly wasn't the perception when seatbelts were introduced. The ability to listen to personal music while walking is less than 50 years old: before that, you had the radio or nothing. Even <i>that</i> would not be an intolerable inconvenience for most. But I was more thinking:<p>> People should not hear loud music when driving - max is normal speaking voice level.<p>which feels like a more than acceptable constraint to me.</p>
]]></description><pubDate>Wed, 08 Apr 2026 11:10:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47688571</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47688571</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47688571</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "Škoda DuoBell: A bicycle bell that penetrates noise-cancelling headphones"]]></title><description><![CDATA[
<p>Why would enforcement be necessary, given assumptions 1 and 2 (not stupid, not murderers), and awareness? Around these parts, seatbelt enforcement isn't necessary because everyone voluntarily wears their seatbelt – except for children, occasionally, but the adults are generally capable of enforcing that. (Even teenagers / young adults being irresponsible in cars generally wear seatbelts while doing so.)</p>
]]></description><pubDate>Wed, 08 Apr 2026 11:01:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47688490</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47688490</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47688490</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "Škoda DuoBell: A bicycle bell that penetrates noise-cancelling headphones"]]></title><description><![CDATA[
<p>How do we enforce seatbelts? (1) Assume the public aren't stupid. (2) Assume the public aren't murderers. (3) Explain the risk-benefit analysis through informative videos like <a href="https://en.wikipedia.org/wiki/Julie_(1998_film)" rel="nofollow">https://en.wikipedia.org/wiki/Julie_(1998_film)</a>.<p>People can shout "domestic terror" all they like, but if it's not true, it's not true.</p>
]]></description><pubDate>Wed, 08 Apr 2026 10:27:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47688140</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47688140</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47688140</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "The threat is comfortable drift toward not understanding what you're doing"]]></title><description><![CDATA[
<p>I'm willing to bet 10% of my net worth on this. But my claim was not about any given untrained child (for instance, a child who does not want to program would do poorly): a fair bet would allow me to choose the child, you to choose the LLM, use a task and programming language of the child's choice, and have a neutral third-party familiar with the programming language judge "better code". (I would, of course, want to ensure that the judge used an appropriate rubric: RLHF can produce a sophisticated turd-polisher. Perhaps the evaluation process could involve modifications made to the program?)<p>It is (rightly) difficult to get hold of <i>one</i> uninvolved child, for safeguarding reasons, so it would be better to run it as a school (or interschool) competition, where multiple children may participate. For fairness, you may also provide multiple LLM participants (however you define that). The winner of the contest, as determined by the judge, would then determine the winner of the bet – unless the winning child had been trained, in which case we would fall back to the next-highest-ranked participant. The number of LLM candidates would be equal to the number of eligible children.<p>However, I don't see a good way to allow each child to pick a programming language and task, without leaving the competition results incomparable. So perhaps each child should be paired with an LLM, and the judge should determine which submission from each pair is better? But then if I only need one victory (to support my claim), this is clearly unfair. So each pair should be tested enough to determine whether they're consistently better than the LLM… but then we are demanding a <i>lot</i> of the child participants, for no real benefit to them.<p>If we can agree on a workable protocol, I can try to pull some strings and see if we can make this happen. I could use the money.</p>
]]></description><pubDate>Tue, 07 Apr 2026 17:09:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47678384</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47678384</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47678384</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "The threat is comfortable drift toward not understanding what you're doing"]]></title><description><![CDATA[
<p>LLMs are still making fundamentally the same kinds of errors that they made in 2021. If you check my HN comment history, you'll see I <i>predicted</i> these errors, just from skimming the relevant academic papers (which is to say they're obvious: I'm far from the only person saying this). There is no theoretical reason we should expect them to go away, unless the model architectures fundamentally change (and no, GPT -> LLaMA is not a fundamental change), because they're not removable discontinuities: they're indicative of fundamental capability gaps.<p>I don't care how many terms you add to your Taylor series: your polynomial approximation of a sine wave is never going to be suitable for additive speech synthesis. Likewise, I don't care how good your predictive-text transformer model gets at instrumental NLP subtasks: it will never be a good programmer (except as far as it's a plagiarist). Just look at the Claude Code source code: if <i>anyone's</i> an expert in agentic AI development, it's the Claude people, and yet the codebase is utterly unmaintainable dogshit that <i>shouldn't work</i> and, on further inspection, <i>doesn't</i> work.<p>That's not to say that no computer program can write computer programs, but <i>this</i> computer program is well into the realm of diminishing returns.</p>
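<p>To make the Taylor-series point concrete, here's a throwaway sketch (the <code>taylor_sin</code> helper is mine, not from any real synthesis code): a truncated Maclaurin series tracks sine beautifully near zero, then escapes to infinity, because every finite polynomial does.</p>

```python
import math

def taylor_sin(x: float, terms: int) -> float:
    """Truncated Maclaurin series for sin(x): sum of (-1)^k x^(2k+1)/(2k+1)!."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# Near zero, ten terms are essentially exact...
print(abs(taylor_sin(1.0, 10) - math.sin(1.0)))  # rounding-level error

# ...but sin stays within [-1, 1] forever, while any finite polynomial
# eventually blows up: at x = 50 the ten-term approximation is enormous.
print(taylor_sin(50.0, 10))  # nowhere near sin(50.0)
```

<p>Adding more terms just pushes the blow-up point further out; it never buys you the periodicity that additive synthesis needs.</p>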
]]></description><pubDate>Sun, 05 Apr 2026 15:11:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47650240</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47650240</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47650240</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "The threat is comfortable drift toward not understanding what you're doing"]]></title><description><![CDATA[
<p>From the article:<p>> There's a common rebuttal to this, and I hear it constantly. "Just wait," people say. "In a few months, in a year, the models will be better. They won't hallucinate. They won't fake plots. The problems you're describing are temporary." I've been hearing "just wait" since 2023.<p>We're <i>not</i> trending towards superintelligence with these AIs. We're trending towards (and, in fact, have already reached) superintelligence with <i>computers in general</i>, but LLM agents are among the least capable known algorithms for the majority of tasks we get them to do. The problem, as it usually is, is that most people don't have access to the fruits of obscure research projects.<p>Untrained children write better code than the most sophisticated LLMs, without even noticing they're doing anything special.</p>
]]></description><pubDate>Sun, 05 Apr 2026 12:49:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47648888</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47648888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47648888</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "Slop is not necessarily the future"]]></title><description><![CDATA[
<p>Laziness is a virtue: <a href="https://thethreevirtues.com/" rel="nofollow">https://thethreevirtues.com/</a>. The bots aren't lazy: they're incompetent.</p>
]]></description><pubDate>Sun, 05 Apr 2026 12:44:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47648834</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47648834</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47648834</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "Mayor of Paris removed parking spaces, reduced the number of cars"]]></title><description><![CDATA[
<p>I believe that was LaGrange's point.</p>
]]></description><pubDate>Sat, 21 Mar 2026 16:53:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47468755</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47468755</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47468755</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "Mayor of Paris removed parking spaces, reduced the number of cars"]]></title><description><![CDATA[
<p>Electric cars tend to be heavier than ICE cars. This means their tyres wear out faster, shedding more plastic dust into the air. (We're still not sure of the health impacts of microplastics, but we do know they accumulate in various organs, including the brain.) They also throw up road dust, and we <i>know</i> that rock dust is really bad to breathe in. Air pollution is still present. Compared to ICE cars fitted with catalytic converters, electric cars are probably better, but just because you can't smell their emissions doesn't mean they aren't still reducing the air quality.<p>They're also still tonnes of metal hurtling along the streets of a city shared by pedestrians, which is inherently dangerous. (Less so than a bus, but there are also more cars than buses: you'd have to check the statistics to see how that evens out.) As for actually damaging the road (producing road dust, potholes, etc., requiring a resurface that off-gases for weeks afterwards): cars damage the road more than bikes, though that's not significant compared to lorries, since the wear scales with something ludicrous like the fourth power of the weight-per-axle.</p>
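<p>To put rough numbers on that fourth-power rule of thumb (the axle loads below are illustrative round figures I've made up, not measurements):</p>

```python
# Relative road wear under the fourth-power rule of thumb:
# wear scales with (axle load)^4. Axle loads in tonnes per axle,
# illustrative round figures only.

def relative_wear(axle_load: float, reference: float) -> float:
    """Wear relative to a reference axle load, per the fourth-power rule."""
    return (axle_load / reference) ** 4

bike_axle = 0.05    # ~50 kg per wheel for bike plus rider
car_axle = 1.0      # ~2 t car on two axles
lorry_axle = 10.0   # ~40 t lorry on four axles

print(relative_wear(car_axle, bike_axle))   # ~160,000x the wear of a bike
print(relative_wear(lorry_axle, car_axle))  # ~10,000x the wear of a car
```

<p>On those made-up figures, one lorry does as much damage as roughly ten thousand cars, which is why car-versus-bike wear barely registers in resurfacing terms.</p>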
]]></description><pubDate>Sat, 21 Mar 2026 14:40:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47467475</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47467475</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47467475</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "4Chan mocks £520k fine for UK online safety breaches"]]></title><description><![CDATA[
<p>I think they mean the fact that UK plug sockets are earthed, and contain a mechanism that prevents you from shorting live and neutral with a bent fork, even though those safety mechanisms are rarely the last line of defence (hence "over-engineered"… you can probably tell that I disagree with that assessment).</p>
]]></description><pubDate>Thu, 19 Mar 2026 20:40:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47445718</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47445718</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47445718</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "A sufficiently detailed spec is code"]]></title><description><![CDATA[
<p>> <i>The paradigms literally are different. […] They’re extremely far apart.</i><p>And yet, you can write pure-functional thunked streams in Python (and have the static type-checker enforce strong type checking), and high-level duck-typed OO with runtime polymorphism in Haskell.<p>The hardest part is getting a proper sum type going in Python, but duck typing comes to the rescue. You can write `MyType = ConstructA | ConstructB | ConstructC` where each ConstructX type has a field like `discriminant: Literal[MyTypeDiscrim.A]`, but that's messy. (Technically, you can use the type itself as a discriminant, but that means you have to worry about subclasses; you can fix that by introducing an invariance constraint, or by forbidding subclasses….) It shouldn't be too hard to write a library to deal with this nicely, but I haven't found one. (<a href="https://pypi.org/project/pydantic-discriminated/" rel="nofollow">https://pypi.org/project/pydantic-discriminated/</a> isn't quite the same thing, and its comparison table claims that everything else is worse.)</p>
]]></description><pubDate>Thu, 19 Mar 2026 17:56:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47443290</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47443290</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47443290</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "A sufficiently detailed spec is code"]]></title><description><![CDATA[
<p>Humans also have the ability to <i>introspect</i>. Ultimately, (nearly) every software project is intended to provide a service to humans, and most humans are similar in most ways: "what would I want it to do?" is a surprisingly-reliable heuristic for dealing with ambiguity, especially if you know where you should and shouldn't expect it to be valid.<p>The best LLMs can manage is "what's statistically-plausible behaviour for descriptions of humans in the corpus", which is not the same thing at all. Sometimes, I imagine, that might be more useful; but for programming (where, assuming you're not reinventing wheels or scrimping on your research, you're often encountering situations that nobody has encountered before), an alien mind's extrapolation of statistically-plausible human behaviour observations is not useful. (I'm using "alien mind" metaphorically, since LLMs do not appear particularly mind-like to me.)</p>
]]></description><pubDate>Thu, 19 Mar 2026 11:45:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47437757</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47437757</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47437757</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "'It's sweet. It's bitter. It's ours.' The chocolate ritual that binds my family"]]></title><description><![CDATA[
<p>I didn't notice the dashes, and I thought <i>that</i> construction was actually fine. I first noticed whatever I'm noticing at the passage:<p>> <i>It was never just about the candy. It was about being together.</i><p>And then, a subsequent paragraph ends:<p>> <i>But the ritual remained.</i><p>I'll allow "Grandpa’s chocolate was something else entirely. It was sacred." because (while it fits the pattern) it actually <i>works</i>; but we see the same pattern again later:<p>> <i>It’s sweet. It’s bitter. It’s ours.</i><p>I'll give the benefit of the doubt for the overall structure of the narrative, too, because maybe the author's picked up (or, heck, perhaps <i>pioneered</i>) the traditional Facebook Anecdote Genre. I'll even give a pass for the style, because it <i>resembles</i> the writing style of <a href="https://www.csmonitor.com/1996/0502/050296.home.home.1.html" rel="nofollow">https://www.csmonitor.com/1996/0502/050296.home.home.1.html</a> – if somewhat less skilful. But those parts I've identified as least skilful are also those parts I've identified as most AI-like.<p>Now look at when Nancy Intrator contributed to the Christian Science Monitor: 1994–1997. Sure, it's <i>plausible</i> that she's contributing again after a long hiatus (and the facts of the story back this up)… but we've got a lot of "plausible"s adding up.<p>You mentioned the en-dashes (which I don't treat as an AI indicator; I use them myself, even!). Nancy's 90s writing <i>did</i> use dashes, but much less frequently: <a href="https://www.csmonitor.com/1994/1228/28172.html" rel="nofollow">https://www.csmonitor.com/1994/1228/28172.html</a> has 4 dashes in a much longer piece (all spaced ASCII, but we can chalk that difference up to software), whereas TFA has 11! (<a href="https://www.csmonitor.com/1995/0329/29161.html" rel="nofollow">https://www.csmonitor.com/1995/0329/29161.html</a> has no dashes at all, but it's a slightly different genre of anecdote, so I'll ignore that as an outlier.)<p>If I had to guess, I would say that there was a shorter draft written by the real Nancy Intrator (or someone pretending to be her, but <i>probably</i> the real Nancy), which was then expanded by a generative AI system. The additional research I've done for this comment has not changed my guess.</p>
]]></description><pubDate>Wed, 18 Mar 2026 09:05:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47423334</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47423334</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47423334</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "'It's sweet. It's bitter. It's ours.' The chocolate ritual that binds my family"]]></title><description><![CDATA[
<p>There's something distressing about <i>this</i> being AI-generated, given the subject matter.</p>
]]></description><pubDate>Tue, 17 Mar 2026 21:23:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47418527</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47418527</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47418527</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "Polymarket gamblers threaten to kill me over Iran missile story"]]></title><description><![CDATA[
<p>It means I shouldn't listen to them <i>in general</i>. The LessWrongers are mainly wrong about things they think they understand: when they <i>aren't</i> overconfident, their improvisational skills tend to be decent. They were an excellent source of information about COVID-19, but they're a terrible source of information in the areas where they think they have expertise.<p>When there's a crisis, it's still worth checking in to see what the LessWrongers are saying about it, because it <i>might</i> be very useful, and it's pretty easy to tell: you just check whether it looks like they're doing science, or Rationalism™®, and only investigate further in the rare cases where it's the former.</p>
]]></description><pubDate>Mon, 16 Mar 2026 18:47:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47403069</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47403069</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47403069</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "Polymarket gamblers threaten to kill me over Iran missile story"]]></title><description><![CDATA[
<p>They were right about Bitcoin getting big (though I'm not aware of anyone putting their money where their mouth was), and they were a decent source of information leading up to the peak of the COVID-19 pandemic (which probably saved a handful of lives). Just because they're <i>almost</i> always aggressively wrong, that doesn't mean they're aggressively wrong about <i>everything</i>.</p>
]]></description><pubDate>Mon, 16 Mar 2026 17:05:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47401702</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47401702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47401702</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "MoD sources warn Palantir role at heart of government is threat to UK security"]]></title><description><![CDATA[
<p>That's part of it, but not the whole story. If Palantir were a book explaining how to implement data aggregation systems effectively, people wouldn't be so wary of it. (Critics would still object that data aggregation was performed in the first place, of course, but there wouldn't be the additional "and it's Palantir".)</p>
]]></description><pubDate>Mon, 16 Mar 2026 16:12:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47400944</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47400944</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47400944</guid></item><item><title><![CDATA[New comment by wizzwizz4 in "Hostile Volume – A game about adjusting volume with intentionally bad UI"]]></title><description><![CDATA[
<p>I had to use it for #19, since YouTube doesn't load on my machine. Patching it would make the game unplayable past level 19.</p>
]]></description><pubDate>Sat, 14 Mar 2026 21:23:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47381341</link><dc:creator>wizzwizz4</dc:creator><comments>https://news.ycombinator.com/item?id=47381341</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47381341</guid></item></channel></rss>