<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: davemp</title><link>https://news.ycombinator.com/user?id=davemp</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 10:16:15 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=davemp" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by davemp in "Bring Back Idiomatic Design (2023)"]]></title><description><![CDATA[
<p>> Tell me you know nothing about web development without saying you know nothing about web dev<p>This Twitterism really bugs me.<p>You took the time to write a really detailed response (much appreciated, you convinced me). There’s no need to explicitly dunk on the OP. Though if you really want to be a little mean (a little bit is fair imo), I think it should be closer to the level of creativity of the rest of your comment. Call them ignorant and say you can’t take them seriously or something. The Twitterism wouldn’t really stand on its own as a comment.<p>Sorry for the nitpicky rant.</p>
]]></description><pubDate>Sun, 12 Apr 2026 16:27:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47741566</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47741566</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47741566</guid></item><item><title><![CDATA[New comment by davemp in "Bring Back Idiomatic Design (2023)"]]></title><description><![CDATA[
<p>I’m a decade+ Linux power user and I still do insane things like pipe output into vim so I can copy and paste without having to remember tmux copy-paste modes when I have vertical panes open.</p>
]]></description><pubDate>Sun, 12 Apr 2026 16:10:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47741396</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47741396</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47741396</guid></item><item><title><![CDATA[New comment by davemp in "AI Will Be Met with Violence, and Nothing Good Will Come of It"]]></title><description><![CDATA[
<p>Sounds promising, honestly. One of the scariest parts of the big AI labs is all of the exclusive training data they get through their UIs. (It’s unclear whether distillation is a feasible way to close the gap.)<p>If there were another party involved, that would (hopefully) diversify the power that (potentially) comes with those streams of data.<p>It’s a bit ironic that the USA has mostly abandoned interoperability after being one of its pioneers with the American system of manufacturing. [0]<p>[0]: <a href="https://en.wikipedia.org/wiki/American_system_of_manufacturing" rel="nofollow">https://en.wikipedia.org/wiki/American_system_of_manufacturi...</a></p>
]]></description><pubDate>Sun, 12 Apr 2026 15:10:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47740632</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47740632</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47740632</guid></item><item><title><![CDATA[New comment by davemp in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>I intentionally said “modern civil society” instead of the USA to avoid talking about specifics.<p>Whether the USA has a sufficiently functional justice system is another topic. My intuition is also that, in the presence of a dysfunctional social system, fixing (or replacing) the system will usually lead to better outcomes than sidestepping it. Not that I really want to talk about the minutiae and challenges of fixing the USA’s justice system.</p>
]]></description><pubDate>Sun, 12 Apr 2026 14:51:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47740417</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47740417</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47740417</guid></item><item><title><![CDATA[New comment by davemp in "Small models also found the vulnerabilities that Mythos found"]]></title><description><![CDATA[
<p>> Across a thousand runs through our scaffold, the total cost was under $20,000<p>Lots of questions about the $20k. Is that raw electricity cost, or subsidized user token cost? If it’s the latter, the actual cost to run these sorts of tasks sustainably could be something like $200k. Even at $50k, a FreeBSD DoS is not an extremely competitive price. That’s like 2–4 months of labor.<p>Don’t get me wrong, I think this seems like a great use for LLMs. It intuitively feels like a much more powerful form of the white-box fuzzing that used techniques like symbolic execution to guide execution contexts toward more important code paths.</p>
]]></description><pubDate>Sat, 11 Apr 2026 23:37:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47734915</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47734915</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47734915</guid></item><item><title><![CDATA[New comment by davemp in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>I’m falling into the Socratic hole [0], but in a modern civil society there is a justice system through which people seek recourse. This has all sorts of desirable effects for societies.<p>Please educate yourself on the basics or at least put more effort in before participating in conversations.<p>[0]: It’s easy to abuse the Socratic method and devolve a discussion into one of first principles. It’s extremely tiresome and a huge waste of everyone’s time.</p>
]]></description><pubDate>Sat, 11 Apr 2026 17:51:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47732560</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47732560</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47732560</guid></item><item><title><![CDATA[New comment by davemp in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>Yeah, a company causing mass death or other disasters is maybe the single clearest signal that it should go bankrupt and someone else should take over (if the tech is really that important).</p>
]]></description><pubDate>Sat, 11 Apr 2026 03:49:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47727163</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47727163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47727163</guid></item><item><title><![CDATA[New comment by davemp in "Sam Altman's response to Molotov cocktail incident"]]></title><description><![CDATA[
<p>“Very likely yes”, I reply to an account that is <1 yr old, with mostly comments in AI topics, many of which violate the HN guidelines (including the one I’m responding to).</p>
]]></description><pubDate>Sat, 11 Apr 2026 03:21:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47727013</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47727013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47727013</guid></item><item><title><![CDATA[New comment by davemp in "AI assistance when contributing to the Linux kernel"]]></title><description><![CDATA[
<p>> We're well past the Turing test now<p>Nope, there is no “The” Turing Test. Go read his original paper before parroting pop-sci nonsense.<p>The Turing test paper proposes an adversarial game to deduce whether the interviewee is human. It’s extremely well thought out. Seriously, read it. Turing mentions that he’d wager something like 70% of unprepared humans wouldn’t be able to discern correctly in the near future. He never claims there is a definitive test that establishes sentience.<p>Turing may have won that wager (impressive), but there are clear tells, similar to “how many r’s are in strawberries?”, that an informed interrogator could reliably exploit.</p>
]]></description><pubDate>Sat, 11 Apr 2026 03:09:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47726931</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47726931</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47726931</guid></item><item><title><![CDATA[New comment by davemp in "AI assistance when contributing to the Linux kernel"]]></title><description><![CDATA[
<p>Interesting. That seems to suggest that one would need to retain the prompts in order to pursue copyright claims if a defendant can cast enough doubt on human authorship.<p>Though I guess such a suit is unlikely if the defendant could just AI wash the work in the first place.</p>
]]></description><pubDate>Sat, 11 Apr 2026 02:36:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47726752</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47726752</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47726752</guid></item><item><title><![CDATA[New comment by davemp in "Are We Idiocracy Yet?"]]></title><description><![CDATA[
<p>Eugenics is taboo because there is a tempting trap to oversimplify and make assertions that are not actually supported by the data.<p>We know that <i>IQ</i> is hereditary to an unknown degree. We have some evidence that IQ relates to intelligence.<p>We don’t really know which genetic variables are responsible. Even simple traits like height are thought to involve ~12k variants.<p>We don’t even really have a good definition of intelligence (see any AGI comment chain).<p>I would say that assuming we have good enough data or models to base important decisions on is unscientific.<p>Decisions like improving education and nutrition don’t really need that kind of data to help.</p>
]]></description><pubDate>Tue, 07 Apr 2026 14:31:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47675997</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47675997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675997</guid></item><item><title><![CDATA[New comment by davemp in "Are We Idiocracy Yet?"]]></title><description><![CDATA[
<p>I always have a problem when folks bring up Idiocracy because of the eugenics angle. It’s extremely unlikely that people are getting inherently stupider, just less educated. The former is some sort of prophecy of doom; the latter is actually actionable.</p>
]]></description><pubDate>Tue, 07 Apr 2026 11:29:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47673542</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47673542</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47673542</guid></item><item><title><![CDATA[New comment by davemp in "DRAM pricing is killing the hobbyist SBC market"]]></title><description><![CDATA[
<p>Easy to spot in a contrived example is not:<p>> impossible to express in Rust<p>I’m not going to argue with Rust folks who misrepresent the language.</p>
]]></description><pubDate>Thu, 02 Apr 2026 16:10:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47616347</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47616347</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47616347</guid></item><item><title><![CDATA[New comment by davemp in "DRAM pricing is killing the hobbyist SBC market"]]></title><description><![CDATA[
<p><pre><code>    let mut foo = [1, 2, 3];
    unsafe {
        *foo.get_unchecked_mut(4) = 5;
    }
</code></pre>
Not sure why Rust evangelists always seem to ignore that unsafe exists.</p>
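For contrast, here is a minimal sketch (my own, assuming current stable Rust) of how the safe slice APIs surface the bounds check that `get_unchecked` skips:

```rust
fn main() {
    let foo = [1, 2, 3];
    // Safe indexing is bounds-checked: out-of-range access yields None
    // (or a panic with the [] operator) instead of undefined behavior.
    assert_eq!(foo.get(4), None);
    // get_unchecked performs no check at all; an out-of-bounds call is UB,
    // which is exactly why it is gated behind `unsafe`.
    let last = unsafe { *foo.get_unchecked(2) };
    assert_eq!(last, 3);
}
```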
]]></description><pubDate>Thu, 02 Apr 2026 00:50:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47608680</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47608680</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47608680</guid></item><item><title><![CDATA[New comment by davemp in "Coding agents could make free software matter again"]]></title><description><![CDATA[
<p>I appreciate the detailed reply, and that there’s subtlety here.<p>I read the linked Bartz case. It’s disappointing that it seems limited to only the copying of books into a data set and not the result of training an LLM on protected works. This is not the “use” that I was discussing and not very interesting.<p>The plaintiffs didn’t even challenge that the outputs of the LLMs infringe. The judge seems to agree (at least by omission) that fair use wouldn’t apply where the outputs aren’t transformative, and in cases where they aren’t:<p>> [anthropic] placed additional software between the user and the underlying LLM to ensure that no infringing output ever reached the users.<p>So this is not true:<p>> he [the judge] details exactly why the facts of the case and prior case law find that training an AI on copyrighted material is fair use<p>The plaintiffs also make really awful arguments about “memorizing” and “learning” that falsely anthropomorphize LLMs. Which the judge shoots down.<p>If we’re going to give LLMs the same rights as humans, there’s unlikely to be much of an argument.<p>I think there’s potential for an argument about how LLMs use “compressed” versions of protected works to _mechanically_ traverse language space. It would be subtle and technical, so maybe not likely to work in our current context.</p>
]]></description><pubDate>Tue, 31 Mar 2026 13:08:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47586822</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47586822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47586822</guid></item><item><title><![CDATA[New comment by davemp in "Coding agents could make free software matter again"]]></title><description><![CDATA[
<p>Yet you still wasted your own time and everyone else’s with a reply that has even less substance.<p>I was making an argument based on quotes from the actual legal code, and you’re saying peons who don’t use the exact correct terminology shouldn’t even consider what should or shouldn’t be legal? What a load of junk. This is a democracy. We’re supposed to be engaging with it.</p>
]]></description><pubDate>Mon, 30 Mar 2026 15:28:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47575562</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47575562</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575562</guid></item><item><title><![CDATA[New comment by davemp in "Coding agents could make free software matter again"]]></title><description><![CDATA[
<p>I replied to someone saying that it’s fair use, which presupposes that it’s a derivative work.</p>
]]></description><pubDate>Mon, 30 Mar 2026 15:16:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47575405</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47575405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575405</guid></item><item><title><![CDATA[New comment by davemp in "Coding agents could make free software matter again"]]></title><description><![CDATA[
<p>Claiming LLM training is fair use is ridiculous, bordering on ignorant or disingenuous.<p>Here’s the four-part test from 17 U.S.C. § 107:<p>1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;<p>Fail. The use is to make trillions of dollars and be maximally disruptive.<p>2. the nature of the copyrighted work;<p>Fail. In many cases at least, the copyrighted code is commercial or otherwise supports livelihoods, and is the result of much high-skill labor with an express stipulation of reciprocity.<p>3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and<p>Fail. They use all of it.<p>4. the effect of the use upon the potential market for or value of the copyrighted work.<p>Fail to the extreme. There is already measurable decline in these markets. The leaders explicitly state that they want to put knowledge workers out of business.<p>- - -<p>Hell, LLMs don’t even pass the sniff test.<p>The only reason this stuff is being entertained is some combination of the prisoner’s dilemma and more classic greed.</p>
]]></description><pubDate>Mon, 30 Mar 2026 10:59:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47572729</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47572729</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47572729</guid></item><item><title><![CDATA[New comment by davemp in "Miasma: A tool to trap AI web scrapers in an endless poison pit"]]></title><description><![CDATA[
<p>> No. Reading something, learning from it, then writing something similar, is legal; and more importantly, it is moral.<p>Machines aren’t human. Don’t anthropomorphize them. The same morals and laws don’t apply.</p>
]]></description><pubDate>Mon, 30 Mar 2026 10:26:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47572547</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47572547</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47572547</guid></item><item><title><![CDATA[New comment by davemp in "Reports of code's death are greatly exaggerated"]]></title><description><![CDATA[
<p>Considering we can only approximate irrational numbers, I’m not sure that’s a given. Maybe we’ll have a breakthrough with some type of analog computing, but we could also just hit physical limits on energy or precision.</p>
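A tiny illustration of the approximation point, assuming IEEE 754 doubles (Rust’s `f64`): the stored “π” is a nearby rational, so identities that hold for the real number fail numerically.

```rust
fn main() {
    // f64 holds the nearest representable rational to pi, not pi itself,
    // so sin(pi) evaluates to a tiny nonzero residual rather than exactly 0.
    let pi = std::f64::consts::PI;
    let residual = pi.sin();
    assert!(residual != 0.0);
    assert!(residual.abs() < 1e-15);
}
```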
]]></description><pubDate>Mon, 23 Mar 2026 22:00:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47495707</link><dc:creator>davemp</dc:creator><comments>https://news.ycombinator.com/item?id=47495707</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47495707</guid></item></channel></rss>