<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mpalmer</title><link>https://news.ycombinator.com/user?id=mpalmer</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 13 May 2026 13:20:27 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mpalmer" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mpalmer in "The vi family"]]></title><description><![CDATA[
<p>Thankfully my work applies inside vi.</p>
]]></description><pubDate>Wed, 13 May 2026 12:59:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=48121328</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=48121328</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48121328</guid></item><item><title><![CDATA[New comment by mpalmer in "Show HN: I asked AI to write Sci-Fi for eternity"]]></title><description><![CDATA[
<p>A novel has a very specific definition, and these texts do not qualify. The gulf between 75 paragraphs and a novel (even a novella) is vast.<p>AI is nowhere near being able to produce a novel worth reading. The author's already heavily qualifying these ~10 page short stories, because the ratio of slop to moderately interesting content is still so unacceptably high.<p>Consider putting some energy towards finding more human sci-fi worth reading, because Asimov might be a pioneer, but he is hardly the most accessible or enjoyable read. I guarantee this brand of chocolate is better.</p>
]]></description><pubDate>Wed, 13 May 2026 12:58:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=48121311</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=48121311</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48121311</guid></item><item><title><![CDATA[New comment by mpalmer in "The vi family"]]></title><description><![CDATA[
<p><pre><code>    I’ve been a long time vim user, and I honestly never really bought into the efficiency claims. That gets repeated over and over, but If you’re a slow typer then no editor can really make much of a difference.

    Little by little your movements become more complex and efficient, and the journey to figuring that out is fun and interesting.
</code></pre>
The slight contradiction in your comment has a lot of truth in it.<p><pre><code>    It’s just fun, and I don’t think that gets talked about enough
</code></pre>
Yes yes yes. Vim can absolutely lead to more efficient text editing, but I agree it has more to do with the fun journey than with typing speed.<p>vi definitely doesn't scratch that "itch" for everyone in the same way. But for me, it's as though I found a cheat code. Getting better at vi feels like getting better at a game - only practicing <i>this</i> game makes you better at any number of tasks that are relevant to your daily work.<p>(although if you also want to get better at typing speed, there are surprisingly fun roguelikes on Steam for just this purpose)</p>
]]></description><pubDate>Wed, 13 May 2026 12:00:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=48120784</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=48120784</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48120784</guid></item><item><title><![CDATA[New comment by mpalmer in "The Future of Obsidian Plugins"]]></title><description><![CDATA[
<p>They don't have to reliably assess whether a plugin is malicious.<p>The checks are a filter so they can apply manual review only to those plugins which pass the baseline (and automatable) requirements.</p>
]]></description><pubDate>Tue, 12 May 2026 17:19:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=48111282</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=48111282</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48111282</guid></item><item><title><![CDATA[New comment by mpalmer in "A web page that shows you everything the browser told it without asking"]]></title><description><![CDATA[
<p>It's the usual terse LLM voice that makes everything sound dramatic. Nails on a chalkboard.</p>
]]></description><pubDate>Fri, 08 May 2026 18:06:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=48066680</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=48066680</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48066680</guid></item><item><title><![CDATA[New comment by mpalmer in "The Old Guard: Confronting America's Gerontocratic Crisis"]]></title><description><![CDATA[
<p>We're talking about democratic republics. How does the one map to the other?</p>
]]></description><pubDate>Thu, 07 May 2026 15:15:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=48050444</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=48050444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48050444</guid></item><item><title><![CDATA[New comment by mpalmer in "Multi-stroke text effect in CSS"]]></title><description><![CDATA[
<p>I would think that quite a few powerful new ideas have come purely from abusing and bashing around older ideas.</p>
]]></description><pubDate>Wed, 06 May 2026 14:00:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=48036357</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=48036357</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48036357</guid></item><item><title><![CDATA[New comment by mpalmer in "Specsmaxxing – On overcoming AI psychosis, and why I write specs in YAML"]]></title><description><![CDATA[
<p>> I think you are confusing the spec as "this is how it must be built", as opposed to, "this is what the software must do and must not do to be acceptable".<p>You are confusing code with application code. The latter thing you describe is a test, which is expressible in code.</p>
]]></description><pubDate>Sun, 03 May 2026 13:05:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47996578</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47996578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47996578</guid></item><item><title><![CDATA[New comment by mpalmer in "AI should elevate your thinking, not replace it"]]></title><description><![CDATA[
<p>"Taking you at your word..."</p>
]]></description><pubDate>Mon, 27 Apr 2026 14:37:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47922218</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47922218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47922218</guid></item><item><title><![CDATA[New comment by mpalmer in "AI should elevate your thinking, not replace it"]]></title><description><![CDATA[
<p>See my comment in this thread for what jumped out the most to me.</p>
]]></description><pubDate>Mon, 27 Apr 2026 13:27:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47921303</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47921303</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47921303</guid></item><item><title><![CDATA[New comment by mpalmer in "AI should elevate your thinking, not replace it"]]></title><description><![CDATA[
<p>Taking you at your word, your A.I. revision process nonetheless seems to have yielded content which may as well have been generated at the start for how difficult it is to get through it.<p><pre><code>    The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.
</code></pre>
This is a list of six things, disguised as an actual paragraph. Of sentence fragments disguised as actual sentences. Etc. Either you wrote this yourself and the AI <i>didn't</i> tell you "this is repetitive and list-y", or...<p><pre><code>    "The software engineers who will be most valuable in the future are not the ones who do everything themselves. They are the ones who refuse to spend time on work that A.I. can do for them, while still understanding everything that is done on their behalf."

    "The danger is not that A.I. will make people lazy in some vague moral sense. It is that it makes it easy to simulate competence without building competence."

    "In that world, the engineer is not replaced by A.I. The engineer becomes more leveraged because they are operating above the level of raw output."

    "The ability to explain why something works, not just that it appears to work."

    "That process is not optional. It is how engineers acquire and elevate their competency."

    "The support system may make you look functional, but it does not make you capable."

    "The challenge is not merely adopting A.I. tools. It is protecting the conditions under which real thinking, learning, and craftsmanship continue to thrive."

    "They will need interview loops that test reasoning, not just polished answers."

    "The organizations that handle this well will not be the ones that simply push A.I. adoption hardest. They will be the ones that learn to separate leverage from dependency, acceleration from imitation, and genuine capability from convincing output."
</code></pre>
^ Which of these are your thoughts? They all look like slop to me.</p>
]]></description><pubDate>Mon, 27 Apr 2026 12:37:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47920802</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47920802</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47920802</guid></item><item><title><![CDATA[New comment by mpalmer in "Moleskine's AI Lord of the Rings collection can only mock"]]></title><description><![CDATA[
<p>Clicking the image expands it. Looks like the real thing to me (and easy enough when you've got the rights; to use AI for this would have been idiotic)</p>
]]></description><pubDate>Mon, 27 Apr 2026 12:14:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47920576</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47920576</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47920576</guid></item><item><title><![CDATA[New comment by mpalmer in "Moleskine's AI Lord of the Rings collection can only mock"]]></title><description><![CDATA[
<p>> it feels analogous to someone being mad that someone isn’t being carried via palanquin through the market after the motorized scooter has been invented.<p>It would be more accurate if for the entire journey, the scooter driver also extolled the virtues of slow, luxurious, human-powered travel.</p>
]]></description><pubDate>Mon, 27 Apr 2026 12:12:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47920553</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47920553</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47920553</guid></item><item><title><![CDATA[New comment by mpalmer in "The West forgot how to make things, now it’s forgetting how to code"]]></title><description><![CDATA[
<p>What hypocrisy is there in distinguishing between the qualitative value of prose vs code? They serve entirely different purposes; your failure to recognize that is no one else's fault.</p>
]]></description><pubDate>Sun, 26 Apr 2026 16:29:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47911542</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47911542</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47911542</guid></item><item><title><![CDATA[New comment by mpalmer in "Average is all you need"]]></title><description><![CDATA[
<p><pre><code>    This is not only average. This is actual magic.

    So let's be real: the SQL is average. The joins are average. The chart is average. And that took us less than 5 minutes and that was amazing, that is the entire point.

    You did not need a data engineer to model your HubSpot data, or a meeting to agree on whether it should be last-click or first-click or linear or time-decay or whatever.

    You needed a query, written fast, on data you already own. Your LLM wrote it. You confirmed it made sense. Your manager got a link.


    Honestly, average is clearly magic; prove me wrong.

</code></pre>
I'll give it a go. This is generated slop, and the poor, factory-made quality of the writing undercuts every aspect of the argument.<p>It is like nails on a chalkboard.</p>
]]></description><pubDate>Fri, 17 Apr 2026 12:34:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47805243</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47805243</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47805243</guid></item><item><title><![CDATA[New comment by mpalmer in "My AI-Assisted Workflow"]]></title><description><![CDATA[
<p>That's fair, but I'm not sure why you chose to address the one part of my comment that isn't responsive to your points.</p>
]]></description><pubDate>Wed, 15 Apr 2026 18:28:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47783170</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47783170</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47783170</guid></item><item><title><![CDATA[New comment by mpalmer in "My AI-Assisted Workflow"]]></title><description><![CDATA[
<p>How exactly did I misquote you?</p>
]]></description><pubDate>Wed, 15 Apr 2026 17:58:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47782780</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47782780</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47782780</guid></item><item><title><![CDATA[New comment by mpalmer in "My AI-Assisted Workflow"]]></title><description><![CDATA[
<p><pre><code>    At that point, asking the model to e.g. note any ambiguities about the task at hand is exactly equivalent to asking it to evaluate any input
</code></pre>
This point is load-bearing for your position, and it is completely wrong.<p>Prompt P at state S leads to a <i>new</i> state SP'. The "common jumping off point" you describe is effectively useless, because we instantly diverge from it by using different prompts.<p>And even if it weren't useless for that reason, LLMs don't "query" their "state" in the way that humans reflect on their state of mind.<p>The idea that hallucinations are somehow less likely because you're asking meta-questions about LLM output is completely without basis.</p>
]]></description><pubDate>Wed, 15 Apr 2026 17:55:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47782743</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47782743</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47782743</guid></item><item><title><![CDATA[New comment by mpalmer in "My AI-Assisted Workflow"]]></title><description><![CDATA[
<p>You're retreating from your position. You started at "major step" and "extremely important", and you've arrived at "there's not <i>no</i> value".</p>
]]></description><pubDate>Wed, 15 Apr 2026 13:44:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47778886</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47778886</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47778886</guid></item><item><title><![CDATA[New comment by mpalmer in "My AI-Assisted Workflow"]]></title><description><![CDATA[
<p>LLMs do not have special or unique insight into how best to prompt them. Not in the slightest.<p><a href="https://aphyr.com/posts/411-the-future-of-everything-is-lies-i-guess#unreliable-narrators" rel="nofollow">https://aphyr.com/posts/411-the-future-of-everything-is-lies...</a></p>
]]></description><pubDate>Wed, 15 Apr 2026 11:18:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47777555</link><dc:creator>mpalmer</dc:creator><comments>https://news.ycombinator.com/item?id=47777555</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47777555</guid></item></channel></rss>