<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: larve</title><link>https://news.ycombinator.com/user?id=larve</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 10:51:37 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=larve" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[From "prompt and pray" to prompt engineering]]></title><description><![CDATA[
<p>Article URL: <a href="https://gogogolems.substack.com/p/from-prompt-and-pray-to-prompt-engineering">https://gogogolems.substack.com/p/from-prompt-and-pray-to-prompt-engineering</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47675394">https://news.ycombinator.com/item?id=47675394</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 07 Apr 2026 13:51:42 +0000</pubDate><link>https://gogogolems.substack.com/p/from-prompt-and-pray-to-prompt-engineering</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=47675394</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675394</guid></item><item><title><![CDATA[The Augmentation of Doug Engelbart]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.youtube.com/watch?v=_7ZtISeGyCY">https://www.youtube.com/watch?v=_7ZtISeGyCY</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47565567">https://news.ycombinator.com/item?id=47565567</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 29 Mar 2026 18:07:51 +0000</pubDate><link>https://www.youtube.com/watch?v=_7ZtISeGyCY</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=47565567</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47565567</guid></item><item><title><![CDATA[The Price of Truth]]></title><description><![CDATA[
<p>Article URL: <a href="https://harmoniousdiscourse.substack.com/p/the-price-of-truth">https://harmoniousdiscourse.substack.com/p/the-price-of-truth</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47524742">https://news.ycombinator.com/item?id=47524742</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 25 Mar 2026 23:32:50 +0000</pubDate><link>https://harmoniousdiscourse.substack.com/p/the-price-of-truth</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=47524742</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47524742</guid></item><item><title><![CDATA[Slowing Down in the Age of Coding Agents]]></title><description><![CDATA[
<p>Article URL: <a href="https://the.scapegoat.dev/slowing-down-in-the-age-of-coding-agents/">https://the.scapegoat.dev/slowing-down-in-the-age-of-coding-agents/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47504904">https://news.ycombinator.com/item?id=47504904</a></p>
<p>Points: 18</p>
<p># Comments: 3</p>
]]></description><pubDate>Tue, 24 Mar 2026 16:12:22 +0000</pubDate><link>https://the.scapegoat.dev/slowing-down-in-the-age-of-coding-agents/</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=47504904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47504904</guid></item><item><title><![CDATA[Slowing Down in the Age of Coding Agents]]></title><description><![CDATA[
<p>Article URL: <a href="https://gogogolems.substack.com/p/slowing-down-in-the-age-of-coding">https://gogogolems.substack.com/p/slowing-down-in-the-age-of-coding</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47467843">https://news.ycombinator.com/item?id=47467843</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 21 Mar 2026 15:21:16 +0000</pubDate><link>https://gogogolems.substack.com/p/slowing-down-in-the-age-of-coding</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=47467843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47467843</guid></item><item><title><![CDATA[Simplicity in the age of AI-assisted coding]]></title><description><![CDATA[
<p>Article URL: <a href="https://the.scapegoat.dev/simplicity-in-the-age-of-ai-assisted-coding/">https://the.scapegoat.dev/simplicity-in-the-age-of-ai-assisted-coding/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47398375">https://news.ycombinator.com/item?id=47398375</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 16 Mar 2026 12:55:37 +0000</pubDate><link>https://the.scapegoat.dev/simplicity-in-the-age-of-ai-assisted-coding/</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=47398375</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47398375</guid></item><item><title><![CDATA[Simplicity in the age of AI-assisted coding]]></title><description><![CDATA[
<p>Article URL: <a href="https://gogogolems.substack.com/p/simplicity-in-the-age-of-ai-assisted">https://gogogolems.substack.com/p/simplicity-in-the-age-of-ai-assisted</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47389189">https://news.ycombinator.com/item?id=47389189</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 15 Mar 2026 16:51:01 +0000</pubDate><link>https://gogogolems.substack.com/p/simplicity-in-the-age-of-ai-assisted</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=47389189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47389189</guid></item><item><title><![CDATA[New comment by larve in "The Collective Superstitions of People Who Talk to Machines"]]></title><description><![CDATA[
<p>I think that due to the nature of language, the prompting technique you use is often indeed the best, for you, since it allows you to express yourself “naturally” and thus have more consistent and effective sessions, with the model adopting a similar style and using similar abstractions when building.</p>
]]></description><pubDate>Sat, 14 Mar 2026 16:26:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47378279</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=47378279</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47378279</guid></item><item><title><![CDATA[New comment by larve in "Where's the shovelware? Why AI coding claims don't add up"]]></title><description><![CDATA[
<p>I'm always a very serious person while I wait for people to join the stream. I'm sorry you weren't impressed, but tbf that's not really my goal; I just like building things and yapping about it.</p>
]]></description><pubDate>Thu, 04 Sep 2025 17:00:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45129433</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=45129433</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45129433</guid></item><item><title><![CDATA[New comment by larve in "Where's the shovelware? Why AI coding claims don't add up"]]></title><description><![CDATA[
<p>Not sure what you mean? This was a demo in a live session that took about 30 minutes, including UI ideation (see the PNGs). It’s a reasonably well-featured app and the code is fairly minimal. I wouldn’t be able to write something like that in 30 minutes by hand.</p>
]]></description><pubDate>Thu, 04 Sep 2025 12:28:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45126486</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=45126486</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45126486</guid></item><item><title><![CDATA[New comment by larve in "Where's the shovelware? Why AI coding claims don't add up"]]></title><description><![CDATA[
<p>Since I get downvoted because I guess people don’t believe me: I’m sitting at breakfast reading a book. I suddenly think about YAML streaming parsing, start a GPT research session, dig a bit deeper into streaming-parser approaches, and launch a deep research on streaming parsing, which I will print out and read tomorrow at breakfast and go through by hand. I then take some of the GPT discussion and paste it into Manus, saying:<p>“ Write a streaming go yaml parsers based on the tokenizer (probably use goccy yaml if there is no tokenizer in the standard yaml parser), and provide an event callback to the parser which can then be used to stream and print to the output.<p>Make a series of test files and verify they are streamed properly.”<p>This is the slot machine. It might work, it might be 50% jank, it might be entirely jank. It’ll be a few thousand lines of code that I will skim and run. In the best case, it’s a great foundation to build on more properly. In the worst case it was an interesting experiment and I will learn something about either prompting Manus, or streaming parsing, or both.<p>I certainly won’t dedicate my full code-review attention to what was generated. Think of it more as a hyper-specific Google search returning Stack Overflow posts that go into excruciating detail.<p><a href="https://chatgpt.com/share/68b98724-a8cc-8012-9bee-b9c4a77fe904" rel="nofollow">https://chatgpt.com/share/68b98724-a8cc-8012-9bee-b9c4a77fe9...</a><p><a href="https://manus.im/share/kmsyzuoRHfn1FNjg5NWz17?replay=1" rel="nofollow">https://manus.im/share/kmsyzuoRHfn1FNjg5NWz17?replay=1</a></p>
]]></description><pubDate>Thu, 04 Sep 2025 12:17:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=45126400</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=45126400</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45126400</guid></item><item><title><![CDATA[New comment by larve in "Where's the shovelware? Why AI coding claims don't add up"]]></title><description><![CDATA[
<p>You can look at my GitHub, and I stream full unedited sessions on <a href="https://youtube.com/@program-with-ai" rel="nofollow">https://youtube.com/@program-with-ai</a></p>
]]></description><pubDate>Thu, 04 Sep 2025 11:44:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45126175</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=45126175</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45126175</guid></item><item><title><![CDATA[New comment by larve in "Where's the shovelware? Why AI coding claims don't add up"]]></title><description><![CDATA[
<p>Trivial is a pretty big word in this context. Expanding an idea into some sort of code is indeed a matter of waiting. But the idea, the prompt, and the design of the overall workflow to leverage the capabilities of LLMs/agents in a professional, long-lived codebase context are far from trivial, imo.</p>
]]></description><pubDate>Thu, 04 Sep 2025 11:43:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45126169</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=45126169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45126169</guid></item><item><title><![CDATA[New comment by larve in "Where's the shovelware? Why AI coding claims don't add up"]]></title><description><![CDATA[
<p>I only review what needs to be reviewed; I don’t need to fully review every prototype, shell script, dev tool, etc., only what is in the critical path.<p>But if LLMs show us one thing, it’s how bad our code-review tools are. I have a set of tree-sitter helpers that let me examine different parts of a PR more easily (one that lets me diff semantic parts of the code, instead of “files” and “lines”, one that gives me stats on which subsystems are touched and the cross-correlation of different subsystems, one for attaching metadata and which documents are related to a commit, one for managing our design documents, LLM-coding intermediary documents, long-lasting documents, etc. The proper versions of these are for work, but here’s the initial yolo from Manus: <a href="https://github.com/go-go-golems/vibes/tree/main/2025-08-22/pr-analyzer" rel="nofollow">https://github.com/go-go-golems/vibes/tree/main/2025-08-22/p...</a> <a href="https://github.com/go-go-golems/vibes/tree/main/2025-08-22/commit-context" rel="nofollow">https://github.com/go-go-golems/vibes/tree/main/2025-08-22/c...</a> <a href="https://github.com/go-go-golems/vibes/tree/main/2025-08-15/document-management-system" rel="nofollow">https://github.com/go-go-golems/vibes/tree/main/2025-08-15/d...</a> <a href="https://github.com/go-go-golems/vibes/tree/main/2025-07-29/pr-analyzer-dual" rel="nofollow">https://github.com/go-go-golems/vibes/tree/main/2025-07-29/p...</a>).<p>I very often put some random idea into the LLM slot machine that is Manus, and use the result as a starting point to remold into a proper tool, extracting the relevant pieces as reusable packages. I’ve got a pretty wide treesitter/lsp/git-based set of packages to manage LLM output and assist with better code reviews.<p>Also, every LLM PR comes with _extensive_ documentation / design documents / changelogs, by the nature of how these things work, which helps both humans and LLM-assisted code-review tools.</p>
]]></description><pubDate>Thu, 04 Sep 2025 10:23:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45125624</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=45125624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45125624</guid></item><item><title><![CDATA[New comment by larve in "Where's the shovelware? Why AI coding claims don't add up"]]></title><description><![CDATA[
<p>I don't think so, although I think at that point experience heavily comes into play. With GPT-5 especially, I can basically point cursor/codex at a repo, say "refactor this to this pattern", and come back 25 minutes later to a pretty much impeccable result. In fact that's become my favourite pastime lately.<p>I linked some examples higher up, but I've been maintaining a lot of packages that I started slightly before chatgpt and then refactored and worked on as I progressively moved to the "entirely AI generated" workflow I have today.<p>I don't think it's an easy skill (not saying that to make myself look good; I spent an ungodly amount of time exploring programming with LLMs and still do), akin to thinking at a strategic level vs at a "code" level.<p>Certain design patterns also make it much easier to deal with LLM code: state reducers (redux/zustand for example), event-driven architectures, component-based design systems, and building many CLI tools that the agent can invoke to iterate and correct things. So do certain "tools" like sqlite/tmux: just by telling the LLM "btw you can use tmux/sqlite", you allow it to pass hurdles that would otherwise make it spiral into slop-ratatouille.<p>I also think that a language like Go was a really good coincidence, because it is so amenable to LLM-ification.</p>
]]></description><pubDate>Wed, 03 Sep 2025 23:38:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45121586</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=45121586</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45121586</guid></item><item><title><![CDATA[New comment by larve in "Where's the shovelware? Why AI coding claims don't add up"]]></title><description><![CDATA[
<p>I have linked my GitHub above. I don't know how that fares in the bigger scope of things, but I went from zero open source to hundreds of tools and frameworks and libraries. Putting a number on "productivity" makes no sense to me; I would have no idea what that means.<p>I generate between 10k and 100k lines of code per day these days. But is that a measure of productivity? Not really...</p>
]]></description><pubDate>Wed, 03 Sep 2025 23:21:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45121446</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=45121446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45121446</guid></item><item><title><![CDATA[New comment by larve in "Where's the shovelware? Why AI coding claims don't add up"]]></title><description><![CDATA[
<p>Their main point is "AI coding claims don't add up", as shown by the amount of code shipped. I personally do think some of the more incredible claims about AI coding add up, and am happy to talk about it based on my "evidence", i.e. the software I am building. 99.99% of my code is AI-generated at this point, with the occasional one-liner I fill in because it'd be stupid to wait for an LLM to do it.<p>For example, I've built 5-6 iPhone apps, but they're kind of one-offs and I don't know why I would put them up on the App Store, since they only scratch my own itches.</p>
]]></description><pubDate>Wed, 03 Sep 2025 22:38:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45121136</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=45121136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45121136</guid></item><item><title><![CDATA[New comment by larve in "Where's the shovelware? Why AI coding claims don't add up"]]></title><description><![CDATA[
<p>In case the author is reading this, I have the receipts showing a real step function in how much software I build, especially lately. I am not going to put a number on it because that makes no sense, but I certainly push a lot of code that reasonably seems to work.<p>The reason it doesn't show up online is that I mostly write software for myself and for work, with the primary goal of making things better, not faster. More tooling, better infra, better logging, more prototyping, more experimentation, more exploration.<p>Here's my opensource work: <a href="https://github.com/orgs/go-go-golems/repositories" rel="nofollow">https://github.com/orgs/go-go-golems/repositories</a> . These are not just one-offs (although there's plenty of those in the vibes/ and go-go-labs/ repositories), but long-lived codebases / frameworks that build upon each other and have gone through many, many iterations.</p>
]]></description><pubDate>Wed, 03 Sep 2025 22:08:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=45120912</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=45120912</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45120912</guid></item><item><title><![CDATA[New comment by larve in "MCP doesn't need tools, it needs code"]]></title><description><![CDATA[
<p>codeact is a really interesting area to explore. I expanded upon the JS platform I started sketching out in <a href="https://www.youtube.com/watch?v=J3oJqan2Gv8" rel="nofollow">https://www.youtube.com/watch?v=J3oJqan2Gv8</a> . LLMs know a million APIs out of the box and have no trouble picking more up through context, yet struggle once you give them a few tools. In fact just enabling a single tool definition "degrades" the vibes of the model.<p>Give them an eval() with a couple of useful libraries (say, treesitter), and they are able not only to use it well, but to write their own "tools" (functions) and save massively on tokens.<p>They also allow you to build "ephemeral" apps, because who wants to wait for tokens to stream and an LLM to interpret the result when you could do most tasks with a normal UI, only jumping into the LLM when fuzziness is required?<p>Most of my work on this is sadly private right now, but here are a few repos (github.com/go-go-golems/jesus and <a href="https://github.com/go-go-golems/go-go-goja" rel="nofollow">https://github.com/go-go-golems/go-go-goja</a>) that are the foundation.</p>
]]></description><pubDate>Mon, 18 Aug 2025 14:57:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44941352</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=44941352</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44941352</guid></item><item><title><![CDATA[New comment by larve in ""Just Fucking Ship It" (Or: On Vibecoding)"]]></title><description><![CDATA[
<p>I don't think that COBOL, BASIC, and SQL have failed. They allowed many non-technical people to get started building things with computers. The skills needed to vibe-code (or more generally to build applications with LLMs) are not reading and writing English; they are the skill of using LLMs to build applications.<p>In the context of people not learning "real programming", you can equate LLMs to, say, wordpress plugins or making a squarespace site. Deployment of software has never been gated by how much effort it took to write it; there are millions of wordpress sites out there that get deployed way faster than an LLM can generate code.<p>If we care about the security of it all, then let's build the platforms to have LLMs build secure applications. If we care about the craft of programming, whatever that means in this day and age, then we need to catch people building where they are. I'm not going to tell people not to use computers because they want to cash out; they will just use whatever tool they find anyway. Might as well cash out on them cashing out while also giving them better platforms to build upon.<p>As far as the OP goes, these kinds of security issues due to hardcoded credentials are basically the hallmark of someone shipping a (mobile|web) app for the first time, LLMs or not. The only reason the LLM actually used that is because it was possible for the user to provide it tokens, instead of replit/lovable/expo/whatever providing a proper way to provision these things.<p>Every cash-out-fast bro out there these days uses stripe and doesn't roll their own payment processing anymore. They certainly used to just click a random wordpress plugin instead. That's what I think is a more productive way to tackle the issue.</p>
]]></description><pubDate>Thu, 10 Jul 2025 18:56:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44524281</link><dc:creator>larve</dc:creator><comments>https://news.ycombinator.com/item?id=44524281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44524281</guid></item></channel></rss>