<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: loveparade</title><link>https://news.ycombinator.com/user?id=loveparade</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 16 May 2026 08:21:26 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=loveparade" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by loveparade in "The sigmoids won't save you"]]></title><description><![CDATA[
<p>> If you used the same architecture as GPT2 today you're in for a bad time training a new frontier model.  It's only because we have dozens of breakthroughs<p>What exactly are these dozens of breakthroughs? Most frontier model architectures today still look very much like GPT2 at their core. There have been various improvements like InstructGPT, finetuning techniques, efficiency improvements with KV caches, faster attention, LoRA, better tokenizers, etc. Most of these are about making things run faster. The biggest differentiator has probably been data curation and post-training data, plus the ability to fit more into the model. But I think we have had few breakthroughs that would fall into the category of genuinely different technology.</p>
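<p>For concreteness, a minimal sketch of the KV-cache idea (illustrative NumPy with made-up shapes and weights, not any particular model's code): past keys and values are computed once and reused, so each decoding step only does the attention work for the new token.</p><pre><code>
# Minimal KV-cache sketch: during autoregressive decoding, K/V for past
# tokens are cached and reused; each step computes only the new row.
import numpy as np

d = 8                                  # head dimension (illustrative)
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

k_cache, v_cache = [], []              # grows one row per decoded token

def decode_step(x):                    # x: embedding of newest token, (d,)
    q = x @ Wq
    k_cache.append(x @ Wk)             # only the new token's K/V computed
    v_cache.append(x @ Wv)
    K, V = np.stack(k_cache), np.stack(v_cache)
    w = np.exp(q @ K.T / np.sqrt(d))   # attend over all cached positions
    return (w / w.sum()) @ V           # attention output for the new token

for _ in range(5):                     # decode a few fake tokens
    out = decode_step(rng.normal(size=d))
print(out.shape)                       # (8,)
</code></pre>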
]]></description><pubDate>Sat, 16 May 2026 02:21:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=48156233</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=48156233</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48156233</guid></item><item><title><![CDATA[New comment by loveparade in "Vibe Coding Will Break Your Company"]]></title><description><![CDATA[
<p>The more AI-generated "AI is bad" stories we get, the more likely LLMs are to produce even more of them!</p>
]]></description><pubDate>Tue, 28 Apr 2026 06:31:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47931062</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47931062</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47931062</guid></item><item><title><![CDATA[New comment by loveparade in "I quit drinking for a year"]]></title><description><![CDATA[
<p>I haven't had a drink in 6 months or so. Not because I wanted to stop; I just haven't had the desire to drink recently.<p>Now I would love to tell you about all the amazing, magical health benefits that have come with that, but unfortunately there are none. I feel no difference at all.</p>
]]></description><pubDate>Tue, 28 Apr 2026 01:55:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47929668</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47929668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47929668</guid></item><item><title><![CDATA[New comment by loveparade in "To my students"]]></title><description><![CDATA[
<p>Doesn't matter who reads it. The point is that you will probably never learn to do "high level system design" well if you do not have enough experience writing and refactoring code yourself. It's like wanting to become the head chef of a kitchen and giving instructions without ever having prepped food.<p>There is indeed something useful about trying to write elegant code. Not because others read it, but because that's how you learn about the engineering tradeoffs and abstractions that exist everywhere.</p>
]]></description><pubDate>Tue, 28 Apr 2026 00:55:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47929218</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47929218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47929218</guid></item><item><title><![CDATA[New comment by loveparade in "Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return"]]></title><description><![CDATA[
<p>I give it one to two more years before open source models have fully caught up. Products are commodities, and models are commodities too. GPU cores are still hard to get for inference at scale right now. They need a platform with lock-in, but I'm unsure what that would look like and why it wouldn't be based on open source models.</p>
]]></description><pubDate>Tue, 21 Apr 2026 14:02:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47848965</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47848965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47848965</guid></item><item><title><![CDATA[New comment by loveparade in "Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return"]]></title><description><![CDATA[
<p>Good luck getting GPUs.</p>
]]></description><pubDate>Tue, 21 Apr 2026 13:57:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47848915</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47848915</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47848915</guid></item><item><title><![CDATA[New comment by loveparade in "OpenAI backs Illinois bill that would limit when AI labs can be held liable"]]></title><description><![CDATA[
<p>Not really: when OpenAI was formed in 2015 there were no LLMs, at least none that worked well. It was a regular AI research lab mostly doing reinforcement learning on game environments like Atari, similar to DeepMind. Once they struck gold with LLMs (2019 or so?) and saw there was money to be made, everything changed, as expected when a bunch of SV types get involved.</p>
]]></description><pubDate>Fri, 10 Apr 2026 13:46:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47718106</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47718106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47718106</guid></item><item><title><![CDATA[New comment by loveparade in "We've raised $17M to build what comes after Git"]]></title><description><![CDATA[
<p>I watched the video but I don't quite get it. I feel like I'm missing something? A nicer git workflow is not what I need, because I can ask an LLM to fix my git state and branches. This feels a bit backwards: LLMs are already great at working with raw git as their primitive.<p>I'm curious what long-term vision they pitched to investors.</p>
]]></description><pubDate>Fri, 10 Apr 2026 07:08:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714613</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47714613</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714613</guid></item><item><title><![CDATA[New comment by loveparade in "Claude mixes up who said what"]]></title><description><![CDATA[
<p>Yeah, it looks like a model issue to me. If the harness had a (semi-)deterministic bug and the model were robust to such mix-ups, we'd see this behavior much more frequently. It looks like the model just starts getting confused depending on what's in the context; speakers are just tokens, after all, and are handled in the same probabilistic way as all other tokens.</p>
]]></description><pubDate>Thu, 09 Apr 2026 10:21:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47701685</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47701685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47701685</guid></item><item><title><![CDATA[New comment by loveparade in "I've sold out"]]></title><description><![CDATA[
<p>For me, the reason to add dependencies to my projects is exactly that they are maintained upstream and I don't need to worry about maintaining them myself. If I need to fork and maintain one myself, I'd rather write my own version that perfectly fits my use case, or use another dependency that is maintained.</p>
]]></description><pubDate>Wed, 08 Apr 2026 13:57:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47690300</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47690300</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47690300</guid></item><item><title><![CDATA[New comment by loveparade in "I've sold out"]]></title><description><![CDATA[
<p>Understandable; I'd probably do the same in his position. It still sucks, though: we've seen this pattern a thousand times before, and what happens next is pretty obvious.<p>I was prototyping something with pi under the hood for a personal project; going to switch off of it now.</p>
]]></description><pubDate>Wed, 08 Apr 2026 12:02:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47689033</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47689033</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47689033</guid></item><item><title><![CDATA[New comment by loveparade in "OpenAI says its new model GPT-2 is too dangerous to release (2019)"]]></title><description><![CDATA[
<p>Are you sure they are not just refusing to solve your UI bug due to safety concerns? They may be worried you'll take over the world once your UX becomes too good.</p>
]]></description><pubDate>Wed, 08 Apr 2026 03:36:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47684823</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47684823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47684823</guid></item><item><title><![CDATA[New comment by loveparade in "Show HN: I built a tiny LLM to demystify how language models work"]]></title><description><![CDATA[
<p>It really seems like it's mostly AI comments on this one. Maybe this topic is attractive to all the bots.</p>
]]></description><pubDate>Mon, 06 Apr 2026 07:13:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47657818</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47657818</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47657818</guid></item><item><title><![CDATA[New comment by loveparade in "Show HN: I made a YouTube search form with advanced filters"]]></title><description><![CDATA[
<p>You could manage your subscriptions in an RSS reader; that's what I used to do. Each channel has multiple RSS feeds associated with it for different types of videos (live, VOD, etc.).</p>
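<p>A quick sketch of that setup, in case it's useful. The channel id below is a placeholder, and feedparser is just one way to read the feed; I believe the URL pattern is YouTube's standard per-channel feed, but verify it against a real channel:</p><pre><code>
# Pull a channel's uploads via its RSS/Atom feed instead of the YT UI.
# CHANNEL_ID is a placeholder; check the URL against a real channel.
import feedparser

CHANNEL_ID = "UCxxxxxxxxxxxxxxxxxxxxxx"  # placeholder
url = f"https://www.youtube.com/feeds/videos.xml?channel_id={CHANNEL_ID}"

feed = feedparser.parse(url)
for entry in feed.entries[:5]:           # newest videos first
    print(entry.title, entry.link)
</code></pre>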
]]></description><pubDate>Mon, 06 Apr 2026 02:04:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656155</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47656155</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656155</guid></item><item><title><![CDATA[New comment by loveparade in "Show HN: I made a YouTube search form with advanced filters"]]></title><description><![CDATA[
<p>The whole YouTube experience has gotten so bad over the years. I love the YouTube content, but I wish I didn't have to deal with the UI/UX and recommendations that the YT app forces on me.<p>Shorts are annoying, and I'm trying to keep my watch history clean to "steer" recommendations, but YT keeps adding things to it that I didn't actually watch just because I happened to hover my mouse over a video.</p>
]]></description><pubDate>Mon, 06 Apr 2026 01:53:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656075</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47656075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656075</guid></item><item><title><![CDATA[New comment by loveparade in "Running Gemma 4 locally with LM Studio's new headless CLI and Claude Code"]]></title><description><![CDATA[
<p>Interesting; I don't like codex exactly because of its built-in sandboxing. If I need a sandbox, I'd rather do a simple bwrap myself around the agent process. I prefer that over the agent CLI doing a bunch of sandboxing magic that gets in my way.</p>
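<p>Roughly what I mean, as a sketch: wrap the agent in bubblewrap with read-only system dirs and a single writable project dir. The flags here are standard bwrap options as far as I know, but the paths and the "my-agent" command are placeholders; adapt to your system:</p><pre><code>
# Rough sketch of wrapping an agent CLI in bubblewrap: read-only system
# dirs, one writable project dir, everything else unshared. All paths
# and the "my-agent" command are placeholders.
import subprocess

PROJECT = "/home/me/project"             # placeholder project path
cmd = [
    "bwrap",
    "--ro-bind", "/usr", "/usr",         # system dirs read-only
    "--ro-bind", "/etc", "/etc",
    "--symlink", "usr/bin", "/bin",
    "--symlink", "usr/lib", "/lib",
    "--proc", "/proc",
    "--dev", "/dev",
    "--tmpfs", "/tmp",
    "--bind", PROJECT, PROJECT,          # only the project is writable
    "--chdir", PROJECT,
    "--unshare-all", "--share-net",      # isolate all but the network
    "my-agent",                          # placeholder agent command
]
subprocess.run(cmd, check=False)
</code></pre>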
]]></description><pubDate>Mon, 06 Apr 2026 00:45:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47655582</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47655582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47655582</guid></item><item><title><![CDATA[New comment by loveparade in "Running Gemma 4 locally with LM Studio's new headless CLI and Claude Code"]]></title><description><![CDATA[
<p>What do you recommend? I've tried both pi and opencode and both are better than claude imo, but I wonder if there are others.</p>
]]></description><pubDate>Mon, 06 Apr 2026 00:21:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47655414</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47655414</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47655414</guid></item><item><title><![CDATA[New comment by loveparade in "The threat is comfortable drift toward not understanding what you're doing"]]></title><description><![CDATA[
<p>I see these analogies a lot, but I don't like them. Assembly has a clear contract. You don't need to know how it works because it works the same way each time. You don't get different outputs when you compile the same C code twice.<p>LLMs are nothing like that. They are probabilistic systems at their very core. Sometimes you get garbage. Sometimes you win. Change a single character and you may get a completely different response. You can't easily build abstractions when the underlying system has so much randomness because you need to verify the output. And you can't verify the output if you have no idea what you are doing or what the output should look like.</p>
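<p>A toy illustration of the difference (made-up token scores, nothing model-specific): sampling from the same distribution twice can give different answers, which is exactly what a compiler never does.</p><pre><code>
# Toy temperature-sampling demo: identical input, different outputs across
# seeds. Logits and token strings are made up for illustration.
import numpy as np

logits = np.array([2.0, 1.5, 0.3])       # scores for 3 candidate tokens
tokens = ["garbage", "ok", "great"]

def sample(temperature, seed):
    rng = np.random.default_rng(seed)
    p = np.exp(logits / temperature)     # softmax with temperature
    return rng.choice(tokens, p=p / p.sum())

print([sample(1.0, s) for s in range(5)])   # varies with the seed
print([sample(0.01, s) for s in range(5)])  # near-greedy: nearly constant
</code></pre>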
]]></description><pubDate>Sun, 05 Apr 2026 13:06:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649020</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47649020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649020</guid></item><item><title><![CDATA[New comment by loveparade in "The threat is comfortable drift toward not understanding what you're doing"]]></title><description><![CDATA[
<p>I think it just really depends. There is no fixed rule for how PhD programs are supposed to work. Sometimes your advisor will suggest projects he finds interesting and wants to see done but just doesn't have time to do himself. That's pretty common. Sometimes advisors don't do that and/or want students to come up with their own project proposals, etc.</p>
]]></description><pubDate>Sun, 05 Apr 2026 12:57:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47648953</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47648953</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47648953</guid></item><item><title><![CDATA[New comment by loveparade in "LLM Wiki – example of an "idea file""]]></title><description><![CDATA[
<p>That has been my experience as well. Most of the value of writing docs or a wiki is not in the final artifacts; it's that the process of writing docs updates your own mental models and knowledge so that you can make better decisions down the road.<p>Even if you can get an LLM to output good artifacts that don't eventually evolve into slop, which is questionable, it's really not that useful, especially not for a personal wiki.</p>
]]></description><pubDate>Sun, 05 Apr 2026 09:55:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47647774</link><dc:creator>loveparade</dc:creator><comments>https://news.ycombinator.com/item?id=47647774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47647774</guid></item></channel></rss>