<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: grttww</title><link>https://news.ycombinator.com/user?id=grttww</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 10:50:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=grttww" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by grttww in "Over-editing refers to a model modifying code beyond what is necessary"]]></title><description><![CDATA[
<p>You wrote all that and didn’t address the question lmao.<p>There are diminishing returns, and this idea that people are holding it wrong / need to figure out the complexity goes against everything done over the past 30 years: making things simpler.</p>
]]></description><pubDate>Wed, 22 Apr 2026 22:09:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47869948</link><dc:creator>grttww</dc:creator><comments>https://news.ycombinator.com/item?id=47869948</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47869948</guid></item><item><title><![CDATA[New comment by grttww in "Over-editing refers to a model modifying code beyond what is necessary"]]></title><description><![CDATA[
<p>When you steer a car, there isn’t this degree of probability about the output.<p>How do you emulate that with LLMs? I suppose the objective is to get variance down to the point it’s barely noticeable, but I’m not sure it’ll get there just by accumulating more data and retraining models.</p>
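<p>To make the variance point concrete, here is a minimal sketch (assuming the Hugging Face transformers pipeline, with gpt2 purely as a placeholder model) of the one lever that does exist today: greedy decoding removes sampling variance entirely, so whatever nondeterminism remains comes from the model itself rather than the sampler:</p>
<pre><code># Minimal sketch: greedy decoding removes sampling variance.
# Assumes the Hugging Face transformers library; gpt2 is a placeholder model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# do_sample=False means greedy decoding: the highest-probability token is
# picked at every step, so repeated runs yield the same text.
out = generator("Steering a car is", max_new_tokens=20, do_sample=False)
print(out[0]["generated_text"])
</code></pre>
<p>That trades diversity for determinism, which is roughly the “variance down to barely noticeable” end state; it just doesn’t make the model any more likely to be right.</p>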
]]></description><pubDate>Wed, 22 Apr 2026 18:31:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47867442</link><dc:creator>grttww</dc:creator><comments>https://news.ycombinator.com/item?id=47867442</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47867442</guid></item></channel></rss>