<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: yarn_</title><link>https://news.ycombinator.com/user?id=yarn_</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 18:26:16 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=yarn_" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by yarn_ in "AI is making me dumb"]]></title><description><![CDATA[
<p>> I think much of the world of software has become incredibly myopic.
> usually taking the easy way out is just deferring the costs to your future. Problem is that those costs accrue interest.<p>This sums up my thoughts lately perfectly; that is a great way to put it.<p>Programmers have <i>never</i> been any good at measuring or estimating their own productivity, and there is no reason to assume that has changed (one could argue there's ample reason to assume the opposite).<p>Part of the problem as well is that there is some unseen/unnamable "spaghettiness"/"sloppiness"/"whatever" factor that scales very poorly. At the beginning it can seem fine, especially when you have some constant speed multiplier like an LLM spitting out code - but the larger exponent of the function that results from this factor being "worse" will eventually outpace that constant multiplier. You will only see it once it's too late, or will never see it at all because of our myopia, as you say.</p>
]]></description><pubDate>Fri, 15 May 2026 02:45:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=48143951</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=48143951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48143951</guid></item><item><title><![CDATA[New comment by yarn_ in "Async Rust never left the MVP state"]]></title><description><![CDATA[
<p>sad but true</p>
]]></description><pubDate>Tue, 05 May 2026 18:17:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=48026394</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=48026394</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48026394</guid></item><item><title><![CDATA[New comment by yarn_ in "Async Rust never left the MVP state"]]></title><description><![CDATA[
<p>>the devil lies in the details<p>This is true, but perhaps not uniquely so when compared to the platform dependence the standard library already has. File semantics, sync primitive guarantees and implementations, timers and timer resolutions, etc. have subtle differences between platforms that the Rust stdlib makes no further guarantees about.</p>
]]></description><pubDate>Tue, 05 May 2026 18:08:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=48026270</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=48026270</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48026270</guid></item><item><title><![CDATA[New comment by yarn_ in "'Staggering' number of people believe unproven health claims"]]></title><description><![CDATA[
<p>>People disagree on a bunch of extremely politicized topics within the realm of nutrition and health which is famously complex and hard to understand even for experts in the field.<p>Yup, I'm "staggered".</p>
]]></description><pubDate>Mon, 04 May 2026 20:40:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=48014676</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=48014676</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48014676</guid></item><item><title><![CDATA[New comment by yarn_ in "AI should elevate your thinking, not replace it"]]></title><description><![CDATA[
<p>>We don't hire juniors and throw them boilerplate and tiny bugs while expecting them to learn along the way ad hoc<p>Huh? This is exactly what almost everyone does</p>
]]></description><pubDate>Tue, 28 Apr 2026 02:10:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47929754</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47929754</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47929754</guid></item><item><title><![CDATA[New comment by yarn_ in "SDL bans AI-written commits"]]></title><description><![CDATA[
<p>Another good example of "the people writing good code with AI are the people who could have done it regardless"</p>
]]></description><pubDate>Thu, 16 Apr 2026 16:42:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47796040</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47796040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47796040</guid></item><item><title><![CDATA[New comment by yarn_ in "Copilot edited an ad into my PR"]]></title><description><![CDATA[
<p>>It seems we've uncovered several feasible answers to your question of "why would you want that?"<p>Fair enough</p>
]]></description><pubDate>Tue, 31 Mar 2026 00:53:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47581530</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47581530</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47581530</guid></item><item><title><![CDATA[New comment by yarn_ in "Copilot edited an ad into my PR"]]></title><description><![CDATA[
<p>Right, but these are bad actors, roughly speaking, so why should I expect them to disclose the fact that they're using LLMs to me?<p>If someone is repeatedly sending me slop to look at, I'll block them whether or not they tell me an LLM was involved.</p>
]]></description><pubDate>Mon, 30 Mar 2026 17:54:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47577528</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47577528</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47577528</guid></item><item><title><![CDATA[New comment by yarn_ in "Copilot edited an ad into my PR"]]></title><description><![CDATA[
<p>This mirrors my thoughts.</p>
]]></description><pubDate>Mon, 30 Mar 2026 17:36:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47577325</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47577325</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47577325</guid></item><item><title><![CDATA[New comment by yarn_ in "Copilot edited an ad into my PR"]]></title><description><![CDATA[
<p>I don't see what the "deceptive practices" would be though - you can just look at the code being submitted; there isn't really the same background truth involved as with "did the thing in this video actually happen?" or "do these commercial people actually think this?"<p>If I have a block of human code and an identical block of LLM code, then what's the difference? Especially given that in reality it is trivial to obfuscate whether it's human or LLM (in fact, usually you have to go out of your way to identify it as such).<p>I am an AI hater, but I'm just being realistic and practical here; I'm not sure how else to approach all this.</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:59:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576829</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47576829</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576829</guid></item><item><title><![CDATA[New comment by yarn_ in ""Over 1.5 million GitHub PRs have had ads injected into them by Copilot""]]></title><description><![CDATA[
<p>Yep other people pointed this out as well, this makes sense to me.</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:53:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576760</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47576760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576760</guid></item><item><title><![CDATA[New comment by yarn_ in ""Over 1.5 million GitHub PRs have had ads injected into them by Copilot""]]></title><description><![CDATA[
<p>Well, if an agent is submitting it I'm just going to reject it; that's no problem. "Just send me the prompt".</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:52:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576751</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47576751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576751</guid></item><item><title><![CDATA[New comment by yarn_ in ""Over 1.5 million GitHub PRs have had ads injected into them by Copilot""]]></title><description><![CDATA[
<p>I mean sure, in the same sense that law enforcement would be a lot easier if all the criminals just came to the police station and gave themselves up.<p>Again though, people can trivially hide the fact they used an LLM to whatever extent, so we kind of need to adjust accordingly.<p>Even if saying no to all LLM involvement seemed pertinent, it doesn't seem possible in the first place.</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:52:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576742</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47576742</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576742</guid></item><item><title><![CDATA[New comment by yarn_ in "Copilot edited an ad into my PR"]]></title><description><![CDATA[
<p>The human who submitted the PR is 100% accountable either way; that's partly my point.<p>Disclosing AI has its purposes, I agree, but it's not like we can reliably get everyone to do it anyway, which also leads me to thinking this way.</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:47:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576677</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47576677</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576677</guid></item><item><title><![CDATA[New comment by yarn_ in "Copilot edited an ad into my PR"]]></title><description><![CDATA[
<p>Sure, the point about LLM "mistakes" etc. being harder to detect is valid, although I'm not entirely sure how to compare this with hard-to-detect human mistakes. If anything, I find LLM code shortcomings often a bit easier to spot, because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc. This is where testing comes into play too, and I'm definitely reviewing your tests (obviously).<p>>Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.<p>I mean, listen: I wish with every fiber of my being that LLMs would disappear off the face of the earth for eternity, but I really don't think I'm "isolating myself from the industry" by not simply dismissing LLM code. If I find a PR to be problematic I will just cut it off; that's how I review in the first place. I'm telling some random human who submitted the code to me that I am rejecting their PR because it's low quality; I'm not sending Anthropic some long, detailed list of my feedback.<p>This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:43:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576609</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47576609</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576609</guid></item><item><title><![CDATA[New comment by yarn_ in "Copilot edited an ad into my PR"]]></title><description><![CDATA[
<p>Future analysis is a valid reason to keep it; that's a good point, and I agree with it.</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:24:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576323</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47576323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576323</guid></item><item><title><![CDATA[New comment by yarn_ in "Copilot edited an ad into my PR"]]></title><description><![CDATA[
<p>> I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.<p>Why is this, though? I'm genuinely curious. My code-quality bar doesn't change either way, so why would this be anything but distracting to my decision making?</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:21:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576285</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47576285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576285</guid></item><item><title><![CDATA[New comment by yarn_ in "Copilot edited an ad into my PR"]]></title><description><![CDATA[
<p>I agree with a lot of this, but that's kind of my point: if all these things (poor tests, non-DRY code, redundant comments, etc.) were true of a piece of purely human-written code, then I would reject it just the same, so what's the difference? Likewise, if Claude solely produced some really clean, concise, rigorously thought-through and tested piece of code with a human backer, then why wouldn't I take it?<p>As you allude to (and I agree), any non-trivial quantity of code, if SOLELY written by Claude, will probably be low-quality, but this is apparent whether I know it's AI beforehand or not.<p>I am admittedly coming at this as much more of an AI hater than many, but I still don't really get why I'd care about how much or how little you used AI as a standalone metric.<p>The people who are using AI "well" are the ones producing code where you'd never even guess it involved AI. I'm sure there are Linux kernel maintainers using Claude here and there; it's not like they expect to have their patches merged because "oh well, I just used Claude here, don't worry about that part".<p>(But also yes, of course I'm not going to talk to Claude about your PR, I will only talk to you, the human contributor, and if you don't know what's up with the PR then into the trash it goes!)</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:19:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576254</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47576254</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576254</guid></item><item><title><![CDATA[New comment by yarn_ in "Folk are getting dangerously attached to AI that always tells them they're right"]]></title><description><![CDATA[
<p>Hah, ya, exactly. I say sorry to objects if I drop them or accidentally whack them, and feel remorse. What hope do I have with an LLM who talks to me?<p>(jk f those clankers ofc)</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:11:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576133</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47576133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576133</guid></item><item><title><![CDATA[New comment by yarn_ in ""Over 1.5 million GitHub PRs have had ads injected into them by Copilot""]]></title><description><![CDATA[
<p>> Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code gets rejected because some reviewer saw it was Claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously it's not like AI can't write a single usable line of code).<p>That was my point here: it is a false signal in both directions.</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:09:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576104</link><dc:creator>yarn_</dc:creator><comments>https://news.ycombinator.com/item?id=47576104</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576104</guid></item></channel></rss>