<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: vga1</title><link>https://news.ycombinator.com/user?id=vga1</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 08:45:59 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=vga1" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by vga1 in "Uncle Bob: It's Over"]]></title><description><![CDATA[
<p>Using LLMs is so obviously great from my point of view that when I read about the skepticism, some of which even seems scientifically proven, I get a funny déjà vu feeling.<p>I started using Linux back when most people were super excited about Windows 98. This is kinda what it feels like now.<p>I'm forced to conclude that either I'm having the worst cognitive dissonance of my life or perhaps I'm just that much better at using LLMs.</p>
]]></description><pubDate>Mon, 04 May 2026 04:05:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=48004510</link><dc:creator>vga1</dc:creator><comments>https://news.ycombinator.com/item?id=48004510</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48004510</guid></item><item><title><![CDATA[New comment by vga1 in "Big Tech will spend nearly $700B on AI in 2026. No one knows where buildout ends"]]></title><description><![CDATA[
<p>Why would it end? It will probably plateau and normalize, though.</p>
]]></description><pubDate>Sun, 03 May 2026 08:07:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47994614</link><dc:creator>vga1</dc:creator><comments>https://news.ycombinator.com/item?id=47994614</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47994614</guid></item><item><title><![CDATA[New comment by vga1 in "Uber torches 2026 AI budget on Claude Code in four months"]]></title><description><![CDATA[
<p>Resistance to technological change has been a thing since farming was invented. Socrates thought that writing would ruin everyone's memory, and that people who just rely on the written word would appear knowledgeable while actually knowing nothing.<p>The only difference is that this is happening to <i>us</i>.</p>
]]></description><pubDate>Fri, 01 May 2026 17:28:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47977492</link><dc:creator>vga1</dc:creator><comments>https://news.ycombinator.com/item?id=47977492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47977492</guid></item><item><title><![CDATA[New comment by vga1 in "The Zig project's rationale for their anti-AI contribution policy"]]></title><description><![CDATA[
<p>>because if you could understand it, then you could write it yourself.<p>I accept most things you said there as valid opinions, but this is where the logic goes wrong.<p>I use LLMs to get more of the only resource (now that my basic and mid-level needs are largely met) that ultimately matters: time. That means I waste far less time in front of the computer typing code, and spend far more time doing more useful things, like hobbies, art, and being with my children.<p>But as I said before, every project is obviously allowed to make its own rules, and contributors should obey those rules. There are plenty of projects that welcome AI deniers and plenty that prefer AI aficionados.<p>At least for now. My belief is that one of those groups will fade away like horseback riding did, but we'll see. Perhaps you have heard the famous stages quoted by many different people in different forms: first an idea is ridiculed, then it's attacked, then it's accepted. Some open-source communities have clearly entered the attacking phase in the last year or so.</p>
]]></description><pubDate>Thu, 30 Apr 2026 16:25:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47964845</link><dc:creator>vga1</dc:creator><comments>https://news.ycombinator.com/item?id=47964845</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47964845</guid></item><item><title><![CDATA[New comment by vga1 in "The Zig project's rationale for their anti-AI contribution policy"]]></title><description><![CDATA[
<p>It seems to me that people might be arguing from conflicting hidden premises here. "AI coding" is a spectrum: it could mean something as simple as letting the LLM proofread your changes and then acting on those with your own human brain, or it could mean just telling the agent what you want and letting it rip and tear until it is done.<p>If I do the latter and submit a PR to something like Zig, I'll certainly be caught doing it and rightfully chastised. If I do the former, my PR will be better without anybody besides myself having any way of knowing how it got better. When I contribute to open source these days, I probably do something in between.<p>Blanket banning all of these seems like a bad idea to me. It actively gates people like me from contributing, because I respect these people and projects that much. It would feel like I was doing something they find disgusting if my work has touched an LLM, and I obviously don't want to do that to people I respect. But it's fine; there are plenty of things to do in the world even when some doors are closed.<p>I do not presume to have any say over the Zig project's well-argued decisions[0] -- I'm not really even their user, let alone someone important like a contributor. Their point about preferring human contact is superb, frankly. Human contact is probably a different kind of problem in an open-source project staffed with a lot of remote workers, where it is scarce.<p>[0] <a href="https://kristoff.it/blog/contributor-poker-and-ai/" rel="nofollow">https://kristoff.it/blog/contributor-poker-and-ai/</a></p>
]]></description><pubDate>Thu, 30 Apr 2026 09:35:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47960142</link><dc:creator>vga1</dc:creator><comments>https://news.ycombinator.com/item?id=47960142</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47960142</guid></item><item><title><![CDATA[New comment by vga1 in "The Zig project's rationale for their anti-AI contribution policy"]]></title><description><![CDATA[
<p>Because LLM models are obviously much more than the sum of their parts.</p>
]]></description><pubDate>Thu, 30 Apr 2026 09:32:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47960126</link><dc:creator>vga1</dc:creator><comments>https://news.ycombinator.com/item?id=47960126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47960126</guid></item><item><title><![CDATA[New comment by vga1 in "The Zig project's rationale for their anti-AI contribution policy"]]></title><description><![CDATA[
<p>I think this ignores the amount of work needed to make LLM contributions high quality. It's much less work than a purely human contribution, but it's definitely not zero.<p>So centralizing that common work is a benefit of open source just as much with LLMs as it was before.</p>
]]></description><pubDate>Thu, 30 Apr 2026 06:30:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958945</link><dc:creator>vga1</dc:creator><comments>https://news.ycombinator.com/item?id=47958945</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958945</guid></item><item><title><![CDATA[New comment by vga1 in "The Zig project's rationale for their anti-AI contribution policy"]]></title><description><![CDATA[
<p>How would you differentiate a 3000-line LLM commit made by the best models and good AI processes from a 3000-line commit made by the best human developer?<p><i>edit</i> Okay, I set the bar too high here with "best human developer" and the vague "good AI processes". My bad. Yes, LLMs are not quite there yet.</p>
]]></description><pubDate>Thu, 30 Apr 2026 06:28:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47958929</link><dc:creator>vga1</dc:creator><comments>https://news.ycombinator.com/item?id=47958929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47958929</guid></item><item><title><![CDATA[New comment by vga1 in "Google co-founder Sergey Brin says he fled socialism, rips billionaire tax"]]></title><description><![CDATA[
<p>>They are far smarter than I am<p>"Contradictions do not exist. Whenever you think you are facing a contradiction, check your premises. You will find that one of them is wrong." - Ayn Rand</p>
]]></description><pubDate>Wed, 29 Apr 2026 05:56:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47944647</link><dc:creator>vga1</dc:creator><comments>https://news.ycombinator.com/item?id=47944647</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47944647</guid></item></channel></rss>