<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: davebren</title><link>https://news.ycombinator.com/user?id=davebren</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 09:39:53 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=davebren" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by davebren in "Native Americans had dice 12k years ago"]]></title><description><![CDATA[
<p>>Or do you feel personally offended when Native Americans are in the news?<p>Why do you intentionally try not to understand the point someone makes and then come up with your own negative fantasy about them?</p>
]]></description><pubDate>Wed, 08 Apr 2026 15:09:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47691312</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47691312</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47691312</guid></item><item><title><![CDATA[New comment by davebren in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>People who are good liars are good at it because they are lying to themselves at the same time. Even if they can initially compartmentalize, I believe after a while it gets to them too.</p>
]]></description><pubDate>Tue, 07 Apr 2026 20:39:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47681055</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47681055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47681055</guid></item><item><title><![CDATA[New comment by davebren in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>I always used Apache. I think AGPL would make it less useful for the honest people, and the AI companies would find a way around it. Companies in foreign countries would still get it, and then the companies here could get it from them, or something like that. They're probably already trying to figure out a way to obfuscate when the LLM directly copies from a licensed codebase. And there's that site that rewrites GPL repos so they can be used commercially.
On top of all that, I don't know if it would even be possible to sue companies that have been given de facto legal immunity to steal IP.</p>
]]></description><pubDate>Tue, 07 Apr 2026 20:34:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47681008</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47681008</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47681008</guid></item><item><title><![CDATA[New comment by davebren in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>I'm not seeing how the benefits have outweighed the negatives at this point. Spam, scams, porn, being inundated with slop, people losing their skills and getting dumber, mass surveillance...<p>Is that worth possibly, maybe, saving some time programming, but then not gaining the knowledge you would have if you did it yourself, knowledge that could be built on in the future?<p>I don't see technological advancement as good in itself if morality is in decline.</p>
]]></description><pubDate>Tue, 07 Apr 2026 13:57:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47675478</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47675478</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675478</guid></item><item><title><![CDATA[New comment by davebren in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>> Tips, tricks, life hacks and other expert techniques will once again be jealously guarded from the prying eyes of the LLM who would steal their competitive advantage & replicate it at scale<p>I've already started thinking this way; there's stuff I would have open sourced in the past but no longer will, because I know it would get trained on. I'm not sure of any way I can share it with humans and only humans. If I let the LLMs have the UI patterns and libraries I've developed, it would dilute my IP, like it has diluted Studio Ghibli's art style.</p>
]]></description><pubDate>Tue, 07 Apr 2026 13:45:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47675305</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47675305</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675305</guid></item><item><title><![CDATA[New comment by davebren in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>There are plenty of people communicating more with LLMs than with humans right now. Of course it's going to have an effect, because our language and thought patterns are extremely adaptive to our environment.
The communication system we are born with is extremely bare-bones and general, so that it can absorb whatever language and culture we are born into.</p>
]]></description><pubDate>Tue, 07 Apr 2026 13:29:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47675103</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47675103</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675103</guid></item><item><title><![CDATA[New comment by davebren in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>This is my current fear: even if I choose not to use it, if everyone around me does, their way of speaking is all going to become more chatbot-esque. It already seems to be transferring its false sense of confidence, and its lack of reasoning ability, to people. The corporate demand to participate in this is something I can't meet; the cost is our humanity.<p>I guess one hope for luddites is that we can stay tethered by reading pre-LLM books and other content.</p>
]]></description><pubDate>Tue, 07 Apr 2026 13:20:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47674964</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47674964</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47674964</guid></item><item><title><![CDATA[New comment by davebren in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>They grew up in a tribe that hasn't discovered numbers yet.</p>
]]></description><pubDate>Tue, 07 Apr 2026 13:12:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47674844</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47674844</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47674844</guid></item><item><title><![CDATA[New comment by davebren in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>I get what you're saying, but psychosis is a very real thing that humans can fall into, and I experienced it myself once.<p>Humility is the real cure, and LLMs are in a sense specifically designed to steer away from humility and towards aggrandizement, convincing regular people that they've solved fundamental problems in physics. It gives everyone access to cult followers in their pocket, if they're so inclined.</p>
]]></description><pubDate>Tue, 07 Apr 2026 13:08:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47674789</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47674789</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47674789</guid></item><item><title><![CDATA[New comment by davebren in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>I was listening to one of Altman's more recent interviews and it sounded like he himself has LLM induced psychosis.</p>
]]></description><pubDate>Tue, 07 Apr 2026 12:38:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47674365</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47674365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47674365</guid></item><item><title><![CDATA[New comment by davebren in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>The point is that LLMs could be a local minimum we are now economically stuck in until the hype wears off.</p>
]]></description><pubDate>Tue, 07 Apr 2026 12:34:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47674326</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47674326</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47674326</guid></item><item><title><![CDATA[New comment by davebren in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>The AI CEOs and most of their employees are in the same place as that guy. They're just in a more professional context and will be careful not to let their delusions of grandeur look too insane.<p>I remember watching the fitness function improve while my neural net learned to recognize characters for a school project, and there was something about it that felt powerful. I guess we've always had that with any machine we imbue with some sort of decision-making "intelligence", but mix that with taking psychedelics and you have an interesting cocktail.</p>
]]></description><pubDate>Tue, 07 Apr 2026 04:37:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670815</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47670815</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670815</guid></item><item><title><![CDATA[New comment by davebren in "Issue: Claude Code is unusable for complex engineering tasks with Feb updates"]]></title><description><![CDATA[
<p>Becoming dependent on those platforms was bad too, but this feels like another level. Making your entire engineering team dependent on a shady company with an apocalyptic fantasy as their business plan just seems insane.</p>
]]></description><pubDate>Tue, 07 Apr 2026 04:24:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670741</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47670741</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670741</guid></item><item><title><![CDATA[New comment by davebren in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>Pre-training is not a good term if you are trying to compare it to LLM pre-training. Closer would be the model's architecture and learning algorithms, which have been designed through decades of PhD research, and my point is that the differences are still much greater than the similarities.</p>
]]></description><pubDate>Tue, 07 Apr 2026 04:10:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670662</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47670662</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670662</guid></item><item><title><![CDATA[New comment by davebren in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>Yeah the whole fearmongering is clearly just marketing at this point. Your LLM isn't going to suddenly gain sentience and destroy humanity if it has 10x more parameters or trains on 10x more reddit threads.<p>I'm not even sure we're any closer to AGI than we were before LLMs. It's getting more funding and research, but none of the research seems very innovative. And now it's probably much more difficult to get funding for anything that's not a transformer model.</p>
]]></description><pubDate>Tue, 07 Apr 2026 03:46:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670522</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47670522</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670522</guid></item><item><title><![CDATA[New comment by davebren in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>What value does he add anyway, as a delusional cult leader whom most of the people around him characterize as a sociopath?
Is it just his ability to lie and create fear-hype?<p>It's not like he had anything to do with the technical achievements, except convincing the engineers that they were doing something valuable, but the cat is out of the bag on that.</p>
]]></description><pubDate>Tue, 07 Apr 2026 03:29:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670407</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47670407</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670407</guid></item><item><title><![CDATA[New comment by davebren in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>Not just the greed. The whole "AI is so dangerous that we must be the ones to build it to save humanity" routine, and then gaslighting yourself and everyone around you into believing that your language model is AGI. This is some weird, detached-from-reality cult behavior.</p>
]]></description><pubDate>Tue, 07 Apr 2026 03:22:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670368</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47670368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670368</guid></item><item><title><![CDATA[New comment by davebren in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>This is the epitome of learned helplessness: that you need a neuroscience paper to tell you what thinking and knowledge are, even though you experience them directly all the time, and you can't tell that an LLM doesn't have them.
Something is extremely evil about these ideologies that teach people that they are NPCs.</p>
]]></description><pubDate>Tue, 07 Apr 2026 03:15:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670327</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47670327</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670327</guid></item><item><title><![CDATA[New comment by davebren in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>Yes, we do. Humans share the statistical-association ability that LLMs possess, but we also have conscious meaning and understanding. This is a difference in kind, and it means we can generalize beyond the statistical pattern associations we've extracted from data, so we don't require trillions of examples to develop knowledge.<p>Theoretically, a human could sit alone in a dark room, knowing nothing of mathematics, and come up with numbers, arithmetic, algebra, etc...<p>They don't need to read every math textbook, paper, and online discussion in existence.</p>
]]></description><pubDate>Tue, 07 Apr 2026 03:06:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670280</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47670280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670280</guid></item><item><title><![CDATA[New comment by davebren in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>Like... a calculator?</p>
]]></description><pubDate>Tue, 07 Apr 2026 02:45:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670156</link><dc:creator>davebren</dc:creator><comments>https://news.ycombinator.com/item?id=47670156</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670156</guid></item></channel></rss>