<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gabbagool</title><link>https://news.ycombinator.com/user?id=gabbagool</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 08:49:50 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gabbagool" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gabbagool in "John Ternus to become Apple CEO"]]></title><description><![CDATA[
<p>I'm genuinely curious: why do you think Apple software is terrible?</p>
]]></description><pubDate>Mon, 20 Apr 2026 21:16:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47840901</link><dc:creator>gabbagool</dc:creator><comments>https://news.ycombinator.com/item?id=47840901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47840901</guid></item><item><title><![CDATA[New comment by gabbagool in "AI may be making us think and write more alike"]]></title><description><![CDATA[
<p>The first thing I thought when I read the abstract of the underlying paper was that this sounds like "model collapse" at the societal level.<p>I don't feel super confident that we'll "soon" find ourselves in a world with no variance left in thought (would that be the net effect of total model/epistemic collapse?). But if you accept that AI could cause any loss of variance at all, it doesn't seem unreasonable to ask how much, and how quickly, it could happen.<p>All of this is by way of saying: I don't think it's wrong to ask these kinds of questions and to think deeply about the consequences of societal shifts like this.</p>
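<p>To make "how much and how quickly" concrete, here's a minimal back-of-the-envelope sketch (my own toy model, not anything from the paper): suppose each generation of text is a mixture of human and AI output, and suppose AI output retains only a fraction K of the variance of the text it was trained on. Both P and K below are made-up illustrative parameters.<pre><code># Toy model (illustrative assumptions only): each generation of text is
# a mixture of human output and AI output, where the AI output carries
# only a fraction K of the variance of the text it was trained on.
P = 0.5   # assumed share of AI-generated text per generation
K = 0.8   # assumed fraction of variance the model preserves

var = 1.0  # normalized variance of "thought" at generation 0
for gen in range(1, 11):
    # human share keeps the variance; AI share shrinks it by K
    var = (1 - P) * var + P * K * var
    print(f"generation {gen:2d}: variance = {var:.3f}")
# variance decays as (1 - P*(1-K))**gen, i.e. 0.9**gen here:
# roughly 35% of the original variance after ten generations
</code></pre>Even with these fairly mild made-up numbers, variance halves in about seven generations. The point isn't that these parameters are right, only that modest per-generation losses compound quickly.</p>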
]]></description><pubDate>Tue, 07 Apr 2026 16:41:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47678005</link><dc:creator>gabbagool</dc:creator><comments>https://news.ycombinator.com/item?id=47678005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47678005</guid></item><item><title><![CDATA[New comment by gabbagool in "LinkedIn is searching your browser extensions"]]></title><description><![CDATA[
<p>Just because someone lets the electrician (LinkedIn) into their home (browser) doesn't mean the electrician can do whatever they want so long as it isn't expressly prohibited. If the electrician wants to rifle through my desk drawers, they should ask for permission, and I will politely tell them to leave.</p>
]]></description><pubDate>Thu, 02 Apr 2026 16:29:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47616650</link><dc:creator>gabbagool</dc:creator><comments>https://news.ycombinator.com/item?id=47616650</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47616650</guid></item><item><title><![CDATA[New comment by gabbagool in "The Atlantic Quantum team is joining Google"]]></title><description><![CDATA[
<p>To this day, I wonder if Google knew that they couldn't be the ones to unleash AI unto the world. They clearly had the wherewithal and the expertise to do it (Vaswani et al., 2017), but they were under so much antitrust pressure at the time that it seemed inconceivable they could be the ones to introduce such a polarizing technology. What kind of firestorm would have rained down on them if they had been first?<p>Or, you might think, if Google had the technology and knew how to turn it into a trillion-dollar product, it's beyond ridiculous to think they would just hand the win to someone else.</p>
]]></description><pubDate>Thu, 02 Oct 2025 18:57:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45453850</link><dc:creator>gabbagool</dc:creator><comments>https://news.ycombinator.com/item?id=45453850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45453850</guid></item></channel></rss>