<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: FloorEgg</title><link>https://news.ycombinator.com/user?id=FloorEgg</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 02:25:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=FloorEgg" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by FloorEgg in "LinkedIn Is Illegally Searching Your Computer"]]></title><description><![CDATA[
<p>I wonder if their motivation for doing this is to detect the LinkedIn automation tools that power all the spam messaging and connection requests?</p>
]]></description><pubDate>Thu, 02 Apr 2026 15:53:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47616151</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47616151</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47616151</guid></item><item><title><![CDATA[New comment by FloorEgg in "LinkedIn is searching your browser extensions"]]></title><description><![CDATA[
<p>In 2023 I did a deep dive into the crypto community with two main questions:<p>- do these people understand the principles of making good products?<p>- is anyone clearly working towards a microtransaction system that could replace advertising and subscription models?<p>After attending two conferences, having hundreds of conversations, and spending hours researching, my conclusion to both questions was no. The community felt more like an ouroboros. It was disappointing.<p>I don't want to pay NYT a subscription fee, I want to pay them some fraction of a cent per paragraph of article that I load in. Same goes for seconds of video on YouTube, etc.<p>Apparently I'm alone in this vision, or at least it's very rare...</p>
]]></description><pubDate>Thu, 02 Apr 2026 15:50:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47616093</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47616093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47616093</guid></item><item><title><![CDATA[New comment by FloorEgg in "Google restricting Google AI Pro/Ultra subscribers for using OpenClaw"]]></title><description><![CDATA[
<p>I wonder if this was causing the increase in the number of 429 errors I've been getting from Gemini on vertex.</p>
]]></description><pubDate>Mon, 02 Mar 2026 17:00:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47220654</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47220654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47220654</guid></item><item><title><![CDATA[New comment by FloorEgg in "I asked Claude for 37,500 random names, and it can't stop saying Marcus"]]></title><description><![CDATA[
<p>Fwiw: I didn't read the post carefully, this is just a passing comment.<p>For my own use case I was trying to test the consistency of an evaluation process and found that injecting a UUID into the system prompt (busting the cache) made a material difference.<p>Without it, resubmitting the same inputs at close time intervals (e.g. 1, 5, or 30 min) would produce very consistent evaluations. Adding the UUID would decrease consistency (showing the true evaluation consistency, not artificially improved by caching) and highlight ambiguous evaluation criteria that were causing problems.<p>So I wonder how much prompt caching is a factor here. I think these LLM providers (all of them) are caching several layers beyond just tokenization.</p>
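<p>The cache-busting idea above can be sketched roughly as follows; the helper names and the consistency metric are hypothetical illustrations, not any provider's actual API:</p>

```python
import uuid
from collections import Counter

def add_cache_buster(system_prompt: str) -> str:
    """Append a random UUID nonce so each request has a unique prefix,
    which should prevent a provider's prompt cache from matching.
    (Hypothetical helper, not a real provider API.)"""
    return f"{system_prompt}\n[nonce: {uuid.uuid4()}]"

def consistency(outputs: list[str]) -> float:
    """Fraction of runs that agree with the most common output."""
    if not outputs:
        return 0.0
    return Counter(outputs).most_common(1)[0][1] / len(outputs)
```

<p>Comparing consistency() over repeated runs of the same evaluation, with and without add_cache_buster() applied to the system prompt, separates genuine model variance from agreement that is only an artifact of caching.</p>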
]]></description><pubDate>Wed, 25 Feb 2026 22:12:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47158774</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47158774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47158774</guid></item><item><title><![CDATA[New comment by FloorEgg in "Child's Play: Tech's new generation and the end of thinking"]]></title><description><![CDATA[
<p>That's not true. You're going to have to bring some strong evidence to convince me of that. I've been around and paying attention for a few decades and what you just said contradicts everything I know.</p>
]]></description><pubDate>Sun, 22 Feb 2026 02:14:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47107423</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47107423</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47107423</guid></item><item><title><![CDATA[New comment by FloorEgg in "Child's Play: Tech's new generation and the end of thinking"]]></title><description><![CDATA[
<p>And that point is foolish no matter who is making it.</p>
]]></description><pubDate>Sat, 21 Feb 2026 19:54:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47104043</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47104043</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47104043</guid></item><item><title><![CDATA[New comment by FloorEgg in "Child's Play: Tech's new generation and the end of thinking"]]></title><description><![CDATA[
<p>Hmm. I'll think more about this.<p>It makes sense to me that a culture that values collectivistic cohesion would shy away from paradigm-shifting ideas (disruption). I also see that disruptive ideas tend to be driven by principled critical thinking rather than conventional thinking.<p>I guess on some level my assumption is that they are adjacent. Those embedded in a collectivistic culture can think critically but can run into walls within a sandbox of convention. This is how they can be great at iterative improvement and engineering but struggle with paradigm-shifting ideas.<p>I think you have a point, but there's definitely some nuance here I'm still untangling.</p>
]]></description><pubDate>Fri, 20 Feb 2026 22:57:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47095212</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47095212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47095212</guid></item><item><title><![CDATA[New comment by FloorEgg in "Child's Play: Tech's new generation and the end of thinking"]]></title><description><![CDATA[
<p>I realize that now, and feel a bit foolish for being triggered by it. It's too late for me to edit my comment now though.</p>
]]></description><pubDate>Fri, 20 Feb 2026 22:29:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094894</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47094894</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094894</guid></item><item><title><![CDATA[New comment by FloorEgg in "Child's Play: Tech's new generation and the end of thinking"]]></title><description><![CDATA[
<p>Why conflate critical thinking with individualistic values?<p>It seems you are unnecessarily muddying the water.</p>
]]></description><pubDate>Fri, 20 Feb 2026 22:02:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094628</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47094628</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094628</guid></item><item><title><![CDATA[New comment by FloorEgg in "Child's Play: Tech's new generation and the end of thinking"]]></title><description><![CDATA[
<p>For what it's worth, most of the pennies I've earned have definitely come from my ability to think and communicate well.<p>I can't help but wonder whether the person who advised you that thinking critically "wasn't for [you]" really had YOUR best interests at heart, or was a wise person.<p>I also worked jobs where I was actively discouraged from thinking critically. Those jobs made me itchy and I moved on. Every time I did, it was one step back, three steps forward. My career has been a weird zigzag like that, but it has trended up exponentially over 25 years.<p>We all have anecdotes we can share. But ask yourself this: if you get better at making decisions and communicating with other people, who is that most likely to benefit?</p>
]]></description><pubDate>Fri, 20 Feb 2026 21:48:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094472</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47094472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094472</guid></item><item><title><![CDATA[New comment by FloorEgg in "Child's Play: Tech's new generation and the end of thinking"]]></title><description><![CDATA[
<p>Yep you're right, but it's too late for me to edit my comment. The idea triggered me, and I tend to struggle with sarcasm.</p>
]]></description><pubDate>Fri, 20 Feb 2026 20:23:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47093395</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47093395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47093395</guid></item><item><title><![CDATA[New comment by FloorEgg in "Child's Play: Tech's new generation and the end of thinking"]]></title><description><![CDATA[
<p>Maybe in some extreme cases, but I wouldn't go so far as using the "con" word most of the time.<p>The hardest part of startups is probably making good decisions. To be a good VC you need to be better than founders at judging startup decisions, AND you need to be good at LP deal flow, AND you need to be good at startup deal flow. LP deal flow has to come first (otherwise there is no fund), and because of ZIRP a lot of VCs got funds up without good startup deal flow or the ability to judge startups well.<p>In other words, it's hard to be a good VC too, but for a while it was artificially easy to be a bad VC.</p>
]]></description><pubDate>Fri, 20 Feb 2026 18:57:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47092234</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47092234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47092234</guid></item><item><title><![CDATA[New comment by FloorEgg in "Child's Play: Tech's new generation and the end of thinking"]]></title><description><![CDATA[
<p>Ah, I struggle with sarcasm sometimes and I was a bit distracted while reading. I'll give it another chance.</p>
]]></description><pubDate>Fri, 20 Feb 2026 16:57:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47090533</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47090533</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47090533</guid></item><item><title><![CDATA[New comment by FloorEgg in "Child's Play: Tech's new generation and the end of thinking"]]></title><description><![CDATA[
<p>Building a successful startup is very hard, and not just in the "it's a lot of hard work" sense, but also in terms of making good decisions. For the average person who went to college and worked in some other industry or capacity, the good decisions are very counterintuitive.<p>Most VCs have no idea how to accurately judge startups on their core merit, or how to make good decisions in startups (though they may think they do), so instead they focus on things like "will this founder be able to hype up this startup and sell the next round so I can mark it up on my books".</p>
]]></description><pubDate>Fri, 20 Feb 2026 16:53:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47090493</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47090493</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47090493</guid></item><item><title><![CDATA[New comment by FloorEgg in "Child's Play: Tech's new generation and the end of thinking"]]></title><description><![CDATA[
<p>I was enjoying the article until I got to this paragraph:<p>> Individual intelligence will mean nothing once we have superhuman AI, at which point the difference between an obscenely talented giga-nerd and an ordinary six-pack-drinking bozo will be about as meaningful as the difference between any two ants. If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.<p>Believing this feels incredibly unwise to me. I think it's going to do more damage than the AI itself will.<p>To any impressionable students reading this: the most valuable and important thing you can learn is to think critically and communicate well. No AI can take that away from you, and the more powerful AI gets, the more you will be able to harness its potential. Don't let people saying this stuff discourage you from building a good life.</p>
]]></description><pubDate>Fri, 20 Feb 2026 16:45:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47090361</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47090361</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47090361</guid></item><item><title><![CDATA[The Legendary Lost Labyrinth of Egypt Has a Rescue Plan]]></title><description><![CDATA[
<p>Article URL: <a href="https://archaeologicalrescue.org/hawara/">https://archaeologicalrescue.org/hawara/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47082452">https://news.ycombinator.com/item?id=47082452</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 20 Feb 2026 01:22:31 +0000</pubDate><link>https://archaeologicalrescue.org/hawara/</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47082452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47082452</guid></item><item><title><![CDATA[New comment by FloorEgg in "Why I don't think AGI is imminent"]]></title><description><![CDATA[
<p><a href="https://archive.is/D4EYW" rel="nofollow">https://archive.is/D4EYW</a><p>For anyone seeing a 404</p>
]]></description><pubDate>Mon, 16 Feb 2026 07:45:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47032097</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47032097</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47032097</guid></item><item><title><![CDATA[New comment by FloorEgg in "America Isn't Ready for What AI Will Do to Jobs"]]></title><description><![CDATA[
<p>It's because they optimize for attention over truth.</p>
]]></description><pubDate>Sun, 15 Feb 2026 08:14:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47021968</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=47021968</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47021968</guid></item><item><title><![CDATA[New comment by FloorEgg in "Clay Christensen's Milkshake Marketing (2011)"]]></title><description><![CDATA[
<p>The thing I find most hilarious about all these companies jamming LLMs in all over the place is that they never put one where it makes the most sense to me: managing the settings.<p>They could do away with all these mazes of settings and configurations and just have a little chat thing. You pop it open and tell the AI, "Hey, I want to change the background," and it just does it. You could have a huge and complex array of settings that would be a headache to navigate in a typical form format, but a breeze with an LLM that has an API into them.<p>As an aside, another one I find hilarious is the LLM built into Google Sheets. I'll ask it, "Hey, how do I do this?", it goes "I don't know", and I'm like WTF, why is this here?</p>
]]></description><pubDate>Thu, 12 Feb 2026 20:48:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46994936</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=46994936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46994936</guid></item><item><title><![CDATA[New comment by FloorEgg in "Clay Christensen's Milkshake Marketing (2011)"]]></title><description><![CDATA[
<p>I disagree. The whole point of it is to shift the frame of reference away from the product and into the customer's job to be done. Ironically, we call it the milkshake example instead of the eating breakfast on the way to work example. But the lesson embedded in it is there all the same.<p>Personally, I found this example really helpful to wrap my head around jobs to be done theory.<p>It switched their frame from trying to innovate on the milkshake itself to the whole experience. Instead of making the milkshake chunky or thick or sweet or whatever they move the milkshake machine up closer to the door so people can be in and out faster.<p>I'm curious why you didn't see it this way. Did you miss something, or am I missing something?</p>
]]></description><pubDate>Thu, 12 Feb 2026 20:44:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46994865</link><dc:creator>FloorEgg</dc:creator><comments>https://news.ycombinator.com/item?id=46994865</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46994865</guid></item></channel></rss>