<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: keeda</title><link>https://news.ycombinator.com/user?id=keeda</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 17:12:46 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=keeda" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by keeda in "Stanford report highlights growing disconnect between AI insiders and everyone"]]></title><description><![CDATA[
<p>Not really managers; I would put the new role more in the senior engineer / architect category. Those roles still deal with deeply technical things like design, architecture, problem decomposition, research, domain expertise, code review, and collaborating with technical peers -- all of which (people) managers don't typically do.<p>If you ever wanted to climb the senior technical ladder, this is now the quickest way to experience it. Except instead of other people, you get to work with agents -- a very different experience, but one that requires largely the same skills.<p>So yes, your job is not what it was before, but with career growth it typically was not anyway.</p>
]]></description><pubDate>Tue, 14 Apr 2026 02:16:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47760486</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47760486</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47760486</guid></item><item><title><![CDATA[New comment by keeda in "If you started a company two years ago, many assumptions are no longer true"]]></title><description><![CDATA[
<p>This comment is getting punished for the incorrect timeline (I would know, I've been harping on about AI getting good at coding for ~2 years now!) but I do think it is directionally correct. Just over 3 years ago, (publicly available) AI could not write code at all. Today it can write whole modules and project scaffoldings and even entire apps, not to mention all the other stuff agents can do today. Considering I didn't think I'd see this kind of stuff in my lifetime, this is a blink of an eye.<p>Even if a lot of the improvements we see today are due to things outside the models themselves -- tools, harnesses, agents, skills, availability of compute, better understanding of how to use AI, etc. -- things are changing very quickly overall. It would be a mistake to just focus on one or two things, like models or benchmarks, and ignore everything else that is changing in the ecosystem.</p>
]]></description><pubDate>Tue, 14 Apr 2026 01:35:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47760208</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47760208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47760208</guid></item><item><title><![CDATA[New comment by keeda in "Study found that young adults have grown less hopeful and more angry about AI"]]></title><description><![CDATA[
<p>1. This is not speculation. Individuals and small teams are already developing and deploying ambitious projects that previously required entire teams. Entire open source projects have been rewritten from scratch and relicensed by individuals working with an AI. People have posted GitHub repos where you can go investigate the commit history. You've been on HN long enough to see the comments and stories. If you're still asking for proof, well, that says something.<p>2. Your stance is equivalent to "show me concrete evidence that the advent of the automobile will have a positive impact on horse-drawn buggy coachmen," while I'm saying, "the automobile is coming, and we'd all better get off our high horses and learn how to drive."</p>
]]></description><pubDate>Sun, 12 Apr 2026 02:47:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47735742</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47735742</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47735742</guid></item><item><title><![CDATA[New comment by keeda in "Molotov cocktail is hurled at home of Sam Altman"]]></title><description><![CDATA[
<p>Once in a while we get to see concrete numbers for some of them, e.g. Meta spent $27M+ in one year on Zuck's security, which is way more than the other CEOs: <a href="https://fortune.com/2025/08/16/mark-zuckerberg-meta-security-detail-costs-apple-nvidia-microsoft-amazon-alphabet-ceos/" rel="nofollow">https://fortune.com/2025/08/16/mark-zuckerberg-meta-security...</a></p>
]]></description><pubDate>Fri, 10 Apr 2026 23:44:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47725376</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47725376</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47725376</guid></item><item><title><![CDATA[New comment by keeda in "Study found that young adults have grown less hopeful and more angry about AI"]]></title><description><![CDATA[
<p>The power of AI is that it amplifies individual capabilities. So the same aspect that lets employers reduce their headcount also lets individuals start ambitious projects that would previously have required an entire team... and hence a significant amount of funding. The moment you need money, the people who provide that capital hold a lot of power and influence.<p>But now you don't need their money, and so the capital class loses its power over you.<p>As an example, I'm iterating on a niche product based on computer vision -- something I had no background in when I started -- that in the past would have taken a team of 2 - 3 and at least a semester or two of an advanced course in computer vision. Instead, I'm solo bootstrapping this project.<p>There are multiple accounts like mine, and you can find many comments on HN and other forums to this effect. Now, I know this is a very tough path for most people because, well, now everybody needs to be an entrepreneur, but a path exists.<p>AI is a double-edged sword, and more people need to become aware of the edge that is available to us.</p>
]]></description><pubDate>Fri, 10 Apr 2026 23:22:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47725099</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47725099</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47725099</guid></item><item><title><![CDATA[New comment by keeda in "Study found that young adults have grown less hopeful and more angry about AI"]]></title><description><![CDATA[
<p>My personal take, which seems to be consistent with what these folks are saying, is "OMG there's this huge radioactive asteroid that's going to flatten our world, but its gamma rays also give <i>us</i> weird superpowers, here are some ways to harness those..."<p>I'm a bit more optimistic about democratized access to AI. Even today's weaker open source/weight models are plenty powerful enough to supercharge our individual capabilities, and based on current trends, they won't be more than 3 - 6 months behind the frontier models. This may not bode well for the AI labs because their moat is always evaporating, but it's a huge boon to us plebs.</p>
]]></description><pubDate>Fri, 10 Apr 2026 01:29:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47712474</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47712474</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47712474</guid></item><item><title><![CDATA[New comment by keeda in "Study found that young adults have grown less hopeful and more angry about AI"]]></title><description><![CDATA[
<p>My take is "simonw and his retiree friends" spend a lot of their time exploring this disruptive new technology and sharing their learnings (for free!) so that everybody can leverage it too... and yet so many people see that as something bad rather than an opportunity to learn.<p>Radical changes bring radical opportunities too, so "having the time of their lives" is not necessarily incompatible with "adapting to profound disruption."<p>Consider that the traits that make them optimistic about this technology are exactly the traits required to navigate this Brave New World.</p>
]]></description><pubDate>Thu, 09 Apr 2026 20:48:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709809</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47709809</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709809</guid></item><item><title><![CDATA[New comment by keeda in "Study found that young adults have grown less hopeful and more angry about AI"]]></title><description><![CDATA[
<p>And there should be a daily reminder that as long as we live in a Capitalist society, what befell the Luddites will also befall those who try to resist an economic force of this magnitude.<p>Would you rather feel justified in the knowledge that the Luddites were principally right and resist, or would you rather learn the lesson of their fate and adapt?<p>How would you even resist? Say the entire US population pushes back and gets protectionist regulations passed; there will always be hungry people just a 100ms ping away willing to outcompete you using AI.<p>Really, at this point there are only two choices: change society to move beyond Capitalism, or adapt to the new economic reality. Either choice is valid, and I suspect eventually one will lead to the other, but there is no putting the genie back in the bottle.</p>
]]></description><pubDate>Thu, 09 Apr 2026 20:20:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709308</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47709308</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709308</guid></item><item><title><![CDATA[New comment by keeda in "Ask HN: Why people still use GCP and AWS?"]]></title><description><![CDATA[
<p>I think a critical limitation is that the database offering is designed for a very specific philosophy (which, honestly, the rest of the platform is too) and is not suitable for general-purpose use. The 10GB limit per database is unsuitable for even trivial use-cases. I was going to use it for a new prototype (because I already use the platform for other stuff), and I realized the DB limitations could quickly become a blocker even for a prototype.<p>If I understand correctly, the primary philosophy of the platform is edge computing with dedicated infra (workers, DBs, etc.) per user. While that may be an under-leveraged niche and CloudFlare excels at it, it is still a niche.</p>
]]></description><pubDate>Wed, 08 Apr 2026 21:09:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47696317</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47696317</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47696317</guid></item><item><title><![CDATA[New comment by keeda in "The Claude Code Leak"]]></title><description><![CDATA[
<p>Ah gotcha, that makes complete sense.<p>On that open/close aspect, however, this case is interesting because the leaked code was for a product that was shipped to users' machines in the wild. I'd say that while Anthropic, to your point, absolutely does not want this code leaked, they'd also know very well that any software released this way cannot be considered a competitive advantage for long.<p>Like, the ability of LLMs to reverse engineer software is well known by now. In fact this blog describes how, even before the leak, they reversed the CLI to patch bugs that Anthropic wouldn't! <a href="https://dev.to/kolkov/we-reverse-engineered-12-versions-of-claude-code-then-it-leaked-its-own-source-code-pij" rel="nofollow">https://dev.to/kolkov/we-reverse-engineered-12-versions-of-c...</a><p>Which may be why other tools in this space have been open sourced. Yet Claude Code hasn't been, so clearly Anthropic wants to protect some rights there. I am very curious about these labs' decision processes when considering what functionality to put in the CLI versus on the servers. That could be a hint about their IP strategy and how they're thinking of moats.</p>
]]></description><pubDate>Sat, 04 Apr 2026 18:27:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47641826</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47641826</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47641826</guid></item><item><title><![CDATA[New comment by keeda in "The Claude Code Leak"]]></title><description><![CDATA[
<p>I don't see how that is relevant? I thought the point under discussion was that code does not matter until PMF, and that this would be an illustrative example because there <i>was</i> no code until PMF.<p>Like, from the users' perspectives they were interacting via text messages both before and after PMF, until later down the line they were migrated to an app. At this point, the change was largely aesthetic, the core idea was the same.<p>Maybe we're using different definitions of terms like "PMF" here?</p>
]]></description><pubDate>Thu, 02 Apr 2026 22:50:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47621204</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47621204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47621204</guid></item><item><title><![CDATA[New comment by keeda in "The Claude Code Leak"]]></title><description><![CDATA[
<p>But the manual workflow <i>was</i> the step-by-step recipe the founders iterated on until they got traction; the product that came later was just an embodiment of that workflow as code.</p>
]]></description><pubDate>Thu, 02 Apr 2026 19:24:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47619004</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47619004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47619004</guid></item><item><title><![CDATA[New comment by keeda in "LinkedIn is illegally searching your computer"]]></title><description><![CDATA[
<p>LinkedIn actually sued HiQ Labs, which scraped LinkedIn to do exactly this (and this extension scanning is likely a defense mechanism against similar attacks):<p><a href="https://epic.org/documents/linkedin-corp-v-hiq-labs-inc/" rel="nofollow">https://epic.org/documents/linkedin-corp-v-hiq-labs-inc/</a><p><i>> HiQ has created two specific data products targeted at employers: (1) “Keeper,” which informs employers which of their employees are at “risk” of being recruited by competitors; and... </i><p>My hunch is that HiQ simply looked for spikes in activity on LinkedIn as a signal for a job hunt: <a href="https://news.ycombinator.com/item?id=47566893">https://news.ycombinator.com/item?id=47566893</a><p>In any case, this lawsuit was discussed a few times on HN at the time, and IIRC there was a fair bit of support for allowing free scraping of "public information." Interesting how the sentiment here has turned these days...</p>
]]></description><pubDate>Thu, 02 Apr 2026 18:51:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47618617</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47618617</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47618617</guid></item><item><title><![CDATA[New comment by keeda in "The Claude Code Leak"]]></title><description><![CDATA[
<p>I'm probably missing something. Pre-PMF code by definition is not yet proven to solve a specific pain point, so why does it matter?<p>I think the crux here is the OP means the "quality of code" doesn't matter until PMF, only the utility matters (to the extent it helps you find PMF), in which case you're both in violent agreement.<p>But even then you don't need code. I briefly worked for a startup that found PMF by calling people, sending text messages, creating social media posts, measuring engagement to create reports, and sending invoices... all <i>manually</i>. The "code" as such was a bunch of templates in a doc for each of those. Once they actually started getting paid they moved to writing code.</p>
]]></description><pubDate>Thu, 02 Apr 2026 18:23:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47618203</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47618203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47618203</guid></item><item><title><![CDATA[New comment by keeda in "SpaceX files to go public"]]></title><description><![CDATA[
<p>Their technical accomplishments are doubtless notable, but does the expected business growth justify this valuation? Honest question: do we really need to send so many things up there that reducing the cost to orbit by 100x will trigger Jevons paradox and lead to 100x more launches?<p>I suppose "data centers in space" is the current answer, but again, I'm suspicious about its feasibility.<p>Barring that, until we have another "killer app" besides Starlink, like a giant orbital space station or a moonbase, I'm curious whether there is enough demand.</p>
]]></description><pubDate>Thu, 02 Apr 2026 02:46:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47609405</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47609405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47609405</guid></item><item><title><![CDATA[New comment by keeda in "The OpenAI graveyard: All the deals and products that haven't happened"]]></title><description><![CDATA[
<p>These days, Google AI overviews regularly add a qualifier to the effect of "... according to this comment on Reddit <link>"<p>That's basically a UX trick to entirely sidestep being held accountable for the results, but it seems sufficient to notify the user of the answer's provenance so they can adjust their grains of salt.</p>
]]></description><pubDate>Thu, 02 Apr 2026 01:44:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47609063</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47609063</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47609063</guid></item><item><title><![CDATA[New comment by keeda in "AI has suddenly become more useful to open-source developers"]]></title><description><![CDATA[
<p>I know appeal to authority can be a fallacy, but there is something to be said for appeal to a preponderance of concurring authorities. Multiple notable personalities known for their technical chops have been endorsing AI-assisted coding, so it's hard to argue that every one of them lowered their standards.<p>It's been fun seeing the cognitive dissonance in anti-LLM tech circles as the technical giants they idolized, from Torvalds through Carmack all the way up to Knuth, say something positive about AI, let alone sing its praises!</p>
]]></description><pubDate>Wed, 01 Apr 2026 19:27:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47605382</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47605382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47605382</guid></item><item><title><![CDATA[New comment by keeda in "I Quit. The Clankers Won"]]></title><description><![CDATA[
<p>That's true, but generally that applies to purely abstract mathematics. If the mathematics is truly abstract, no form of IP anywhere would protect it. That has always been (rightfully, IMO) the realm of scientific publications.<p>Otherwise it's straightforward to say that the mathematics is being applied to achieve a practical goal via execution on a computer. (You'll see the term "non-transitory computer-readable media" a lot in claims.) You now have a method and a system. Now, case law frequently changes things -- for example, the "Alice" decision in the US made it much harder to patent things done simply "on a computer" -- but the underlying principle holds.<p>I'd also guess that if your approach makes something faster or cheaper, it should be possible to show it is non-abstract, because resources like time and cost are not abstract quantities.<p>Standard disclaimer: I'm not a lawyer! I've just worked with patents extensively.</p>
]]></description><pubDate>Wed, 01 Apr 2026 17:51:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47604179</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47604179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47604179</guid></item><item><title><![CDATA[New comment by keeda in "I quit. The clankers won"]]></title><description><![CDATA[
<p>I think what you're looking for is patents. I've said it before, but I think patents are the only protection left for innovative software and "the little guy." It always was, really, but it's blindingly apparent today.<p>Unfortunately, that would be considered heresy on forums like HN, and people will continue to rail against AI and whatever it's causing <i>and</i> patents, instead of realizing that one is the only available leverage against the other.</p>
]]></description><pubDate>Wed, 01 Apr 2026 17:07:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47603606</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47603606</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47603606</guid></item><item><title><![CDATA[New comment by keeda in "I quit. The clankers won"]]></title><description><![CDATA[
<p>If we know the outcome of that code, such as whether it caused bugs or data corruption or a crappy UX or tech debt -- which is potentially available in subsequent PR commit messages -- it's still valuable training data.<p>Probably even more valuable than code that just worked, because evidently we have enough of that and AI code still has issues.</p>
]]></description><pubDate>Wed, 01 Apr 2026 16:50:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47603374</link><dc:creator>keeda</dc:creator><comments>https://news.ycombinator.com/item?id=47603374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47603374</guid></item></channel></rss>