<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: crakhamster01</title><link>https://news.ycombinator.com/user?id=crakhamster01</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 04:15:42 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=crakhamster01" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by crakhamster01 in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>> One thing we know for sure is LLMs write code differently than we do.<p>Kind of. One thing we <i>do</i> know for certain is that LLMs degrade in performance as context length grows. You will undoubtedly get worse results if the LLM has to reason through long functions and high-LOC files. You might get to a working state eventually, but only after burning many more tokens than if given the right amount of context.<p>> The worst outcome I can imagine would be forcing them to code exactly like we do.<p>You're treating "code smells" like cyclomatic complexity as mere stylistic preference, but these best practices are backed by research. They became popular because teams across the industry analyzed code responsible for bugs/SEVs and consistently found a high correlation between these metrics and shipped defects.<p>Yes, coding standards should evolve, but... that's not saying anything new. We've been iterating on them for decades now.<p>I think the worst outcome would be throwing out our collective wisdom because the AI labs tell us to. It might be good to question who stands to benefit when LLMs aren't leveraged efficiently.</p>
]]></description><pubDate>Tue, 31 Mar 2026 18:11:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47591312</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=47591312</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47591312</guid></item><item><title><![CDATA[New comment by crakhamster01 in "John Carmack about open source and anti-AI activists"]]></title><description><![CDATA[
<p>I can maybe see this argument being valid for OSS - as Carmack says, by nature it should be "no strings attached".<p>I don't think that's all anti-AI activists care about though. Honestly, I would say most activists don't talk about the use of OSS? The most prominent anti-AI sentiment seems to come from creatives. Artists, musicians, designers, etc.<p>They didn't publish their works with the same notion as OSS developers, but those works were scraped up by corporations all the same. In many cases, these works were protected by copyright law and used anyway.<p>To me that feels like the equivalent of training on "private repos", which Carmack would call a violation [1].<p>[1] <a href="https://x.com/ID_AA_Carmack/status/2031769354401091988" rel="nofollow">https://x.com/ID_AA_Carmack/status/2031769354401091988</a></p>
]]></description><pubDate>Sat, 14 Mar 2026 14:02:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47376796</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=47376796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47376796</guid></item><item><title><![CDATA[New comment by crakhamster01 in "No, it doesn't cost Anthropic $5k per Claude Code user"]]></title><description><![CDATA[
<p>The wave of LLM-style writing taking over the internet is definitely a bit scary. It feels like a similar problem to GenAI code/style eventually dominating the data that LLMs are trained on.<p>But luckily there's a large body of well-written books/blogs/talks/speeches out there. Also, anecdotally, I feel like a lot of the "bad writing" I see online these days is usually in the tech sphere.</p>
]]></description><pubDate>Tue, 10 Mar 2026 15:08:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47324303</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=47324303</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47324303</guid></item><item><title><![CDATA[New comment by crakhamster01 in "No, it doesn't cost Anthropic $5k per Claude Code user"]]></title><description><![CDATA[
<p>I had a similar reaction to OP for a different post a few weeks back - some analysis of the health economy, I think. Initially, as I was reading, I thought - "Wow, I've never read a financial article written so clearly". Everything in layman's terms. But as I continued to read, I began to notice the LLM-isms. Oversimplified concepts, "the honest truth", "like X for Y", etc.<p>Maybe the common factor here is not having deep/sufficient knowledge of the topic being discussed? For the article I mentioned, I feel like I was less focused on the strength of the writing and more on just understanding the content.<p>LLMs are very good at simplifying concepts and meeting the reader at their level. Personally, I subscribe to the philosophy of - "if you couldn't be bothered to write it, I shouldn't bother to read it".</p>
]]></description><pubDate>Tue, 10 Mar 2026 07:33:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47320123</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=47320123</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47320123</guid></item><item><title><![CDATA[New comment by crakhamster01 in "AI is not a coworker, it's an exoskeleton"]]></title><description><![CDATA[
<p>> taste scales now.<p>Not having taste also scales now, and the majority of people like to think they're above average.<p>Before AI, friction to create was an implicit filter. It meant "good ideas" were often short-lived because the individual lacked conviction. The ideas that saw the light of day were sharpened through weeks of hard consideration and at least worth a look.<p>Now, anyone who can form mildly coherent thoughts can ship an app. Even if there are newly empowered unicorns, rapidly shipping incredible products, what are the odds we'll find them amongst a sea of slop?</p>
]]></description><pubDate>Fri, 20 Feb 2026 15:47:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47089510</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=47089510</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47089510</guid></item><item><title><![CDATA[New comment by crakhamster01 in "Why senior engineers let bad projects fail"]]></title><description><![CDATA[
<p>I think this advice is pretty apt for small to medium sized companies. We're all invested in the company succeeding, but you don't want to become known as the person who always says "no".<p>At large companies, I've rarely found a reason to speak out on a project. Unless it has a considerable effect on my team/work (read: peace of mind), it just doesn't make sense to be the person casting doubt. There's not much ROI for being "right".<p>If you manage to kill the project before it starts, no one will ever know how bad a disaster you prevented. If the project succeeds despite your objections, you look like an idiot. And if it fails - as the author notes, that doesn't get remembered either.<p>As a senior IC, the only real ROI I've found in these situations is when you can have a solution handy if things fail. People <i>love</i> a fixer. Even if you only manage to pull this off once or twice, your perception in the org/company gets a massive boost. <i>"Wow, so-and-so is always thinking ahead."</i><p>A basic example I saw at my last company was automated E2E testing in production. My teammate had suggested this to improve our ability to detect regressions, but it was ultimately shot down as not being worth the investment over other features.<p>A few months later, we had seen multiple instances of users hitting significant issues before we could catch them. My teammate was able to whip out the test framework they had been building on the side, and was immediately showered with praise/organizational support (and I'm sure a great review as well).</p>
]]></description><pubDate>Fri, 16 Jan 2026 07:23:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46643936</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=46643936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46643936</guid></item><item><title><![CDATA[New comment by crakhamster01 in "Don't fall into the anti-AI hype"]]></title><description><![CDATA[
<p>I feel like both of these examples are insights that won't be relevant in a year.<p>I agree that CC becoming omniscient is science fiction, but the goal of these interfaces is to make LLM-based coding more accessible. Any strategies we adopt to mitigate bad outcomes are destined to become part of the platform, no?<p>I've been coding with LLMs for maybe 3 years now. Obviously a dev who's experienced with the tools will be more adept than one who's not, but if someone started using CC today, I don't think it would take them anywhere near that time to get to a similar level of competency.</p>
]]></description><pubDate>Sun, 11 Jan 2026 13:43:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46575705</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=46575705</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46575705</guid></item><item><title><![CDATA[New comment by crakhamster01 in "Zohran Mamdani wins the New York mayoral race"]]></title><description><![CDATA[
<p>It's funny that you mention moving outside the city when Zohran's tax plan is centered on bringing the corporate tax rate in line with our neighboring state.<p>I'll also caveat that any parallels you might see in Seattle don't really apply to NYC. Besides the low car ownership rates, wealthy individuals choose to live in NYC for its convenience and culture, which really are unique in the US.</p>
]]></description><pubDate>Wed, 05 Nov 2025 07:30:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=45820347</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=45820347</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45820347</guid></item><item><title><![CDATA[Ask HN: How do you measure "AI slop"?]]></title><description><![CDATA[
<p>Recently, my employer has been pushing hard for LLM adoption across eng, with an expectation of increased productivity. Eng has followed suit, and as a result I've been getting a lot more PRs that are clearly AI generated. 100-line diffs that could have been 10, missed error cases, broken conventions. It's not just from junior engineers, but often from other senior engineers now.<p>With our incentive structures, it doesn't seem like there's a great way to prevent this decline in quality. It's been hard for me to quantify _why_ "slop" is bad, but my gut feelings are that:<p><pre><code>  1. The codebase becomes unreadable to human engineers.

  2. Having more bad examples in the codebase creates a negative feedback loop for future LLM changes. And maybe this is the new norm, but ->

  3. Once enough slop gets in, future incidents/SEVs become increasingly more difficult to resolve.
</code></pre>
(3) feels like the only reason that has tangible business impact. Even if it did occur, I don't know if it would be possible to tie the slow response/loss in revenue to AI slop.<p>I’ve seen other posts lamenting the ills of vibe coding, but is there a concrete way to justify code quality in the era of LLMs? My thoughts are that it might be useful to track some code quality metric like cyclomatic complexity, and see if there’s any correlation with regressions over time, but that feels kind of thin (and retroactive).</p>
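<p>One way the "track a metric, then look for correlation" idea could be sketched, using only Python's stdlib <i>ast</i> module. This is a simplified take on McCabe's cyclomatic complexity (the scoring rule, thresholds, and any downstream correlation step are assumptions, not an established methodology):</p>

```python
# Sketch: per-function cyclomatic complexity via the stdlib `ast` module.
# Idea: snapshot these scores per file over time, then check whether files
# implicated in regressions skew toward higher scores. The scoring rule
# (1 + branch points) is a simplification of McCabe's metric; nested defs
# would be double-counted, which is good enough for a sketch.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.With, ast.IfExp, ast.Assert)

def cyclomatic_complexity(func):
    # 1 for the function entry, +1 per branch point; `and`/`or`
    # chains add one per extra operand.
    score = 1
    for node in ast.walk(func):
        if isinstance(node, BRANCH_NODES):
            score += 1
        elif isinstance(node, ast.BoolOp):
            score += len(node.values) - 1
    return score

def file_scores(source):
    # Map each function name in the source to its complexity score.
    tree = ast.parse(source)
    return {node.name: cyclomatic_complexity(node)
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))}

SRC = '''
def simple(x):
    return x + 1

def branchy(x):
    if x > 0 and x < 100:
        for i in range(x):
            if i % 2:
                x -= 1
    return x
'''

print(file_scores(SRC))  # simple scores 1; branchy scores 5
```

<p>Correlating these scores with regressions would still require joining against incident data (e.g. files touched by fix commits), which is the retroactive part - but at least the metric side is cheap to automate in CI.</p>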
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44743686">https://news.ycombinator.com/item?id=44743686</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 31 Jul 2025 08:48:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44743686</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=44743686</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44743686</guid></item><item><title><![CDATA[New comment by crakhamster01 in "Figma will IPO on July 31"]]></title><description><![CDATA[
<p>> and now, what screen you’re on, what do you see?<p>There's a "follow me" feature to see what other users are doing. It's been around for several years.</p>
]]></description><pubDate>Thu, 31 Jul 2025 08:33:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=44743618</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=44743618</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44743618</guid></item><item><title><![CDATA[New comment by crakhamster01 in "Overtourism in Japan, and how it hurts small businesses"]]></title><description><![CDATA[
<p>There's undoubtedly a cohort of tourists that come to Japan with the "Disneyland" mindset, and I agree that some sort of government-level change is needed to curb abuse. But I would like to believe these folks are in the minority.<p>I think a greater proportion of the tourist population are individuals that visit Japan and maybe haven't done enough research, or are just unaware of norms here. Not understanding where to queue, how to order, navigate public transport, what to do at a temple, onsen, etc. This group isn't the 15% of "Best in Class tourists" Craig writes about, but rather the 75% that want to be respectful and don't know any better.<p>Many locals/expats will see this group and look down in disdain (or lament about them in a blog post...), but why don't more people just ask if they need help? It takes little effort to point someone in the right direction, and if it helps them better understand the country it's a win-win for both tourists and residents alike.<p>I feel like people love to talk about how considerate Japanese culture is, but don't care to practice it themselves when given the chance.</p>
]]></description><pubDate>Sat, 12 Jul 2025 07:50:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=44540159</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=44540159</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44540159</guid></item><item><title><![CDATA[New comment by crakhamster01 in "The Myth of Developer Obsolescence"]]></title><description><![CDATA[
<p>Haha, I tried to couch this by adding "too far", but I agree. Companies should let their teams try out relevant tools in their workflows.<p>My point was more of a response to the inflated expectations that people have about AI. The current generation of AI tech is rife with gotchas and pitfalls. Many companies seem to be making decisions with the hope that they will out-innovate any consequences.</p>
]]></description><pubDate>Tue, 27 May 2025 16:20:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44108396</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=44108396</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44108396</guid></item><item><title><![CDATA[New comment by crakhamster01 in "The Myth of Developer Obsolescence"]]></title><description><![CDATA[
<p>I'm increasingly certain that companies leaning too far into the AI hype are opening themselves up to disruption.<p>The author of this post is right, code is a liability, but AI leaders have somehow convinced the market that code generation on demand is a massive win. They're selling the industry on a future where companies can maintain "productivity" with a fraction of the headcount.<p>Surprisingly, no one seems to ask (or care) about how product quality fares in the vibe code era. Last month Satya Nadella famously claimed that 30% of Microsoft's code was written by AI. Is it a coincidence that GitHub has been averaging 20 incidents a month this year?[1] That's basically once a workday...<p>Nothing comes for free. My prediction is that companies over-prioritizing efficiency through LLMs will pay for it with quality. I'm not going to bet that this will bring down any giants, but not every company buying this snake oil is Microsoft. There are plenty of hungry entrepreneurs out there that will swarm if businesses fumble their core value prop.<p>[1] <a href="https://www.githubstatus.com/history" rel="nofollow">https://www.githubstatus.com/history</a></p>
]]></description><pubDate>Tue, 27 May 2025 14:19:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=44107270</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=44107270</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44107270</guid></item><item><title><![CDATA[New comment by crakhamster01 in "Duolingo CEO tries to walk back AI-first comments, fails"]]></title><description><![CDATA[
<p>Is any of this pushback having a material impact on the company? It seems like their stock is still hovering around all-time highs.</p>
]]></description><pubDate>Tue, 27 May 2025 06:38:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=44104450</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=44104450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44104450</guid></item><item><title><![CDATA[New comment by crakhamster01 in "LLM function calls don't scale; code orchestration is simpler, more effective"]]></title><description><![CDATA[
<p>This looks pretty cool! I'm curious how these sort of workflows are being used internally at Shopify. Any examples you can share?</p>
]]></description><pubDate>Thu, 22 May 2025 04:53:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44058902</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=44058902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44058902</guid></item><item><title><![CDATA[New comment by crakhamster01 in "Show HN: I’m 16 years old and working on my first startup, a study app"]]></title><description><![CDATA[
<p>Haven't had a chance to try the app, just pointing out that navigation links for "features" and "pricing" from the legal/privacy screens don't work.<p>Will try to give the actual app a shot later, but best of luck regardless!</p>
]]></description><pubDate>Sun, 11 May 2025 15:32:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=43954492</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=43954492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43954492</guid></item><item><title><![CDATA[New comment by crakhamster01 in "The Demoralization is just Beginning"]]></title><description><![CDATA[
<p>I thought this was an interesting post with some outlandish statements, but I was willing to grapple with them because I thought the author was cooking up something new...<p>Then I realized this was a post from geohot and felt very foolish about the 15 minutes I spent thinking through his argument. Why is this so upvoted!</p>
]]></description><pubDate>Wed, 05 Mar 2025 08:42:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43264341</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=43264341</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43264341</guid></item><item><title><![CDATA[New comment by crakhamster01 in "Page is under construction: A love letter to the personal website"]]></title><description><![CDATA[
<p>Never mind, sounds like <a href="https://ooh.directory/" rel="nofollow">https://ooh.directory/</a> that listenfaster shared is essentially this!</p>
]]></description><pubDate>Wed, 26 Feb 2025 14:31:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=43183930</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=43183930</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43183930</guid></item><item><title><![CDATA[New comment by crakhamster01 in "Page is under construction: A love letter to the personal website"]]></title><description><![CDATA[
<p>+1 to this! If it doesn't exist already, what would it look like?<p>The simple solution could be another search index that hasn't been commoditized like Google has, but I wonder if a manual curation approach might lead to higher quality? Something along the lines of a weekly digest of personal sites that are interesting/unique/fun. Process could look like:<p><pre><code>  1. Users submit their personal sites for review, accompanied by some blurb/tags. Essentially something to make the cost of submission > 0.

  2. Site admin reviews submissions once a week and either selects their top X favorites, or just removes any low-quality/slop submissions and shares the rest.
</code></pre>
I suppose this approach depends on the judgement of whoever does the curating, but I feel like that's not necessarily a worse alternative to the opaque algorithms we deal with today.</p>
]]></description><pubDate>Wed, 26 Feb 2025 14:30:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=43183907</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=43183907</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43183907</guid></item><item><title><![CDATA[New comment by crakhamster01 in "After layoffs, Meta rewards top executives with a substantial bonus increase"]]></title><description><![CDATA[
<p>While it's bad to get triggered by something like this and then move on, I think your take is incorrect as well. We shouldn't take every corporate decision - whether it's how they choose to compensate, govern, or otherwise - at face value and just call it "doing business".<p>Your view promotes passive participation. Make the optimal decision for yourself in the short-term and ignore any broader implication. No disrespect, but this exact behavior is how individuals cede agency and enables corporations to exert more and more influence on society.<p>If rewarding executives after gross mismanagement of hiring makes someone angry, let them be angry! Then, look into how you can direct that anger somewhere besides the HN comment section. Chances are that others are angry about it as well, and with enough vocal support maybe we can get some semblance of worker protections and corporate oversight in this country [1].<p>[1] <a href="https://inequality.org/article/a-fresh-approach-to-limiting-ceo-pay-and-saving-our-environment/" rel="nofollow">https://inequality.org/article/a-fresh-approach-to-limiting-...</a></p>
]]></description><pubDate>Sun, 23 Feb 2025 03:52:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=43146331</link><dc:creator>crakhamster01</dc:creator><comments>https://news.ycombinator.com/item?id=43146331</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43146331</guid></item></channel></rss>