<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: recitedropper</title><link>https://news.ycombinator.com/user?id=recitedropper</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 24 Apr 2026 08:24:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=recitedropper" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by recitedropper in "ChatGPT Images 2.0"]]></title><description><![CDATA[
<p>This is hilarious. Seems like kind of a random image for a model to memorize, but it could be.<p>There is definitely enough empirical evidence showing that image models retain lots of original copies in their weights, despite how much AI boosters think otherwise. That said, it is usually images that end up in the training set many times, and I would find it strange for this image to be one of them.<p>Regardless, great find.</p>
]]></description><pubDate>Tue, 21 Apr 2026 20:19:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47854026</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=47854026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47854026</guid></item><item><title><![CDATA[New comment by recitedropper in "Claude Design"]]></title><description><![CDATA[
<p>I think it's probably both in the end. Anthropic has a lot of fans, and combined with excited employees and investors, they probably don't need to do much explicit astroturfing to reach the top of HN.<p>But they also desperately need users (and the data those users bring) to build their products, and the people who do have the power to manipulate this site are on their team. And it does get tiring to see a new Claude feature with like 1 comment and 25 points right at the top, multiple times in the last two weeks. Keeping their needs in mind, it has begun to look like manipulation, even if the above effect could explain it.<p>I'm glad the technology foments excitement for you. The idea that we can share intellectual processes broadly and implement them without the previously requisite skills will obviously change the world. That it could change the world for the better excites me too.<p>But many of us have our excitement tempered by the messaging, the questionable ethics behind how it has been done, and the fact that a real percentage of the space is basically driven by eschatological thinking. And it especially annoys me that Anthropic is the company whose messaging simultaneously encourages that eschatological thinking and preys upon the emotional reactions it creates.<p>I think it is increasingly clear--if you look at recent public sentiment and feel what is in the air--that they are a villain in this respect. I don't think we want the people who believe they are building the future to be doing so both out of fear--of China--and by gaining power through others' fear of what they are doing.<p>But villains can ultimately do good in the world, despite their villainy. Let's hope that is how it plays out.</p>
]]></description><pubDate>Fri, 17 Apr 2026 21:09:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47810596</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=47810596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47810596</guid></item><item><title><![CDATA[New comment by recitedropper in "Claude Design"]]></title><description><![CDATA[
<p>Feeling some heat != in trouble. Just that the pressure cooker is turning to a higher temp.<p>But I'll gladly admit that I am biased: I'm tired of seeing blatant astroturfing by a company whose main marketing tactic is to play on societal fear, while simultaneously employing safety theatre to look like the "good guys".<p>So take my opinion with a grain of salt :)</p>
]]></description><pubDate>Fri, 17 Apr 2026 15:59:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47807360</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=47807360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47807360</guid></item><item><title><![CDATA[New comment by recitedropper in "Claude Design"]]></title><description><![CDATA[
<p>Plus: so much of excellent user interface design is done by iterating on feedback from live humans testing it with their human sensory systems.<p>Until we have embodied AIs with eyes and hands that provide good enough approximations, the aspects of design bottlenecked on human experience will stay bottlenecked.</p>
]]></description><pubDate>Fri, 17 Apr 2026 15:52:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47807275</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=47807275</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47807275</guid></item><item><title><![CDATA[New comment by recitedropper in "Claude Design"]]></title><description><![CDATA[
<p>Seems to me like Anthropic is desperately trying to find as many product-market fits as possible before they IPO. They're reaching a chaotic weekly release cadence--each new product chock-full of unclear capabilities that overlap with their previous ones.<p>Combine that with the obvious Hacker News manipulation that somehow gets each and every haphazard release instantly to the top, and you can see they're starting to feel some real heat.</p>
]]></description><pubDate>Fri, 17 Apr 2026 15:29:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47807005</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=47807005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47807005</guid></item><item><title><![CDATA[New comment by recitedropper in "Nvidia and OpenAI abandon unfinished $100B deal in favour of $30B investment"]]></title><description><![CDATA[
<p>Starts with "astro" and ends with "turfing".<p>Think about how valuable HN is for a company whose primary market is professional devs.</p>
]]></description><pubDate>Fri, 20 Feb 2026 17:43:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47091193</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=47091193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47091193</guid></item><item><title><![CDATA[New comment by recitedropper in "Google Antigravity"]]></title><description><![CDATA[
<p>I think we agree on the limitations of the study--I literally began my comment with "for seasoned maintainers of open source repos". I'm not sure if in your first statement ("there are no studies to back up this claim.. I welcome good analysis") you are referring to claims that support an AI speedup. If so, we agree that good analysis is needed. But if you think there already is good data:<p>Can you link any? All I've seen is stuff like Anthropic claiming 90% of internal code is written by Claude--I think we'd agree that we need an unbiased source and better metrics than "code written". My concern is that whenever AI usage by professional developers is studied empirically, as far as I have seen, the results never corroborate your claim: "Any developer (who was a developer before March 2023) that is actively using these tools and understands the nuances of how to search the vector space (prompt) is being sped up substantially."<p>I'm open to it being possible, but as someone who was a developer before March 2023 and is surrounded by many professionals who were as well, our results are more lukewarm than what I see boosters claim. It speeds up certain types of work, but not everything, and not in a manner that adds up to all work being "sped up substantially".<p>I need to see data, and all the data I've seen goes the other way. Did you see the recent Substack post looking at public GitHub data showing no increase in the trend of PRs all the way up to August 2025? All the hard data I've seen is much, much more middling than what people who have something to sell AI-wise are claiming.<p><a href="https://mikelovesrobots.substack.com/p/wheres-the-shovelware-why-ai-coding" rel="nofollow">https://mikelovesrobots.substack.com/p/wheres-the-shovelware...</a></p>
]]></description><pubDate>Sat, 22 Nov 2025 22:42:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46018979</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=46018979</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46018979</guid></item><item><title><![CDATA[New comment by recitedropper in "Gemini 3"]]></title><description><![CDATA[
<p>Sure, but the extent to which you bend the truth to get those impressive numbers is absolutely gotcha-able.<p>Showing a new screen by default to everyone who is using your main product flow and then claiming that everyone who is seeing it is a priori a "user" is absurd. And that is the only way they can get to 2 billion a month, by my estimation.<p>They could put a new yellow rectangle at the top of all google search results and claim that the product launch has reached 2 billion monthly users and is one of the fastest-growing products of all time. Clearly absurd, and the same math as what they are saying here. I'm claiming my hottake gotcha :)</p>
]]></description><pubDate>Thu, 20 Nov 2025 15:39:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45993758</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45993758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45993758</guid></item><item><title><![CDATA[New comment by recitedropper in "Gemini 3"]]></title><description><![CDATA[
<p>There could definitely be a chance. I was just responding to what in your comment sounded like a question.<p>That said, I think there is good reason to be skeptical that it is a good chance. The consistent trend of finding higher complexity than expected in biological intelligences (like in C. elegans), combined with the fact that the physical natures of digital and biological architectures are very different, is a good reason to bet on it being really complex to emulate with our current computing systems.<p>Obviously there is a way to do it physically--biological systems are physical after all--but we just don't understand enough to have grounds to say it is "likely" doable digitally. Stuff like the Universal Approximation Theorem implies that in theory it may be possible, but that doesn't say anything about whether it is feasible. Same with Turing completeness. All these theorems say is that our digital hardware can emulate anything that is a step-by-step process (computation), not how challenging it is to emulate or even that it is realistic to do so. It could turn out that something like human mind emulation is possible but would take longer than the age of the universe. Far simpler problems turn out to have similar issues (like calculating the optimal Go move without heuristics).<p>This is all to say that there could be plenty of smart ideas out there that break our current understanding in all sorts of ways. Which way the cards will land isn't really predictable, so all we can do is point to things that suggest skepticism, in one direction or another.</p>
]]></description><pubDate>Thu, 20 Nov 2025 15:34:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45993695</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45993695</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45993695</guid></item><item><title><![CDATA[New comment by recitedropper in "Google Antigravity"]]></title><description><![CDATA[
<p>Yes, I recognize that, for various reasons, people will fail to document even when it is a professional expectation.<p>I guess in this case we are comparing an idealized human to an idealized AI, given that AI has its own failings in non-idealized scenarios (like hallucination).</p>
]]></description><pubDate>Wed, 19 Nov 2025 16:15:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45981346</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45981346</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45981346</guid></item><item><title><![CDATA[New comment by recitedropper in "Google Antigravity"]]></title><description><![CDATA[
<p>OK yes, you are right that we might be talking about employing AI tooling in different modes, and the paper I am referring to is absolutely about agentic tooling executing code changes on your behalf.<p>That said, the first comment of the person I replied to contained: "You can ask agents to identify and remove cruft", which is pretty explicitly speaking to agent mode. He was also responding to a comment about how humans spend "hours talking about architectural decisions", which as an action mapped to AI would be more plan mode than ask mode.<p>Overall I definitely agree that using LLM tools to just tell you things about the structure of a codebase is a great way to use them, and that they are generally better at those one-off tasks than at things that involve substantial multi-step communication in the ways humans often do.<p>I appreciate being in the weeds here haha--hopefully we all got a little better at talking about the nuances of these things :)</p>
]]></description><pubDate>Wed, 19 Nov 2025 16:13:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45981331</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45981331</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45981331</guid></item><item><title><![CDATA[New comment by recitedropper in "Google Antigravity"]]></title><description><![CDATA[
<p>For seasoned maintainers of open source repos, there is explicit evidence it does slow them down, <i>even</i> when they think it sped them up: <a href="https://arxiv.org/abs/2507.09089" rel="nofollow">https://arxiv.org/abs/2507.09089</a><p>Cue: "the tools are so much better now", "the people in the study didn't know how to use Cursor", etc. Regardless of whether one takes issue with this study, there are enough others of its kind to suggest skepticism about how much these tools really create speed benefits when employed at scale. The maintenance cliff is always nigh...<p>There are definitely ways in which LLMs, and the agentic coding tools scaffolded on top, help with aspects of development. But to say anyone who claims otherwise is either being disingenuous or doesn't know what they are doing is not an informed take.</p>
]]></description><pubDate>Wed, 19 Nov 2025 02:19:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45975122</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45975122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45975122</guid></item><item><title><![CDATA[New comment by recitedropper in "Google Antigravity"]]></title><description><![CDATA[
<p>Alright, I'm glad to hear you've had a successful and rich professional career. We definitely agree that engineers generally fail to document when they have competing priorities, and that LLMs can help offload some of that work successfully.<p>Your initial comment made it sound like you were commenting on a genuine apples-to-apples comparison between humans and LLMs, in a controlled setting. That's the place for empiricism, and I think dismissing studies examining such situations is a mistake.<p>A good warning flag for why that is a mistake is the recent article showing that engineers estimated LLMs sped them up by like 24%, but when measured they were actually slower by 17%. One should always examine whether the specifics of a study really apply to them--there is no "end-all be-all" in empiricism--but when in doubt, the scientific method is our primary tool for determining what is actually going on.<p>But we can just vibe it lol. Fwiw, the parent comment's claims line up more with my experience than yours. Leave an agent running for "hours" (as specified in the comment) coming up with architectural choices, ask it to document all of it, and then come back to a massive mess. I have yet to have a colleague do that without reaching out and saying "help, I'm out of my depth".</p>
]]></description><pubDate>Tue, 18 Nov 2025 22:04:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=45972841</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45972841</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45972841</guid></item><item><title><![CDATA[New comment by recitedropper in "Google Antigravity"]]></title><description><![CDATA[
<p>"major architectural decisions don't get documented anywhere"
"commit messages give no "why""<p>This is so far outside of common industry practice that I don't think your sentiment generalizes. Or perhaps your expectation of what should go in a single commit message is different from the rest of ours...<p>LLMs, especially those with reasoning chains, are notoriously bad at explaining their thought process. This isn't vibes, it is empiricism: <a href="https://arxiv.org/abs/2305.04388" rel="nofollow">https://arxiv.org/abs/2305.04388</a><p>If you are genuinely working somewhere where the people around you are worse than LLMs at explaining and documenting their thought process, I would look elsewhere. Can't imagine that is good for one's own development (or sanity).</p>
]]></description><pubDate>Tue, 18 Nov 2025 21:24:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45972334</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45972334</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45972334</guid></item><item><title><![CDATA[New comment by recitedropper in "Gemini 3"]]></title><description><![CDATA[
<p>I'm sure each of the frontier labs has some secret methods, especially in training the models and in the engineering of optimizing inference. That said, I don't think their saying they'd keep a big breakthrough secret would be evidence in this case of a "secret sauce" on ARC-AGI-2.<p>If they had found something fundamentally new, I doubt they would've snuck it into Gemini 3. They'd probably cook on it longer and release something truly mindblowing. Or, you know, just take over the world with their new omniscient ASI :)</p>
]]></description><pubDate>Tue, 18 Nov 2025 21:06:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45972084</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45972084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45972084</guid></item><item><title><![CDATA[New comment by recitedropper in "Gemini 3"]]></title><description><![CDATA[
<p>Astroturfing used as evidence of domination. Public forums truly have come full circle.</p>
]]></description><pubDate>Tue, 18 Nov 2025 20:59:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=45971968</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45971968</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45971968</guid></item><item><title><![CDATA[New comment by recitedropper in "Gemini 3"]]></title><description><![CDATA[
<p>I think in this case, tokenization and perception are somewhat analogous. It is probably the case that our current tokenization schemes are really simplistic compared to what nature is working with. If you allow the analogy.</p>
]]></description><pubDate>Tue, 18 Nov 2025 20:55:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45971916</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45971916</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45971916</guid></item><item><title><![CDATA[New comment by recitedropper in "Gemini 3"]]></title><description><![CDATA[
<p>Who wants to bet they benchmaxxed ARC-AGI-2? Nothing in their release implies they found some sort of "secret sauce" that justifies the jump.<p>Maybe they are keeping that itself secret, but more likely they have just had humans generate an enormous number of examples, and then built on that synthetically.<p>No benchmark is safe when this much money is on the line.</p>
]]></description><pubDate>Tue, 18 Nov 2025 19:08:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45970560</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45970560</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45970560</guid></item><item><title><![CDATA[New comment by recitedropper in "Google Brings Gemini 3 AI Model to Search and AI Mode"]]></title><description><![CDATA[
<p>That's a good point, although given I'd never seen this rule, I question whether it is commonly known enough to actually be the reason I'm being downvoted.<p>Do you not think what has happened today is suspicious? The Gemini 3 posts are, to my eye, out of hand.</p>
]]></description><pubDate>Tue, 18 Nov 2025 19:02:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45970480</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45970480</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45970480</guid></item><item><title><![CDATA[New comment by recitedropper in "Gemini 3"]]></title><description><![CDATA[
<p>Not sure if this is agreeing or disagreeing with there being astroturfing.<p>But I'd reckon that the negative sentiment at the top, combined with the fact that there are over eight Gemini 3 posts on the front page recently, is good evidence of manipulation. This might actually be the most posted-about model release this year, and if people were that excited we wouldn't see so much negative sentiment abound.</p>
]]></description><pubDate>Tue, 18 Nov 2025 19:00:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45970453</link><dc:creator>recitedropper</dc:creator><comments>https://news.ycombinator.com/item?id=45970453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45970453</guid></item></channel></rss>