<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: viscanti</title><link>https://news.ycombinator.com/user?id=viscanti</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 19:40:31 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=viscanti" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by viscanti in "Wall Street ruined the Roomba and then blamed Lina Khan"]]></title><description><![CDATA[
<p>I believe the author's thesis is that if they had invested in innovation over the past couple of decades, the product probably would have sucked less.</p>
]]></description><pubDate>Fri, 19 Dec 2025 20:27:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46330492</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=46330492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46330492</guid></item><item><title><![CDATA[New comment by viscanti in "Anthropic acquires Bun"]]></title><description><![CDATA[
<p>They made pretty drastic price cuts on Opus 4.5. It's possible they're now selling inference at a loss to gain market share, or at least that their margins are much lower. Dario claims that all their previous models were profitable (even after accounting for research costs), but it's unclear there's a path to keeping those margins while expanding revenue as fast as, or faster than, their costs (each model has been substantially more expensive than the previous one).</p>
]]></description><pubDate>Tue, 02 Dec 2025 20:00:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46126025</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=46126025</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46126025</guid></item><item><title><![CDATA[New comment by viscanti in "Grammarly rebrands to 'Superhuman,' launches a new AI assistant"]]></title><description><![CDATA[
<p>Brand recognition that they're throwing away with a rebrand.</p>
]]></description><pubDate>Wed, 29 Oct 2025 20:32:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45752654</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=45752654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45752654</guid></item><item><title><![CDATA[New comment by viscanti in "iPhone Air"]]></title><description><![CDATA[
<p>You're watching video podcasts while hiking? Otherwise, what's the weekend-hike use case for more than 27 hours of video playback on a single charge?</p>
]]></description><pubDate>Tue, 09 Sep 2025 20:13:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45188162</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=45188162</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45188162</guid></item><item><title><![CDATA[New comment by viscanti in "Anthropic raises $13B Series F"]]></title><description><![CDATA[
<p>Well, how much of it is correlation vs. causation? Does the next generation of model unlock another 10x in usage? Or was Claude 3 "good enough" that it got traction from early adopters, and Claude 4 "good enough" that it's picking up a lot of mid/late adopters this generation? Presumably competitors get better, and at cheaper prices (Anthropic currently charges a premium per token), as well.</p>
]]></description><pubDate>Tue, 02 Sep 2025 17:53:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45106645</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=45106645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45106645</guid></item><item><title><![CDATA[New comment by viscanti in "They're Killing the Humanities on Purpose"]]></title><description><![CDATA[
<p>It's more like the patient needs some fixed amount of food each day, and it doesn't make much sense to produce far more food than they need in the hope that someday they'll want to eat more than they can.<p>If the argument is that everyone should focus on the arts at the expense of everything else, it's hard to imagine that's an ideal outcome relative to the alternatives. If we're not arguing that everyone should focus 100% on the arts (with no other degrees available), then it's a matter of degree, and certainly some outcomes might end up with more people pursuing the arts than society needs.</p>
]]></description><pubDate>Fri, 15 Aug 2025 17:17:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44914988</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=44914988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44914988</guid></item><item><title><![CDATA[New comment by viscanti in "OpenAI Codex hands-on review"]]></title><description><![CDATA[
<p>It's much more conservative in the scope of tasks it will attempt, and it's much slower. You need to fire and forget several parallel tasks, because you'll be waiting 10+ minutes before you get anything you can review and give feedback on.</p>
]]></description><pubDate>Tue, 20 May 2025 16:15:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44043260</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=44043260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44043260</guid></item><item><title><![CDATA[New comment by viscanti in "Gemini 2.5 Pro vs. Claude 3.7 Sonnet: Coding Comparison"]]></title><description><![CDATA[
<p>If it's not astroturfing, the people who are so vocal about it behave in a way that's nearly indistinguishable from it. I keep looking for concrete examples of use cases showing it's better, and everything seems to point back to "everyone is talking about it" or anecdotes that don't provide any details about the problem Gemini did well on that every other model failed at.</p>
]]></description><pubDate>Mon, 31 Mar 2025 13:30:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=43534848</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=43534848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43534848</guid></item><item><title><![CDATA[New comment by viscanti in "Silicon Valley got what it wanted"]]></title><description><![CDATA[
<p>A16Z</p>
]]></description><pubDate>Thu, 14 Nov 2024 21:48:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=42141591</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=42141591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42141591</guid></item><item><title><![CDATA[New comment by viscanti in "iPhone 16 Pro and iPhone 16 Pro Max"]]></title><description><![CDATA[
<p>This kind of proves the point? Presumably your mother didn't buy the latest phone for "continuity" or camera improvements. The features and additional hardware improvements might be noticeable after being used, but are they driving sales to people who aren't tech enthusiasts?</p>
]]></description><pubDate>Tue, 10 Sep 2024 14:22:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=41501091</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=41501091</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41501091</guid></item><item><title><![CDATA[New comment by viscanti in "OpenAI is good at unminifying code"]]></title><description><![CDATA[
<p>If you minify all the code you can find on the internet, you end up with a very large training set. Of course, in some scenarios you won't know what the original variable names were, but you'd expect to get something very usable out of it. Techniques like this, where you can deterministically generate new and useful training data, are exactly what you'd expect to be used.</p>
]]></description><pubDate>Thu, 29 Aug 2024 17:18:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=41393097</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=41393097</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41393097</guid></item><item><title><![CDATA[New comment by viscanti in "OpenAI is good at unminifying code"]]></title><description><![CDATA[
<p>Because that step is so trivial, it's likely pretty easy to just take lots of code and minify it. Then you have the training data you need to learn to generate full code from minified code. If your goal is to generate additional useful training data for your LLM, it would make sense to actually do that.</p>
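<p>A minimal sketch of that pipeline (illustrative only; the toy minifier and all names here are made up, not how any lab actually builds such data):</p>

```python
import re

# Keywords the toy minifier should never rename.
KEYWORDS = {"def", "return", "for", "in"}

def toy_minify(source: str) -> str:
    """Deterministically rename identifiers and strip comments/blank lines."""
    # Collect lowercase identifiers in order of first appearance.
    names = []
    for ident in re.findall(r"\b[a-z_][a-z0-9_]*\b", source):
        if ident not in names and ident not in KEYWORDS:
            names.append(ident)
    renames = {name: f"v{i}" for i, name in enumerate(names)}
    out = []
    for line in source.splitlines():
        line = line.split("#")[0].rstrip()  # drop comments
        if not line:
            continue  # drop blank lines
        out.append(re.sub(r"\b[a-z_][a-z0-9_]*\b",
                          lambda m: renames.get(m.group(0), m.group(0)),
                          line))
    return "\n".join(out)

def make_training_pairs(snippets):
    """Each pair is (input=minified code, target=original readable code)."""
    return [(toy_minify(s), s) for s in snippets]
```

<p>The point is only that the mapping runs in one direction cheaply: readable code in, (minified, original) supervision pairs out, with no human labeling needed.</p>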
]]></description><pubDate>Thu, 29 Aug 2024 16:43:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=41392726</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=41392726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41392726</guid></item><item><title><![CDATA[New comment by viscanti in "Rabbit failed to properly reset keys: emails can be sent from rabbit.tech domain"]]></title><description><![CDATA[
<p>> I can't understand the hate<p>I think it's because of what the team promised (a new "Large Action Model") vs. what's actually being delivered (the "model" is some scripts). The team has a history of over-promising and under-delivering (or scamming, depending on your perspective). It's also economically unviable: somehow you're meant to get free LLM calls for life, but there's no way for them to actually cover those. There's no real communication about how it might be a limited-time thing for early adopters, or how it could ever become sustainable.<p>If they had focused on what they have, they probably could have charged the same amount and people would generally be OK with it. But they've over-promised and under-delivered again. I think the reaction is pretty understandable.</p>
]]></description><pubDate>Wed, 26 Jun 2024 17:21:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=40802244</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=40802244</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40802244</guid></item><item><title><![CDATA[New comment by viscanti in "Google DeepMind shifts from research lab to AI product factory"]]></title><description><![CDATA[
<p>It seems to be difficult to turn pure research back into new products. Apple famously got lots of ideas for free from Xerox PARC. Google researchers wrote the "Attention Is All You Need" paper, and Google is now desperately playing catch-up because it couldn't convert that work into any kind of product. There's nothing wrong with companies investing in pure research, but these large companies are sometimes unable to take advantage of it. The people running the business want to keep doing what made them successful, not some new experimental thing that might not work.</p>
]]></description><pubDate>Tue, 18 Jun 2024 00:06:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=40712678</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=40712678</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40712678</guid></item><item><title><![CDATA[New comment by viscanti in "Apple's On-Device and Server Foundation Models"]]></title><description><![CDATA[
<p>No one has ever brought a native (not 3rd party) calculator to the iPad before. Apple is the first.</p>
]]></description><pubDate>Tue, 11 Jun 2024 19:44:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=40650634</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=40650634</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40650634</guid></item><item><title><![CDATA[New comment by viscanti in "Apple Intelligence for iPhone, iPad, and Mac"]]></title><description><![CDATA[
<p>On device or in an Apple-owned data center. It sounds like they have aspirations for their own in-house LLM. ChatGPT seems to be there until they can build something good enough to replace it in the cases where their in-house solution isn't yet capable enough. They'll likely continue to invest heavily in big, capable LLMs as well as ones small enough to run on device (while working on the hardware side to ensure their devices can run more powerful models locally).</p>
]]></description><pubDate>Mon, 10 Jun 2024 19:40:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=40637809</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=40637809</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40637809</guid></item><item><title><![CDATA[New comment by viscanti in "Rabbit R1 It's a Scam"]]></title><description><![CDATA[
<p>They say it's going to be free forever with no subscription, but they have to pay for the ChatGPT API calls. Even if you forgive them for overhyping their ChatGPT wrapper, they're still running a Ponzi scheme.</p>
]]></description><pubDate>Wed, 05 Jun 2024 19:36:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=40589495</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=40589495</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40589495</guid></item><item><title><![CDATA[New comment by viscanti in "FTX bankruptcy examiner's report [pdf]"]]></title><description><![CDATA[
<p>If you know a great deal about what is right and wrong, and you choose to do something bad, that feels worse than being bad and not knowing any better.</p>
]]></description><pubDate>Thu, 23 May 2024 23:17:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=40461049</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=40461049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40461049</guid></item><item><title><![CDATA[New comment by viscanti in "YouTube seems to once again be rolling out its widely hated new web redesign"]]></title><description><![CDATA[
<p>For some people, the comments are the worst part of YouTube. I can see them being pretty vocal about disliking a design that makes comments more visible.</p>
]]></description><pubDate>Wed, 22 May 2024 23:23:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=40448125</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=40448125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40448125</guid></item><item><title><![CDATA[The mobile S-curve ends, and the AI S-curve begins]]></title><description><![CDATA[
<p>Article URL: <a href="https://andrewchen.substack.com/p/the-mobile-s-curve-ends-and-the-ai">https://andrewchen.substack.com/p/the-mobile-s-curve-ends-and-the-ai</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=40280454">https://news.ycombinator.com/item?id=40280454</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 06 May 2024 22:57:41 +0000</pubDate><link>https://andrewchen.substack.com/p/the-mobile-s-curve-ends-and-the-ai</link><dc:creator>viscanti</dc:creator><comments>https://news.ycombinator.com/item?id=40280454</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40280454</guid></item></channel></rss>