<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: vannevar</title><link>https://news.ycombinator.com/user?id=vannevar</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 17:40:26 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=vannevar" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by vannevar in "We gave an AI a 3-year Lease. It opened a store"]]></title><description><![CDATA[
<p>Agreed. Color me skeptical. All of the interactions and decisions described are plausible, but in my experience with AI agents, they would require frequent human intervention.</p>
]]></description><pubDate>Sat, 11 Apr 2026 13:35:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47730468</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47730468</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47730468</guid></item><item><title><![CDATA[New comment by vannevar in "Claude mixes up who said what"]]></title><description><![CDATA[
<p>I don't know, "pleading for the computer to work" pretty much sums up my entire 40-year career in software. Only the level of abstraction has changed.</p>
]]></description><pubDate>Thu, 09 Apr 2026 14:06:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47703966</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47703966</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47703966</guid></item><item><title><![CDATA[New comment by vannevar in "Artemis II is not safe to fly"]]></title><description><![CDATA[
<p>> They haven't lost a crewed vehicle.<p>Yes, that's my point. SpaceX understands they need to do <i>many</i> unmanned flights before trusting a launch system with a crew. NASA is trusting Artemis with only a single unmanned flight. That is very high risk tolerance, to the point of recklessness in my opinion, compared to SpaceX.</p>
]]></description><pubDate>Sun, 05 Apr 2026 16:26:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47651009</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47651009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47651009</guid></item><item><title><![CDATA[New comment by vannevar in "Artemis II is not safe to fly"]]></title><description><![CDATA[
<p>They do not expect to lose a given vehicle. They <i>are</i> tolerant of losing some vehicles over time, because they understand that every flight may be affected by unknown unknowns. There is certainly no evidence that they expect to lose crewed vehicles, or that they are tolerant of crew loss.<p>I think the high loss rate for Starship can largely be traced back to the choice of using steel for the vehicle, which drastically reduces margins across the system. You could certainly say that they had a higher expectation of failure because they made that choice. In that sense, I understand your point. But to the best of their ability, they try to fly every vehicle successfully.</p>
]]></description><pubDate>Fri, 03 Apr 2026 13:41:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47626567</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47626567</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47626567</guid></item><item><title><![CDATA[New comment by vannevar in "LinkedIn is searching your browser extensions"]]></title><description><![CDATA[
<p>>"Ads are the only way we've found that actually implements a form of microtransactions... paying a tenth of a penny for a sliver of attention."<p>Ads were the path of least resistance, and once entrenched, they effectively prevented any alternative from emerging. Now that we've seen how advertising scales, and how it's ruined our mediascape, we're finally looking at alternatives. Not dissimilar to how we reacted to pollution, once we saw it at scale.</p>
]]></description><pubDate>Thu, 02 Apr 2026 15:46:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47616042</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47616042</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47616042</guid></item><item><title><![CDATA[New comment by vannevar in "Artemis II is not safe to fly"]]></title><description><![CDATA[
<p>Until Artemis actually flies a comparable number of missions, any advantage in reliability is pure speculation. Which is not a good way to approach crewed spaceflight. I don't think the two programs are as different as you think, prospectively: both take great care to ensure that their vehicles don't fail. Starships may be cheaper than the SLS, but they're still very expensive. SpaceX doesn't go into a flight expecting to lose a vehicle. The difference in culture is more in the <i>reaction</i> to failure. As a private company, SpaceX moves very quickly in the wake of failure, whereas NASA has in recent decades become much more cautious once a failure has occurred. And while you say SpaceX is more tolerant of risk, I would note that they've never flown a crew on a launch vehicle that had only one previous unmanned launch. Falcon 9 had 85 unmanned launches before there was a crew aboard. And they expect to launch 100 unmanned Starships before they fly one with a crew.<p>Now which program seems the more risk tolerant?</p>
]]></description><pubDate>Tue, 31 Mar 2026 22:51:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47594517</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47594517</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47594517</guid></item><item><title><![CDATA[New comment by vannevar in "Artemis II is not safe to fly"]]></title><description><![CDATA[
<p>All of the controversy over the heat shield is obscuring the much bigger safety issue: Artemis has had only a single unmanned test flight. By contrast, the Saturn launch system had seven successful unmanned tests before being trusted with a crew, including two unmanned flights of the complete Saturn V stack. And even then, three astronauts were lost during ground testing of the crew capsule due to a critical design flaw. Artemis's closest modern counterpart, the SpaceX Starship, has had 11 test flights, several of which resulted in loss of the vehicle. There is no reason to believe that Artemis has a significantly higher reliability rate than Starship or Saturn V. Even without the heat shield controversy, this is the most dangerous mission NASA has launched since the first flight of the Space Shuttle.</p>
]]></description><pubDate>Tue, 31 Mar 2026 14:41:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47588055</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47588055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47588055</guid></item><item><title><![CDATA[New comment by vannevar in "NanoClaw Adopts OneCLI Agent Vault"]]></title><description><![CDATA[
<p>Same here. Coupled with configuring the agent's email account at the provider to only be able to send to, and receive from, my email address.</p>
]]></description><pubDate>Tue, 24 Mar 2026 16:43:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47505476</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47505476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47505476</guid></item><item><title><![CDATA[New comment by vannevar in "US SEC preparing to scrap quarterly reporting requirement"]]></title><description><![CDATA[
<p>>So your objection is the way in which they did the accounting?<p>Yes, the accounting is the problem. As I said from the outset, if they actually just traded chips for stock, it would not be an issue.</p>
]]></description><pubDate>Thu, 19 Mar 2026 02:38:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47434180</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47434180</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47434180</guid></item><item><title><![CDATA[New comment by vannevar in "Speed at the cost of quality: Study of use of Cursor AI in open source projects (2025)"]]></title><description><![CDATA[
<p>In a one-shot scenario, I agree. But LLMs make iteration <i>much</i> faster. So the comparison is not really between an AI and an experienced dev coding by hand, it's between the dev iterating with an LLM and the dev iterating by hand. And the former can produce high-quality code much faster than the latter.<p>The question is, what happens when you have a middling dev iterating with an LLM? And in that case, the drop in quality is probably non-linear---it can get pretty bad, pretty fast.</p>
]]></description><pubDate>Wed, 18 Mar 2026 14:23:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47426164</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47426164</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47426164</guid></item><item><title><![CDATA[New comment by vannevar in "US SEC preparing to scrap quarterly reporting requirement"]]></title><description><![CDATA[
<p>>If my company wanted to barter with another company to exchange equity for infrastructure how would you expect that to be reported? Did this situation differ from that expectation?<p>As I mentioned, I would have no problem if that's what happened. But it isn't. Nvidia recorded the cash as ordinary income. They did NOT record the stock as income. Cash has a clear value; stock does not. You keep reducing the transaction to its effective outcome, which is not where the problem lies, as I outlined above.</p>
]]></description><pubDate>Wed, 18 Mar 2026 13:56:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47425869</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47425869</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47425869</guid></item><item><title><![CDATA[New comment by vannevar in "UBI Is the Wrong Answer to the Right Problem"]]></title><description><![CDATA[
<p>>We just don't know and can't really imagine what it's like to live in a world where you are entitled to money for existing, no strings attached, pretty much from cradle to grave.<p>Sure we can. As I noted, wealthy people live in this world already. And we don't see all of them turning into couch potatoes once they have passive income equal to UBI. Sure, there's a human tendency to enjoy leisure. But there's also a human tendency to enjoy work. And a human tendency to project negative attributes onto others we don't know. ;-)</p>
]]></description><pubDate>Wed, 18 Mar 2026 13:45:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47425750</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47425750</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47425750</guid></item><item><title><![CDATA[New comment by vannevar in "UBI Is the Wrong Answer to the Right Problem"]]></title><description><![CDATA[
<p>The article's central premise is based on a false assumption, which is that people taking UBI will be idle. There is no significant evidence to support that claim. The scant evidence we have so far on UBI is largely limited to relatively small numbers of people in poverty given small amounts of money insufficient to provide any opportunity for savings, and even that evidence is at best mixed. On the other hand, there are many people who receive an inheritance large enough that they never need to work again, yet the vast majority of those people are not idle but actively create new businesses and take on other projects or hobbies.<p>And the reason that our infrastructure is crumbling is not some social problem, nor some intrinsic "undervaluing of the future," but something simpler and more pragmatic: our taxation has not kept up with our necessary spending, particularly taxation of the wealthy as wealth has concentrated at the top. Everyone's talking about abundance as if it is something that is yet to come, but we've had rapidly increasing abundance for 50 years, as technology has made the individual worker more productive. And the vast majority of that increase in productivity has been turned into increased wealth for the top 10%. UBI would be the first reversal of that trend, requiring a massive tax on the productivity of AI and robotic infrastructure that in all likelihood will be 90% owned by the wealthiest top 10%. Naturally, they are concerned about that prospect, and so we see articles like this one.</p>
]]></description><pubDate>Wed, 18 Mar 2026 04:57:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47421731</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47421731</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47421731</guid></item><item><title><![CDATA[New comment by vannevar in "US SEC preparing to scrap quarterly reporting requirement"]]></title><description><![CDATA[
<p>Wall Street places a value on sales, on the assumption that the sale means a customer had the money and the desire to buy the company's goods. In this case, OpenAI had the desire but not the money---Nvidia basically gave them the money to buy the product. So that "sale" should be devalued in the market. What if Nvidia paid more for the stock than the chips were worth? Now they're essentially paying people to buy their product and hiding the bribe in an equity deal by overvaluing the customer. The market sees the big growing sales number and buys Nvidia stock on the assumption that the growth is organic. It also sees Nvidia putting a big valuation on OpenAI, driving up that company's value as well. At some point, OpenAI ends up with more chips than it needs and Nvidia ends up holding a bunch of overvalued OpenAI stock instead of cash. And both stocks eventually crash as a result.<p>Does that clarify the situation?</p>
]]></description><pubDate>Wed, 18 Mar 2026 01:40:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47420689</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47420689</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47420689</guid></item><item><title><![CDATA[New comment by vannevar in "US SEC preparing to scrap quarterly reporting requirement"]]></title><description><![CDATA[
<p>And if they actually constructed the deal that way, it would be fine. But by essentially creating a sham sale where they return the cash back to the customer in return for equity, Nvidia can book revenue and claim non-existent cash flow. The key is that the sale <i>would not have happened</i> without the corresponding equity deal. Nvidia had no discretion to use that cash any other way, so the "cash flow" in that case is illusory.</p>
]]></description><pubDate>Tue, 17 Mar 2026 15:02:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47413672</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47413672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47413672</guid></item><item><title><![CDATA[New comment by vannevar in "US Job Market Visualizer"]]></title><description><![CDATA[
<p>In the US, since the 1970s virtually all technologically-driven productivity gains have been captured by the top 10% (who own 90% of all public equity). (See, e.g., <a href="https://www.epi.org/productivity-pay-gap/" rel="nofollow">https://www.epi.org/productivity-pay-gap/</a> .)<p>So no, little or none of the AI productivity gains will go to workers, barring significant changes in public policy like universal basic income and the massive tax increases necessary to implement it.</p>
]]></description><pubDate>Mon, 16 Mar 2026 20:47:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47404653</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47404653</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47404653</guid></item><item><title><![CDATA[New comment by vannevar in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>I'd highly recommend working top down, getting it to outline a sane architecture before it starts coding. Then if one of the modules starts getting fouled up, start with a clean sheet context (for that module) incorporating any cautions or lessons learned from the bad experience. LLMs are not yet good at working and reworking the same code, for the reasons you outline. But they are pretty good at a "Groundhog Day" approach of going through the implementation process over and over until they get it right.</p>
]]></description><pubDate>Sat, 07 Mar 2026 02:49:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47283958</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47283958</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47283958</guid></item><item><title><![CDATA[New comment by vannevar in "US tech firms pledge at White House to bear costs of energy for datacenters"]]></title><description><![CDATA[
<p>>The companies are giving average lay people access to a personal PhD to help with whatever they are working on, for $20/mo, and those companies are committing an evil cardinal sin?<p>The social media companies gave their services for <i>free</i>, and now it turns out they've committed quite a few sins. None of the AI companies are doing this out of the goodness of their hearts, nor will they be satisfied with subscription revenue. If they see opportunities to make more money by manipulating the population, rest assured they will take those opportunities.</p>
]]></description><pubDate>Thu, 05 Mar 2026 17:00:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47264122</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47264122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47264122</guid></item><item><title><![CDATA[New comment by vannevar in "Will vibe coding end like the maker movement?"]]></title><description><![CDATA[
<p>The enormous difference between vibe-coding and 3D printing is that vibe-coding is improving exponentially at a rapid rate, while 3D printing is improving linearly at a slow rate. Very little that we say about vibe-coding today is likely to be valid even six months from now, whereas a 3D printer sold 5 years from now will probably be very similar to one sold today.</p>
]]></description><pubDate>Fri, 27 Feb 2026 17:04:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47182864</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47182864</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47182864</guid></item><item><title><![CDATA[New comment by vannevar in "The DOJ's Top Antitrust Officer Has Left as Lobbying Surges"]]></title><description><![CDATA[
<p>"Lobbying" is a very polite term for it.</p>
]]></description><pubDate>Fri, 20 Feb 2026 03:09:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47083182</link><dc:creator>vannevar</dc:creator><comments>https://news.ycombinator.com/item?id=47083182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47083182</guid></item></channel></rss>