<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tmvphil</title><link>https://news.ycombinator.com/user?id=tmvphil</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 19:40:03 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tmvphil" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tmvphil in "Artemis II is not safe to fly"]]></title><description><![CDATA[
<p>Have you considered that this would do nothing to solve Donald Trump's political problem, namely that he promised to make boomers feel like they were reliving their halcyon days one last time?</p>
]]></description><pubDate>Tue, 31 Mar 2026 13:14:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47586899</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=47586899</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47586899</guid></item><item><title><![CDATA[New comment by tmvphil in "Zohran Mamdani wins the New York mayoral race"]]></title><description><![CDATA[
<p>Zohran isn't proposing putting any new units under rent control (really rent stabilization), only temporarily halting rent increases for existing stabilized units. This will make it harder for the city to attract new buildings into rent stabilization in the future, but will benefit existing tenants. It won't have any effect on the ability to profitably develop market-rate units at all.</p>
]]></description><pubDate>Wed, 05 Nov 2025 04:08:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45818940</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=45818940</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45818940</guid></item><item><title><![CDATA[New comment by tmvphil in "Zohran Mamdani wins the New York mayoral race"]]></title><description><![CDATA[
<p>I'm optimistic that he will actually be a positive force in reforming how the city operates. I think he is pragmatic in that he understands that efficiency in government administration is something progressives have insufficiently prioritized. His policies are more populist than I'd prefer, but not the crazy socialist fever dream that Republicans portray them as. The scariest thing for me is the prospect of active sabotage from the federal level, although I don't know how much they have held back.</p>
]]></description><pubDate>Wed, 05 Nov 2025 03:08:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45818526</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=45818526</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45818526</guid></item><item><title><![CDATA[New comment by tmvphil in "Starcloud"]]></title><description><![CDATA[
<p>I know this in the same way that, even though I don't know the exact credence to assign to the probability of particular bad effects from global warming, I can confidently say that a 1000-fold increase in CO2 emissions would be a bad thing. This is not because I have run a simulation, but because my belief rests on the assumption that while concerned experts might be wrong in the details, they are probably not wrong by three orders of magnitude.</p>
]]></description><pubDate>Wed, 22 Oct 2025 16:24:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45671525</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=45671525</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45671525</guid></item><item><title><![CDATA[New comment by tmvphil in "Starcloud"]]></title><description><![CDATA[
<p>Tell me the nuance then. If people have concerns about Kessler syndrome at the Starlink scale, then why wouldn't something literally 1000x bigger be even more concerning?</p>
]]></description><pubDate>Wed, 22 Oct 2025 14:57:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45670122</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=45670122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45670122</guid></item><item><title><![CDATA[New comment by tmvphil in "Starcloud"]]></title><description><![CDATA[
<p>See my edit. Just one starcloud would represent an increase in that risk factor of over 300x compared to the status quo. Then multiply that by the number of starclouds you think would be deployed.</p>
]]></description><pubDate>Wed, 22 Oct 2025 14:30:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45669700</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=45669700</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45669700</guid></item><item><title><![CDATA[New comment by tmvphil in "Starcloud"]]></title><description><![CDATA[
<p>Not sure what the slippery slope is here. The linked page imagines a 4km x 4km radiator/solar array. The cross-sectional area of the array is directly proportional to the probability of impacting high-velocity space debris, and in such an event the amount of debris generated could also scale with the area of the array. This seems bad.</p>
]]></description><pubDate>Wed, 22 Oct 2025 14:07:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=45669395</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=45669395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45669395</guid></item><item><title><![CDATA[New comment by tmvphil in "Starcloud"]]></title><description><![CDATA[
<p>How is a multiple-square-kilometer radiator not just an inevitable Kessler syndrome disaster?<p>Edit: Some back-of-the-envelope calculation suggests that the total cross-sectional area of all man-made orbiting satellites is around 55,000 m^2. Just one 4km x 4km = 16,000,000 m^2 starcloud would represent an increase by a factor of about 300. That's insane.</p>
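<p>The ratio above can be checked with a quick script. Note that the 55,000 m^2 figure for all existing satellites is my own rough estimate, not a measured value:</p>

```python
# Back-of-envelope check of the cross-sectional-area ratio.
# 55,000 m^2 for all existing man-made satellites is the rough
# estimate from the comment above, not an authoritative figure.
existing_satellite_area_m2 = 55_000

# One proposed 4 km x 4 km starcloud radiator/solar array.
starcloud_side_m = 4_000
starcloud_area_m2 = starcloud_side_m ** 2  # 16,000,000 m^2

# Ratio of one starcloud's area to everything already in orbit.
ratio = starcloud_area_m2 / existing_satellite_area_m2
print(round(ratio))  # 291, i.e. roughly a 300x increase
```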
]]></description><pubDate>Wed, 22 Oct 2025 13:53:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45669194</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=45669194</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45669194</guid></item><item><title><![CDATA[New comment by tmvphil in "Internet's biggest annoyance: Cookie laws should target browsers, not websites"]]></title><description><![CDATA[
<p>I simply do not care if advertisers form an accurate view of my desires and beliefs.</p>
]]></description><pubDate>Wed, 22 Oct 2025 13:31:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45668884</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=45668884</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45668884</guid></item><item><title><![CDATA[New comment by tmvphil in "Show HN: Project management system for Claude Code"]]></title><description><![CDATA[
<p>I think for me personally, such a linear breakdown of the design process doesn't work. I might write down "I want to do X, which I think can be accomplished with design Y, which can be broken down into tasks A, B, and C" but after implementing A I realize I actually want X' or need to evolve the design to Y' or that a better next task is actually D which I didn't think of before.</p>
]]></description><pubDate>Wed, 20 Aug 2025 15:39:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44962927</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=44962927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44962927</guid></item><item><title><![CDATA[New comment by tmvphil in "Show HN: Project management system for Claude Code"]]></title><description><![CDATA[
<p>Waterfall might be what you need when dealing with external human clients, but why would you voluntarily impose it on yourself in miniature?</p>
]]></description><pubDate>Wed, 20 Aug 2025 14:35:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44962363</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=44962363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44962363</guid></item><item><title><![CDATA[New comment by tmvphil in "Show HN: Project management system for Claude Code"]]></title><description><![CDATA[
<p>Sorry, I'm going to be critical:<p>"We follow a strict 5-phase discipline" - so we're doing waterfall again? Does this seem appealing to anyone? The problem is you always get the requirements and spec wrong, and then the AI slavishly delivers something that meets the spec but doesn't meet the need.<p>What happens when you get to the end of your process and you are unhappy with the result? Do you throw it out, rewrite the requirements, and start from scratch? Do you try to edit the requirements, spec, and implementation in a coordinated way? Do you throw out the spec and just vibe code? Do you just accept the bad output and try to build a fix on top of it with a new set of requirements?<p>(Also, the LLM-authored README is hard for me to read. Everything is a bullet point or emoji, and it is not structured in a way that makes clear what it is. I didn't even know what a PRD meant until halfway through.)</p>
]]></description><pubDate>Wed, 20 Aug 2025 13:53:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=44961973</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=44961973</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44961973</guid></item><item><title><![CDATA[New comment by tmvphil in "Imagen 4 is now generally available"]]></title><description><![CDATA[
<p>The way it totally disregards the many explicit instructions given in the "four panel" comic strip.</p>
]]></description><pubDate>Fri, 15 Aug 2025 18:36:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=44915965</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=44915965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44915965</guid></item><item><title><![CDATA[New comment by tmvphil in "PYX: The next step in Python packaging"]]></title><description><![CDATA[
<p>Fundamentally we still have the flat namespace of top-level Python imports, which is the same as the package name for ~95% of projects, so I'm not sure how they could really change that.</p>
]]></description><pubDate>Wed, 13 Aug 2025 19:08:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44892508</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=44892508</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44892508</guid></item><item><title><![CDATA[New comment by tmvphil in "GPT-5"]]></title><description><![CDATA[
<p>Kind of have one with the missing image benchmark: <a href="https://openai.com/index/introducing-gpt-5/#more-honest-responses" rel="nofollow">https://openai.com/index/introducing-gpt-5/#more-honest-resp...</a></p>
]]></description><pubDate>Thu, 07 Aug 2025 17:50:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44827867</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=44827867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44827867</guid></item><item><title><![CDATA[New comment by tmvphil in "Mastercard deflects blame for NSFW games being taken down"]]></title><description><![CDATA[
<p>As opposed to a hypothetical scenario where it is legal to participate in illegal transactions?</p>
]]></description><pubDate>Mon, 04 Aug 2025 11:27:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44784355</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=44784355</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44784355</guid></item><item><title><![CDATA[New comment by tmvphil in "Diet, not lack of exercise, drives obesity, a new study finds"]]></title><description><![CDATA[
<p>I don't think a Chipotle burrito is actually 1600 calories unless you order something non-standard. Probably 800-1100.</p>
]]></description><pubDate>Thu, 24 Jul 2025 19:52:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=44675241</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=44675241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44675241</guid></item><item><title><![CDATA[New comment by tmvphil in "Kiro: A new agentic IDE"]]></title><description><![CDATA[
<p>I turn that stuff off and just use `q chat` for everything, which actually works very well in my experience.</p>
]]></description><pubDate>Tue, 15 Jul 2025 15:45:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=44572361</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=44572361</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44572361</guid></item><item><title><![CDATA[New comment by tmvphil in "François Chollet: The Arc Prize and How We Get to AGI [video]"]]></title><description><![CDATA[
<p>You can define AGI however you want, I suppose, but I would consider it achieved when AI can reach at least about median human performance on all cognitive tasks. Obviously computers are useful well before this point, but it is a clearly meaningful line in the sand, useful enough to merit a dedicated name like "AGI". Constructed tasks like ARC-AGI simply quantify what everyone can already see, which is that current models cannot be used as a drop-in replacement for humans in most cases.<p>To me, superintelligence means either dominating us in our highest intellectual accomplishments, i.e. math, science, philosophy, or literally dominating us by subordinating or eliminating humans. Neither of these things has happened at all.</p>
]]></description><pubDate>Tue, 08 Jul 2025 14:04:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44500092</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=44500092</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44500092</guid></item><item><title><![CDATA[New comment by tmvphil in "François Chollet: The Arc Prize and How We Get to AGI [video]"]]></title><description><![CDATA[
<p>> There's already a big meaningful gap between the things AIs can do which humans can't, so why do you only count as "meaningful" the things humans can do which AIs can't?<p>Where did I say there was nothing meaningful about current capabilities? I'm saying that what is novel about a claim of "AGI" (as opposed to a claim of "a computer does something better than humans", which has been obviously true since the ENIAC) is the ability to do, at some level, <i>everything</i> a normal human intelligence can do.</p>
]]></description><pubDate>Mon, 07 Jul 2025 16:24:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44491937</link><dc:creator>tmvphil</dc:creator><comments>https://news.ycombinator.com/item?id=44491937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44491937</guid></item></channel></rss>