<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mikepavone</title><link>https://news.ycombinator.com/user?id=mikepavone</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 04 Apr 2026 09:18:28 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mikepavone" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mikepavone in "Does that use a lot of energy?"]]></title><description><![CDATA[
<p>I'm not sure comparing it to an hour of use of various other electronic devices is even particularly relevant. I'm sure the median user is running a lot fewer queries than a Claude Code power user, but I would guess it's still more than one in a typical session.</p>
]]></description><pubDate>Wed, 04 Mar 2026 22:20:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47254770</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=47254770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47254770</guid></item><item><title><![CDATA[New comment by mikepavone in "End of an era for me: no more self-hosted git"]]></title><description><![CDATA[
<p>As someone with a self-hosted Mercurial instance dealing with this, I will say that the big names (OpenAI included, but not exclusively them) generally at least use proper user-agents and respect robots.txt, but they are still needlessly aggressive compared to traditional search indexers.<p>There are also scrapers that are hiding behind normal browser user agents. When I looked at IP ranges, at least some of them seemed to be coming from data centers in China.</p>
]]></description><pubDate>Wed, 11 Feb 2026 23:53:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46982929</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=46982929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46982929</guid></item><item><title><![CDATA[New comment by mikepavone in "We replaced H.264 streaming with JPEG screenshots (and it worked better)"]]></title><description><![CDATA[
<p>Nothing stopping you from encoding H.264 at a low frame rate like 5 or 10 fps. In WebRTC, you can actually specify how you want to handle low-bitrate situations with degradationPreference. If set to maintain-resolution, it will prefer sacrificing frame rate.</p>
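<p>For illustration, a minimal sketch of setting that through the browser API (the peer connection argument is assumed, and since degradationPreference lives in the WebRTC extensions spec, browser support varies):</p><pre><code>async function preferResolution(pc: RTCPeerConnection): Promise&lt;void&gt; {
  const sender = pc.getSenders().find((s) => s.track?.kind === "video");
  if (!sender) return;
  // Widen the type: degradationPreference is in the extensions spec and may
  // not be in your TypeScript DOM lib.
  const params = sender.getParameters() as RTCRtpSendParameters & {
    degradationPreference?: "balanced" | "maintain-framerate" | "maintain-resolution";
  };
  // Prefer dropping frame rate over resolution when bandwidth gets tight.
  params.degradationPreference = "maintain-resolution";
  await sender.setParameters(params);
}</code></pre>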
]]></description><pubDate>Wed, 24 Dec 2025 06:01:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46372912</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=46372912</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46372912</guid></item><item><title><![CDATA[New comment by mikepavone in "We replaced H.264 streaming with JPEG screenshots (and it worked better)"]]></title><description><![CDATA[
<p>> They shared the polling code in the article. It doesn't request another jpeg until the previous one finishes downloading.<p>You're right, I don't know how I managed to skip over that.<p>> UDP is not necessary to write a loop.<p>True, but this doesn't really have anything to do with using JPEG either. They basically implemented a primitive form of rate control by only allowing a single frame to be in flight at once. It was easier for them to do that using JPEG because they (by their own admission) seem to have limited control over their encode pipeline.</p>
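<p>For reference, a minimal sketch of that "one frame in flight" pattern (the endpoint URL and drawing callback are hypothetical; this isn't the article's actual code):</p><pre><code>async function pollFrames(url: string, draw: (frame: Blob) => void): Promise&lt;void&gt; {
  for (;;) {
    // Awaiting each fetch before issuing the next means at most one JPEG is
    // ever in flight -- a crude, self-clocking form of rate control.
    const response = await fetch(url, { cache: "no-store" });
    draw(await response.blob());
  }
}</code></pre>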
]]></description><pubDate>Tue, 23 Dec 2025 21:59:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46369971</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=46369971</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46369971</guid></item><item><title><![CDATA[New comment by mikepavone in "We replaced H.264 streaming with JPEG screenshots (and it worked better)"]]></title><description><![CDATA[
<p>You probably won't get acceptable latency this way since you have no control over buffer sizes on all the boxes between you and the receiver. Bufferbloat is a real problem. That said, yeah, if you're getting 30-45 seconds behind at 40 Mbps, you've probably got a fair bit of sender-side buffering happening.</p>
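<p>Back-of-the-envelope on how much data that backlog represents:</p><pre><code>const bitrateMbps = 40;
const bytesPerSecond = (bitrateMbps * 1_000_000) / 8; // 5,000,000 bytes/s

for (const delaySeconds of [30, 45]) {
  const backlogMB = (bytesPerSecond * delaySeconds) / 1_000_000;
  console.log(`${delaySeconds}s behind ~ ${backlogMB} MB queued`); // 150, then 225
}
// 150-225 MB is far more than network-path buffers typically hold,
// which points at sender-side queueing.</code></pre>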
]]></description><pubDate>Tue, 23 Dec 2025 21:23:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46369636</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=46369636</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46369636</guid></item><item><title><![CDATA[New comment by mikepavone in "We replaced H.264 streaming with JPEG screenshots (and it worked better)"]]></title><description><![CDATA[
<p>> When the network is bad, you get... fewer JPEGs. That’s it. The ones that arrive are perfect.<p>This would make sense... if they were using UDP, but they are using TCP. All the JPEGs they send will get there eventually (unless the connection drops). JPEG does not fix your buffering and congestion control problems. What presumably happened here is that the way they implemented their JPEG screenshots includes some mechanism that minimizes the number of frames in flight. This is not some inherent property of JPEG though.<p>> And the size! A 70% quality JPEG of a 1080p desktop is like 100-150KB. A single H.264 keyframe is 200-500KB. We’re sending LESS data per frame AND getting better reliability.<p>H.264 has better coding efficiency than JPEG. For a given target size, you should be able to get better quality from an H.264 IDR frame than a JPEG. There is no fixed size to an IDR frame.<p>Ultimately, the problem here is a lack of bandwidth estimation (apart from the sort of binary "good network"/"cafe mode" thing they ultimately implemented). To be fair, this is difficult to do, and being stuck with TCP makes it a bit more difficult. Still, you can do an initial bandwidth probe and then look for increasing transmission latency as a sign that the network is congested. Back off your bitrate (and if needed reduce frame rate to maintain sufficient quality) until transmission latency starts to decrease again.<p>WebRTC will do this for you if you can use it, which actually suggests a different solution to this problem: use WebSockets when dumb corporate firewall rules get in the way and just use WebRTC for everything else.</p>
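<p>A rough sketch of that backoff loop (the thresholds, multipliers, and bitrate bounds here are made-up assumptions, not a tuned congestion controller):</p><pre><code>interface RateState {
  targetBps: number;  // current encoder bitrate target
  baselineMs: number; // send latency measured by an initial probe
}

// Called once per frame with the measured transmission latency.
function adjustBitrate(state: RateState, sendLatencyMs: number): number {
  const queuingDelay = sendLatencyMs - state.baselineMs;
  if (queuingDelay > 100) {
    // Latency is climbing: queues are filling, so back off multiplicatively.
    state.targetBps = Math.max(state.targetBps * 0.85, 250_000);
  } else if (queuingDelay < 20) {
    // Latency is back near baseline: probe gently for more bandwidth.
    state.targetBps = Math.min(state.targetBps * 1.05, 8_000_000);
  }
  return state.targetBps;
}</code></pre>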
]]></description><pubDate>Tue, 23 Dec 2025 20:00:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46368838</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=46368838</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46368838</guid></item><item><title><![CDATA[New comment by mikepavone in "PS5 now costs less than 64GB of DDR5 memory. RAM jumps to $600 due to shortage"]]></title><description><![CDATA[
<p>Wow, I only paid $265 for 96GB of DDR5 back in April. Same brand (G.SKILL) as the kit in the article too.</p>
]]></description><pubDate>Mon, 24 Nov 2025 22:52:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46040363</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=46040363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46040363</guid></item><item><title><![CDATA[New comment by mikepavone in "IKEA launches new smart home range with 21 Matter-compatible products"]]></title><description><![CDATA[
<p>FWIW, they still seem to have not actually pulled the trigger on the account requirement and they've removed the "Starting soon" portion of the nag bar text in the Hue app (though it's still on the web page you get to when hitting "Learn more"). I do wish they would either get it over with or make it clear they're not actually going ahead with forcing accounts though.</p>
]]></description><pubDate>Thu, 06 Nov 2025 22:51:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45841474</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=45841474</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45841474</guid></item><item><title><![CDATA[New comment by mikepavone in "EU court rules nuclear energy is clean energy"]]></title><description><![CDATA[
<p>> This is not how nuclear works. Nuclear sets a low price that corresponds to its cost, then lets more expensive marginal energy sources set the final price.<p>This may be an accurate description for fully-depreciated nuclear plants, but it doesn't reflect the economics of new-build nuclear at all. You have to consider both operating and capital costs. Nuclear plants are cheap to operate once built, but those operations have to pay off the capital costs. If the load factor is low, then each unit of generated power has to bear a higher portion of the capital costs. If your capital costs are very high, then you either need a very high load factor or very high spot prices to bear those costs.<p>> Nuclear can by the way be modulated +20%/-20%<p>Net demand on CAISO can go from about 2 GW to 30 GW in the summer, and 20 GW of that ramp occurs over just 3 hours. I'm sure you can build nuclear plants that ramp that fast, but you need a lot more than the range you're mentioning here. Regardless, I'm not making an argument about the physics of nuclear power plants, just the economics. Expensive plants generally need high load factors to pay off the capital costs.<p>> nuclear generation in France can go from 25GW to 45GW during a day.<p>Most of France's nuclear plants are old and thus fully depreciated. The only one built recently (Flamanville Unit 3) is a good example of the bad cost trend in nuclear. While it was a bit cheaper than Vogtle Units 3 and 4 in the US on a dollars-per-nameplate-capacity basis, at €19 billion it's still very expensive (and also way over budget).<p>France also has high rates of curtailment, which is not necessarily a huge problem for them since so much of their generation is already carbon-free, but it does suggest they're already hitting the limits of their ability to ramp production up and down. Whether this is an engineering problem or something to do with the structure of their electricity market is a bit unclear to me.<p>> New small modular reactors promise great improvements, as they can be pre-built in factories, require limited maintenance, lower risk, and as a result much lower cost per MW.<p>This has been the promise for years, but so far the low costs have yet to materialize, and SMRs are estimated to have a higher LCOE than traditional plants. Currently only two are actually operational: a demonstration plant in China and a floating power plant using adapted icebreaker reactors in Russia. There are a few more in the pipeline, but they are all at least a couple of years out from actually producing power.</p>
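<p>To make the load factor point concrete, here's a toy calculation (the cost, capacity, and lifetime figures are illustrative assumptions, not quotes for any real plant):</p><pre><code>const capitalCost = 12e9;    // assumed: $12B for a ~1.1 GW plant
const capacityMW = 1100;
const lifetimeYears = 40;
const hoursPerYear = 8760;

// Capital cost spread over lifetime generation rises as load factor falls.
function capitalCostPerMWh(loadFactor: number): number {
  const lifetimeMWh = capacityMW * hoursPerYear * lifetimeYears * loadFactor;
  return capitalCost / lifetimeMWh;
}

console.log(capitalCostPerMWh(0.9).toFixed(2)); // ~34.59 $/MWh as baseload
console.log(capitalCostPerMWh(0.5).toFixed(2)); // ~62.27 $/MWh load-following</code></pre>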
]]></description><pubDate>Sat, 13 Sep 2025 15:52:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45233072</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=45233072</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45233072</guid></item><item><title><![CDATA[New comment by mikepavone in "EU court rules nuclear energy is clean energy"]]></title><description><![CDATA[
<p>You totally can do it with some combination of overbuilding, storage and increased interconnection. It just starts to get expensive the higher the portion of your generation you want to supply with renewables. There's a good Construction Physics article[0] about this (though it simplifies by only looking at solar, batteries and natural gas plants and mostly does not distinguish between peaker and more baseload oriented combined cycle plants).<p>Personally, while I'm not opposed to nuclear, I'm pretty bearish on it. Most places are seeing nuclear get more expensive and not less. Meanwhile solar and batteries are getting cheaper. There's also the issue that nuclear reactors are generally most economical when operating with very high load factors (i.e. baseload generation) because they have high capital costs, but low fuel costs. Renewables make the net-demand curve (demand - renewable generation) very lumpy which generally favors dispatchable (peaker plants, batteries, etc.) generation over baseload.<p>Now a lot of what makes nuclear expensive (especially in the US) is some combination of regulatory posture and lack of experience (we build these very infrequently). We will also eventually hit a limit on how cheap solar and batteries can get. So it's definitely possible current trends will not hold, but current trends are not favorable. Currently the cheapest way to add incremental zero-carbon energy is solar + batteries. By the time you deploy enough that nuclear starts getting competitive on an LCOE basis, solar and batteries will probably have gotten cheaper and nuclear might have gotten more expensive.<p>[0] <a href="https://www.construction-physics.com/p/can-we-afford-large-scale-solar-pv" rel="nofollow">https://www.construction-physics.com/p/can-we-afford-large-s...</a></p>
]]></description><pubDate>Fri, 12 Sep 2025 20:43:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45226552</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=45226552</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45226552</guid></item><item><title><![CDATA[New comment by mikepavone in "I bought the cheapest EV, a used Nissan Leaf"]]></title><description><![CDATA[
<p>I have a 2017 Bolt as my only car and the slow L3 charging is definitely a downside, but I haven't found it to be a huge issue in practice. On a trip long enough to worry about fast-charging, you're going to need to stop to eat periodically anyway, so if you plan your charging around meals you don't end up waiting too long. It obviously gets a bit more annoying on trips that are long enough to require more than one fast-charge per day, but I don't take trips that long frequently.<p>Day-to-day charging is generally all going to be L2 or even L1, depending on how far you drive and how long you're typically parked somewhere with a plug. That will be roughly the same speed in any car. Some cars do have higher-capacity L2 chargers than the Bolt does, but most public L2 stations don't provide the higher current needed to see the difference.</p>
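<p>Rough numbers, if it helps (the efficiency and charger power figures are assumptions; actual values vary by car and circuit):</p><pre><code>const milesPerKWh = 3.8; // assumed EV efficiency
const l1KW = 1.4;        // typical 120 V / 12 A household circuit
const l2KW = 7.2;        // common public L2 station

console.log((l1KW * milesPerKWh).toFixed(1), "mi of range per hour on L1"); // ~5.3
console.log((l2KW * milesPerKWh).toFixed(1), "mi of range per hour on L2"); // ~27.4</code></pre>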
]]></description><pubDate>Fri, 05 Sep 2025 19:57:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45142935</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=45142935</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45142935</guid></item><item><title><![CDATA[New comment by mikepavone in "I bought the cheapest EV, a used Nissan Leaf"]]></title><description><![CDATA[
<p>IIRC, the EUV had an option for normal adaptive cruise control, but I don't think any ever had a Super Cruise-style option.</p>
]]></description><pubDate>Fri, 05 Sep 2025 19:44:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=45142800</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=45142800</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45142800</guid></item><item><title><![CDATA[New comment by mikepavone in "A computer upgrade shut down BART"]]></title><description><![CDATA[
<p>If you compare it to the commuter rail systems in those places, BART feels impressive (though less so with the service cuts). I was a regular rider on the Metro-North New Haven line and had experience with SEPTA and NJT commuter rail, and I was really impressed with BART when I moved out here. Peak frequency was pretty good (at least on the Red line I primarily used) and when things were on time they were very on-time ("on-time" Metro-North trains were always at least a few minutes late in my experience).<p>If you compare it to the NYC subway, it's obviously not impressive at all (though the tech is less dated). As a rapid-transit system, BART isn't exactly commuter rail or a subway, but I think it's closer to the former than the latter.</p>
]]></description><pubDate>Fri, 05 Sep 2025 17:16:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45140990</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=45140990</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45140990</guid></item><item><title><![CDATA[New comment by mikepavone in "The Real Origin of Cisco Systems (1999)"]]></title><description><![CDATA[
<p>As another commenter pointed out, you can do pre-emptive multitasking just fine without an MMU. And as it turns out AmigaOS had just that. All you need for pre-emptive multitasking is a suitable interrupt source to use for task switching.<p>What it did not have was memory protection or virtual memory. You do need an MMU for those.</p>
]]></description><pubDate>Wed, 06 Aug 2025 23:54:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44819216</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=44819216</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44819216</guid></item><item><title><![CDATA[New comment by mikepavone in "Big agriculture mislead the public about the benefits of biofuels"]]></title><description><![CDATA[
<p>Most places growing corn don't have the right climate for sugar cane.</p>
]]></description><pubDate>Mon, 28 Jul 2025 03:47:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44707051</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=44707051</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44707051</guid></item><item><title><![CDATA[New comment by mikepavone in "Ask HN: How many of you are working in tech without a STEM degree?"]]></title><description><![CDATA[
<p>Went to Drexel for CS, but dropped out in my sophomore year back in 2004. Did PHP webdev in my home state of CT until 2011. Moved to the SF Bay Area and transitioned to doing Erlang and C++ for some F2P games for a while. I'm currently a Staff Engineer at Discord focused on AV and other "native" stuff.</p>
]]></description><pubDate>Sat, 26 Jul 2025 00:16:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44690067</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=44690067</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44690067</guid></item><item><title><![CDATA[New comment by mikepavone in "SpaceX Starship 36 Anomaly"]]></title><description><![CDATA[
<p>So looking back at the Falcon 9, there were only 4 failures to complete orbital objectives across 503 launches, and one of those was only a partial failure (the main payload was delivered successfully, but the secondary payload was not due to a single-engine failure). These failures were not consecutive (the 4th, 19th, what would have been the 29th, and 354th). Now, apart from the first launch or two (COTS Demo Flight 1 had some useful payload, but still seemed pretty disposable), these all had real payloads, so they were less experimental than these Starship test flights.<p>If we compare to the propulsive landing campaign for the Falcon 9 1st stage, it's a bit more favorable. The first 8 attempts had 4 failures, 3 controlled splashdowns (no landing planned) and 1 success. I think in general it felt like they were making progress on all of these though. Similarly, for the Falcon 1 launches they had 3 consecutive failures before their first success, but launch 2 did much better than launch 1. Launch 3 was a bit of a setback, but had a novel failure mode (residual first-stage thrust resulted in a collision after stage separation).<p>Starship Block 2 has had 4 consecutive failures that seem to be on some level about keeping the propellant where it's supposed to be, with the first 2 failures happening in roughly the same part of the flight and this 4th one happening during pre-launch testing.</p>
]]></description><pubDate>Thu, 19 Jun 2025 17:00:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44320451</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=44320451</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44320451</guid></item><item><title><![CDATA[New comment by mikepavone in "I convinced HP's board to buy Palm and watched them kill it"]]></title><description><![CDATA[
<p>Yeah, Android had good support for multitasking from the start, though at least some early devices did not really have enough RAM for it to work well.</p>
]]></description><pubDate>Fri, 13 Jun 2025 18:26:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44270923</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=44270923</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44270923</guid></item><item><title><![CDATA[New comment by mikepavone in "Why does my ripped CD have messed up track names? And why is one track missing?"]]></title><description><![CDATA[
<p>This is a small point, but calling the 33-byte unit a sector in CDDA is a bit misleading and probably incorrect for the quantity being labeled. This is a channel data frame and contains 24 bytes of audio data, 1 byte of subcode data (except for the channel data frames that have sync symbols instead), and the rest is error correction. This is the smallest grouping of data in CDDA, but it's not really an individually addressable unit.<p>98 of these channel data frames make up a timecode frame, which represents 1/75th of a second of audio and has 2352 audio data bytes and 96 subcode bytes (2 frames have sync codes instead), with the remainder being sync and error correction. Timecode frames are addressable (via the timecodes embedded in the subcode data) and are the unit referred to in the TOC. This is probably what's being called a sector here. Notably, a CD-ROM sector corresponds 1:1 with a timecode frame.<p>Note: Red Book actually just confusingly calls both of these things frames and does not use the terms "channel data frame" or "timecode frame".</p>
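<p>The arithmetic cross-checks against the CDDA sample format, if you want to verify it (constants per the description above):</p><pre><code>const audioBytesPerChannelFrame = 24;   // out of each 33-byte channel frame
const channelFramesPerTimecodeFrame = 98;
const timecodeFramesPerSecond = 75;

const audioBytesPerTimecodeFrame =
  audioBytesPerChannelFrame * channelFramesPerTimecodeFrame;            // 2352
const subcodeBytesPerTimecodeFrame = channelFramesPerTimecodeFrame - 2; // 96

// Cross-check against the CDDA sample format: 44,100 Hz x 2 channels x 16-bit.
const audioBytesPerSecond =
  audioBytesPerTimecodeFrame * timecodeFramesPerSecond;
console.log(audioBytesPerSecond === 44100 * 2 * 2); // true (176,400 bytes/s)</code></pre>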
]]></description><pubDate>Thu, 12 Jun 2025 23:57:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44264446</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=44264446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44264446</guid></item><item><title><![CDATA[New comment by mikepavone in "The PS3 Licked the Many Cookie"]]></title><description><![CDATA[
<p>> The Xbox 360 doubled down on this while the PS3 tried to do clever things with an innovative architecture.<p>I don't think this is really an accurate description of the 360 hardware. The CPU was much more conventional than the PS3's, but still custom (derived from the PPE in the Cell, but with an extended version of the VMX instruction set). The GPU was the first to use a unified shader architecture. Unified memory was also fairly novel in the context of a high-performance 3D game machine. The use of eDRAM for the framebuffer is not novel (the GameCube's Flipper GPU had this previously), but also wasn't something you generally saw in off-the-shelf designs. Meanwhile the PS3 had an actual off-the-shelf GPU.<p>These days all the consoles have unified shaders and memory, but I think that just speaks to the success of what the 360 pioneered.<p>Since then, consoles have gotten a lot closer to commodity hardware of course. They're custom parts (well, except the original Switch I guess), but the changes from the off-the-shelf stuff are a lot smaller.</p>
]]></description><pubDate>Fri, 11 Apr 2025 23:31:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=43659841</link><dc:creator>mikepavone</dc:creator><comments>https://news.ycombinator.com/item?id=43659841</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43659841</guid></item></channel></rss>