<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jgraham</title><link>https://news.ycombinator.com/user?id=jgraham</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 04:28:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jgraham" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jgraham in "Mozilla Thunderbolt"]]></title><description><![CDATA[
<p>(I work on Firefox Web Compatibility)<p>If you have specific sites that aren't working, please let us know and we can investigate and try to fix them.<p>The usual reporting channels are using <a href="https://webcompat.com" rel="nofollow">https://webcompat.com</a> or the "Report Broken Site" tool in the Firefox menu. Of course I'm also happy to take bug reports here if you (or anyone else) have them.</p>
]]></description><pubDate>Thu, 16 Apr 2026 18:43:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47797712</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=47797712</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47797712</guid></item><item><title><![CDATA[New comment by jgraham in "Cybersecurity looks like proof of work now"]]></title><description><![CDATA[
<p>I think if prediction 1 is true (that it becomes cheap to clone existing software in a way that doesn't violate copyright law), the response will not be purely technical (moving to thin clients, or otherwise trying to technically restrict the access surface to make reverse engineering harder). Instead I'd predict that companies look to the law to replace the protections that they previously got from copyright.<p>Obvious possibilities include:<p>* More use of software patents, since these apply to underlying ideas, rather than specific implementations.<p>* Stronger DMCA-like laws which prohibit breaking technical provisions designed to prevent reverse engineering.<p>Similarly, if the people predicting that humans are going to be required to take ultimate responsibility for the behaviour of software are correct, then it clearly won't be possible for that to be any random human. Instead you'll need legally recognised credentials to be allowed to ship software, similar to the way that doctors or engineers work today.<p>Of course these specific predictions might be wrong. I think it's fair to say that nobody really knows what might have changed in a year, or where the technical capabilities will end up. But I see a lot of discussions and opinions that assume zero feedback from the broader social context in which the tech exists, which suggests they're missing a big part of the picture.</p>
]]></description><pubDate>Thu, 16 Apr 2026 08:18:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47790171</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=47790171</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47790171</guid></item><item><title><![CDATA[New comment by jgraham in "iPhone 17 Pro Demonstrated Running a 400B LLM"]]></title><description><![CDATA[
<p>Power in general.<p>Your time-average power budget for things that run on phones is about 0.5W (batteries are about 10Wh and should last at least a day). That's about three orders of magnitude lower than the GPUs running in datacenters.<p>Even if battery technology improves you can't have a phone running hot, so there are strong physical limits on the total power budget.<p>More or less the same applies to laptops, although there you get maybe an additional order of magnitude.</p>
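The back-of-the-envelope arithmetic in that comment can be checked directly. A sketch using the comment's own round numbers (10 Wh battery, one-day lifetime); the 400 W datacenter GPU figure is an assumed order-of-magnitude value for illustration, not a specific device's spec:

```python
# Time-averaged phone power budget: a ~10 Wh battery lasting at least a day.
battery_wh = 10.0        # typical phone battery capacity (from the comment)
hours_per_day = 24.0

avg_power_w = battery_wh / hours_per_day  # time-averaged draw the battery can sustain
print(f"{avg_power_w:.2f} W")             # -> 0.42 W, i.e. about 0.5 W

# A single datacenter GPU draws on the order of hundreds of watts
# (400 W assumed here), roughly three orders of magnitude more.
gpu_power_w = 400.0
print(f"{gpu_power_w / avg_power_w:.0f}x")  # -> 960x
```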
]]></description><pubDate>Mon, 23 Mar 2026 19:01:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47493717</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=47493717</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47493717</guid></item><item><title><![CDATA[New comment by jgraham in "Global warming has accelerated significantly"]]></title><description><![CDATA[
<p>China has now had flat CO2 emissions for two years, and experienced a decline in overall CO2 emissions during 2025[1]. Part of this is that they're deploying way more renewables than basically any other large economy [2].<p>They've also pivoted their industrial strategy so that basically the entire green energy sector depends on Chinese supply chains. This is significantly contributing to their economic growth [3].<p>I don't know to what extent taxation in Europe contributed to China's decision making here, but it presumably created a market for green energy and therefore helped solidify the economics.<p>This is of course not to say that there's nothing to criticize in China's environmental policies; there certainly is. But the trope of "why should we do anything because China won't" turns out to be spectacularly ill-informed. Indeed I think it makes more sense to ask the opposite: what are the likely consequences now that China has positioned itself as the global centre of green energy, and what should other countries be doing to ensure that they're not left behind?<p>[1] <a href="https://www.carbonbrief.org/analysis-chinas-co2-emissions-have-now-been-flat-or-falling-for-21-months/" rel="nofollow">https://www.carbonbrief.org/analysis-chinas-co2-emissions-ha...</a>
[2] <a href="https://www.carbonbrief.org/g7-falling-behind-china-as-worlds-wind-and-solar-plans-reach-new-high-in-2025/" rel="nofollow">https://www.carbonbrief.org/g7-falling-behind-china-as-world...</a>
[3] <a href="https://www.carbonbrief.org/analysis-clean-energy-drove-more-than-a-third-of-chinas-gdp-growth-in-2025/" rel="nofollow">https://www.carbonbrief.org/analysis-clean-energy-drove-more...</a></p>
]]></description><pubDate>Fri, 06 Mar 2026 16:54:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47277556</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=47277556</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47277556</guid></item><item><title><![CDATA[New comment by jgraham in "If AI writes code, should the session be part of the commit?"]]></title><description><![CDATA[
<p>To be clear: I don't think it will happen.<p>But the point of comparison is something like the HTML specification. That's supposed to be a document that is detailed enough about how to create an implementation that multiple different groups can produce compatible implementations without having any actual code in common.<p>In practice it still doesn't quite work: the specification has to be supplemented with testsuites that all implementations use, and even then there often needs to be a feedback loop where new implementations find new ambiguities or errors, and the specification needs to be updated. Plus implementors often "cheat" and examine each other's behaviour or even code, rather than just using the specification.<p>Nevertheless it's perhaps the closest thing I'm familiar with to an existing practice where the plan is considered canonical, and therefore worth thinking about as a model for what "code as implementation detail" would entail in other situations.</p>
]]></description><pubDate>Tue, 03 Mar 2026 09:24:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47230141</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=47230141</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47230141</guid></item><item><title><![CDATA[New comment by jgraham in "If AI writes code, should the session be part of the commit?"]]></title><description><![CDATA[
<p>> it's foolish to fight the future<p>And yet, the premise of the question assumes that it's possible in this case.<p>Historically having produced a piece of software to accomplish some non-trivial task implied weeks, months, or more of developing expertise and painstakingly converting that expertise into a formulation of the problem precise enough to run on a computer.<p>One could reasonably assume that any reasonable-looking submission was in fact the result of someone putting in the time to refine their understanding of the problem, and express it in code. By discussing the project one could reasonably hope to learn more about their understanding of the problem domain, or about the choices they made when reifying that understanding into an artifact useful for computation.<p>Now that no longer appears to be the case.<p>Which isn't to say there's no longer any skill involved in producing well engineered software that continues to function over time. Or indeed that there aren't classes of software that require interesting novel approaches that AI tooling can't generate. But now anyone with an idea, some high level understanding of the domain, and a few hundred dollars a month to spend, can write out a plan and ask an AI provider to generate them software to implement that plan. That software may or may not be good, but determining that requires a significant investment of time.<p>That shift fundamentally changes the dynamics of "Show HN" (and probably much else besides).<p>It's essentially the same problem that art forums had with AI-generated work. Except they have an advantage: people generally agree that there's some value to art being artisan; the skill and effort that went into producing it are — in most cases — part of the reason people enjoy consuming it. That makes it rather easy to at least develop a policy to exclude AI, even if it's hard to implement in practice.<p>But the most common position here is that the value of software is what it does. 
Whilst people might intellectually prefer 100 lines of elegant lisp to 10,000 lines of spaghetti PHP to solve a problem, the majority view here is that if the latter provides more economic value — e.g. as the basis of a successful business — then it's better.<p>So now the cost of verifying things for interestingness is higher than the cost of generating plausibly-interesting things, and you can't even have a blanket policy that tries to enforce a minimum level of effort on the submitter.<p>To engage with the original question: if one was serious about extracting the human understanding from the generated code, one would probably take a leaf from the standards world where the important artifact is a specification that allows multiple parties to generate unique, but functionally equivalent, implementations of an idea. In the LLM case, that would presumably be a plan detailed enough to reliably one-shot an implementation across several models.<p>However I can't see any incentive structure that might cause that to become a common practice.</p>
]]></description><pubDate>Mon, 02 Mar 2026 12:57:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47217407</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=47217407</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47217407</guid></item><item><title><![CDATA[New comment by jgraham in "In Europe, wind and solar overtake fossil fuels"]]></title><description><![CDATA[
<p>> But at a national level the data is compelling. I'm convinced by the Environmental Kuznets Curve.<p>Which data do you find compelling?<p>For people who don't know, the Environmental Kuznets Curve is basically the hypothesis that as economies grow past a certain point they naturally start to cause less environmental damage.<p>As far as I can tell the main empirical evidence in favour of this is the fact that some western countries have managed to maintain economic growth whilst making reductions to their carbon emissions. This has, of course, partially been driven by offshoring especially polluting industries, but also as a result of technological developments like renewable energy, and BEVs.<p>On the other hand, taking a global sample it's still rather clear that there's a strong correlation between wealth and carbon emissions, both at the individual scale and at the level of countries.<p>It's also clear that a lot of the gains that have been made in, say, Europe have been low-hanging fruit that won't be easy to repeat. For example migrating off coal power has a huge impact, but going from there to a fully clean grid is a larger challenge.<p>We also know that there are a bunch of behaviours that come with wealth which have a disproportionately negative effect on the environment. For example, rich people (globally) consume more meat, and take more flights. Those are both problems without clear solutions.<p>(FWIW I agree that solar power is somewhat regressive, but just for the normal "Vimes Boots Theory" reasons that anyone who is able to install solar will save money in the medium term. That requires the capital for the equipment — which is rapidly getting cheaper — but also the ability to own land or a house to install the equipment on. The latter favours the already well off. There are similar problems with electric cars having higher upfront costs but lower running costs. 
The correct solution is not to discourage people from using things, but to take the cost of being poor into account in other areas of public policy).</p>
]]></description><pubDate>Thu, 22 Jan 2026 20:16:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46724613</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=46724613</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46724613</guid></item><item><title><![CDATA[New comment by jgraham in "Himalayas bare and rocky after reduced winter snowfall, scientists warn"]]></title><description><![CDATA[
<p>> I said nothing about a monotonic relationship.<p>You made a scale-free claim about increasing greenness with increasing CO2 concentration. That implies a monotonic relationship.<p>> The only debate is over what might happen in the future, which, again, is fortune telling<p>The idea that using models of physical systems to predict their future evolution is "fortune telling" will surprise many scientists. Indeed, you yourself have proposed a simple model and used it to make a prediction about the future ("the world will be greener in a high-CO2 environment"), and used linear extrapolation of the past to justify the adequacy of your model.<p>That's not necessarily a bad starting point, but when actual studies with more complex models show different behaviours you should consider there's a possibility you're over-confident in your predictions.<p>Anyway, I suspect this conversation has become rather pointless. It's always unclear online to what extent people are engaging in good faith, but if it was then I'm rather sure you've now mentally pigeonholed me as a "doomer" who can't be reasoned with.</p>
]]></description><pubDate>Mon, 12 Jan 2026 19:58:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46593402</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=46593402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46593402</guid></item><item><title><![CDATA[New comment by jgraham in "Himalayas bare and rocky after reduced winter snowfall, scientists warn"]]></title><description><![CDATA[
<p>> you can't easily separate out CO2 concentration from the other impacts of increased CO2
>> I never said you could?<p>I took the fact that you explicitly mentioned "high-CO2 environment" and claimed there was no room for argument over the "fact"s as an indication that you were trying to separate out the impact of CO2 from other factors caused by climate change such as heat stress and drought. If that wasn't the case then apologies for misunderstanding.<p>> That paper is talking about a net reduction in biomass due to projected losses in places with temperature increases exceeding 10 degrees C.<p>The abstract says:<p>| with great biomass reductions in regions where mean annual temperatures exceeded 10 °C<p>Unless the abstract is especially badly written that suggests that it's not 10°C _change_ but 2°C change leading to biomass loss in areas that are already at 10°C on average.<p>> IPCC report<p>Thanks, that's a useful reference! Do you have a link to the final report? That one seems to be a draft and I didn't find the right published version (but there are many so I'm sure I'm missing it).<p>I note the paragraph you quoted concludes:<p>> The increased greening is largely consistent with CO2 fertilization at the global scale, with other changes being noteworthy at the regional level (Piao et al., 2020); examples include agricultural intensification in China and India (Chen et al., 2019; Gao et al., 2019) and temperature increases in the northern high latitudes (Kong et al., 2017; Keenan and Riley, 2018) and in other areas such as the Loess Plateau in central China (Wang et al., 2018). Notably, some areas (such as parts of Amazonia, central Asia, and the Congo basin) have experienced browning (i.e., decreases in green leaf area and/or mass) (Anderson et al., 2019; Gottschalk et al., 2016; Hoogakker et al., 2015). 
Because rates of browning have exceeded rates of greening in some regions since the late 1990s, the increase in global greening has been somewhat slower in the last two decades<p>So it sounds like a combination of the CO2 increases up to about the year 2000, along with agricultural intensification and various other factors have indeed increased the amount of plant cover, but we are already seeing changes to that picture with further rises to CO2 levels.<p>> You spent a lot of words arguing with me about things I didn't say.<p>Well you started with<p>> The world will be greener in a high-CO2 environment. There’s no legitimate argument over that fact.<p>And my central point is that the model you're implying there is one in which there's a monotonic relationship between CO2 levels and plant growth. However in reality things are clearly more complex than that, and there is indeed legitimate argument over what factors are dominant in different scenarios.<p>Your claim that things will only change over long-enough timescales so that you don't have to worry about it also seems to lack evidence. In systems with significant feedback loops it seems dangerous to assume that changes will only happen slowly unless you're very confident that you fully understand all the system dynamics. With climate change it's clear that we don't fully understand the system, and some changes are happening faster than earlier models predicted. So _maybe_ we have a few centuries to figure out how to move global agriculture to northern latitudes, and deal with more variable conditions, but from a risk-analysis point of view it seems like a rather poor strategy.</p>
]]></description><pubDate>Mon, 12 Jan 2026 17:34:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46591625</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=46591625</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46591625</guid></item><item><title><![CDATA[New comment by jgraham in "Himalayas bare and rocky after reduced winter snowfall, scientists warn"]]></title><description><![CDATA[
<p>> The world will be greener in a high-CO2 environment. There’s no legitimate argument over that fact.<p>However it's important to remember that the world isn't a high school physics experiment, and you can't easily separate out CO2 concentration from the other impacts of increased CO2:<p>| Climate change can prolong the plant growing season and expand the areas suitable for crop planting, as well as promote crop photosynthesis thanks to increased atmospheric carbon dioxide concentrations. However, an excessive carbon dioxide concentration in the atmosphere may lead to unbalanced nutrient absorption in crops and hinder photosynthesis, respiration, and transpiration, thus affecting crop yields. Irregular precipitation patterns and extreme weather events such as droughts and floods can lead to hypoxia and nutrient loss in the plant roots. An increase in the frequency of extreme weather events directly damages plants and expands the range of diseases and pests. In addition, climate change will also affect soil moisture content, temperature, microbial activity, nutrient cycling, and quality, thus affecting plant growth.<p>[<a href="https://www.mdpi.com/2073-4395/14/6/1236" rel="nofollow">https://www.mdpi.com/2073-4395/14/6/1236</a>]<p>In global models of climate change the overall impact on plant growth is significant, but not positive:<p>| Global above ground biomass is projected to decline by 4 to 16% under a 2 °C increase in climate warming<p>[<a href="https://www.pnas.org/doi/10.1073/pnas.2420379122" rel="nofollow">https://www.pnas.org/doi/10.1073/pnas.2420379122</a>]<p>> Certainly it’s more favorable for growth of plants that make food<p>That does not seem to be what agricultural researchers believe:<p>| In wheat a mean daily temperature of 35°C caused total failure of the plant, while exposure to short episodes (2–5 days) of HS (>24°C) at the reproductive stage (start of flowering) resulted in substantial damage to floret fertility leading to an estimated 6.0 ± 
2.9% loss in global yield with each degree-Celsius (°C) increase in temperature<p>| Although it might be argued that the ‘fertilization effect’ of increasing CO2 concentration may benefit crop biomass thus raising the possibility of an increased food production, emerging evidence has demonstrated a reduction in crop yield if increased CO2 is combined with high temperature and/or water scarcity, making a net increase in crop productivity unlikely<p>| When the combination of drought and heatwave is considered, production losses considering cereals including wheat (−11.3%), barley (−12.1%) and maize (−12.5%), and for non-cereals: oil crops (−8.4%), olives (−6.2%), vegetables (−3.5%), roots and tubers (−4.5%), sugar beet (−8.8%), among others<p>[<a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10796516/" rel="nofollow">https://pmc.ncbi.nlm.nih.gov/articles/PMC10796516/</a>]</p>
]]></description><pubDate>Mon, 12 Jan 2026 13:24:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46588150</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=46588150</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46588150</guid></item><item><title><![CDATA[New comment by jgraham in "A Note on Fil-C"]]></title><description><![CDATA[
<p>Notice that it says "almost all programs" and not "almost all _C_ programs".<p>I think if you understand the meaning of "crash" to include any kind of unhandled state that causes the program to terminate execution then it includes things like unwrapping a None value in Rust or any kind of uncaught exception in Python.<p>That interpretation makes sense to me in terms of the point he's making: Fil-C replaces memory unsafety with program termination, which is strictly worse than e.g. (safe) Rust which replaces memory unsafety with a compile error. But it's also true that most programs (irrespective of language, and including Rust) have some codepaths in which they can terminate when the assumed invariants aren't upheld, so in practice that's often an acceptable behaviour, as long as the defect rate is low enough.<p>Of course there is also a class of programs for which that behaviour is not acceptable, and in those cases Fil-C (along with most other languages, including Rust absent significant additional tooling) isn't appropriate.</p>
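The distinction being drawn (terminating at runtime on a violated assumption versus making the failure case explicit) can be illustrated in any memory-safe language. A minimal Python sketch; the function names are made up for the example:

```python
# Like Fil-C or a Rust unwrap, indexing here turns a violated invariant
# ("the list is non-empty") into runtime termination (an IndexError).
def head(items):
    return items[0]

# A total variant surfaces the failure case in the return type instead,
# forcing the caller to handle it rather than terminating.
def head_or_none(items):
    return items[0] if items else None

print(head([1, 2, 3]))    # -> 1
print(head_or_none([]))   # -> None (head([]) would raise IndexError)
```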
]]></description><pubDate>Fri, 07 Nov 2025 10:09:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45845045</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=45845045</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45845045</guid></item><item><title><![CDATA[New comment by jgraham in "Ladybird passes the Apple 90% threshold on web-platform-tests"]]></title><description><![CDATA[
<p>As someone who's been quite heavily involved with web-platform-tests, I'd caution against any use of the test pass rate as a metric for anything.<p>That's not to belittle the considerable achievements of Ladybird; their progress is really impressive, and if web-platform-tests are helping their engineering efforts I consider that a win. New implementations of the web platform, including Ladybird, Servo, and Flow, are exciting to see.<p>However, web-platform-tests specifically decided to optimise for being a useful engineering tool rather than being a good metric. That means there's no real attempt to balance the testsuite across the platform; for example a surprising fraction of the overall test count is encoding tests because they're easy to generate, not because it's an especially hard problem in browser development.<p>We've also consciously wanted to ensure that contributing tests is low friction, both technically and socially, in order that people don't feel inclined to withhold useful tests. Again that's not the tradeoff you make for a good metric, but is the right one for a good engineering resource.<p>The Interop Project is designed with different tradeoffs in mind, and overcomes some of these problems by selecting a subset of tests which are broadly agreed to represent a useful level of coverage of an important feature. But unfortunately the current setup is designed for engines that are already implementing enough features to be usable as general purpose web-browsers.</p>
]]></description><pubDate>Mon, 06 Oct 2025 19:46:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45495456</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=45495456</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45495456</guid></item><item><title><![CDATA[New comment by jgraham in "Oxford loses top 3 university ranking in the UK"]]></title><description><![CDATA[
<p>In addition, the colleges have a lot of data about the people they interview and how well they do during the degree programme.<p>My understanding (based on a discussion with one Natural Sciences admissions tutor at one Cambridge college nearly 20 years ago, so strictly speaking this may not be true in general, but I'd be surprised if it wasn't common) is that during the admissions process, including interviews, applicants are scored so they can be stack-ranked, and the top N given offers. Then, for the students that are accepted, and get the required exam results, the college also records their marks at each stage of their degree. To verify the admissions process is fair, these marks are compared with the original interview ranking, expecting that interview performance is (on average) correlated with later degree performance.<p>I don't know if they go further and build models to suggest the correct offer to give different students based on interview performance, educational background, and other factors, but it seems at least plausible that one could try that kind of thing, and have the data to prove that it was working.<p>Anyway my guess is that of the population of people who would do well if they got in, but don't, the majority are those whose background makes them believe it's "not for the likes of me", and so never apply, rather than people who went to private schools, applied, and didn't get a place.<p>(Also a Cambridge alumnus from a state school, FWIW.)</p>
]]></description><pubDate>Sun, 21 Sep 2025 19:56:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45326089</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=45326089</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45326089</guid></item><item><title><![CDATA[New comment by jgraham in "How can England possibly be running out of water?"]]></title><description><![CDATA[
<p>It's true, see <a href="https://www.carbonbrief.org/factcheck-why-expensive-gas-not-net-zero-is-keeping-uk-electricity-prices-so-high/" rel="nofollow">https://www.carbonbrief.org/factcheck-why-expensive-gas-not-...</a><p>From that article:<p>> The UK’s electricity market operates using a system known as “marginal pricing”. This means that all of the power plants running in each half-hour period are paid the same price, set by the final generator that has to switch on to meet demand, which is known as the “marginal” unit.<p>> While this is unfamiliar to many people, marginal pricing is far from unique to the UK’s electricity market. It is used in most electricity markets in Europe and around the world, as well as being widely used in commodity markets in general.<p>The thing that's unique about the UK is that the marginal price is almost always (98% of the time) set by the price of gas. That means when the gas price increases, the wholesale price of electricity, and hence consumer bills, increase in direct response.<p>Of course the situation is also made worse by the fact that gas is used directly for heating and cooking in a high proportion of British homes.</p>
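The marginal-pricing mechanism described in that quote can be sketched as a toy merit-order auction. All capacities and prices below are invented round numbers for illustration, not actual UK market data:

```python
# Each generator offers (name, capacity_mw, price_per_mwh); the cheapest
# offers are dispatched first. Under marginal pricing, every dispatched
# generator is paid the price of the last unit needed to meet demand.
offers = [
    ("wind",   10_000,   0),   # near-zero marginal cost
    ("solar",   5_000,   0),
    ("nuclear", 6_000,  20),
    ("gas",    20_000, 120),   # gas usually sets the UK marginal price
]

def clearing_price(offers, demand_mw):
    supplied = 0
    for name, capacity, price in sorted(offers, key=lambda o: o[2]):
        supplied += capacity
        if supplied >= demand_mw:
            return price  # the marginal unit's price, paid to everyone
    raise ValueError("demand exceeds total offered capacity")

print(clearing_price(offers, 15_000))  # renewables cover demand -> 0
print(clearing_price(offers, 25_000))  # gas is the marginal unit -> 120
```

This is why a gas price rise feeds directly into the wholesale electricity price whenever gas is the marginal unit, even though most of the dispatched generation is cheaper.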
]]></description><pubDate>Tue, 09 Sep 2025 09:04:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=45179447</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=45179447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45179447</guid></item><item><title><![CDATA[New comment by jgraham in "Merlin Bird ID"]]></title><description><![CDATA[
<p>Yes! It's a rare example of an app that instead of trying to capture your attention into a virtual environment, helps you to direct it outwards into the real world.<p>The sound id in particular is just an amazing way to really extend what's possible for most people, and provides an on-ramp for people to identify more birds by ear alone (and in general to pay more attention to sound when in nature).<p>I might argue that Merlin — and especially eBird — lean a bit too heavily towards the competitive "high scores" view of birding; given the impact of climate change on bird populations, encouraging people to travel the world and see as many species as possible is clearly problematic.<p>But that's a minor quibble, and Merlin remains one of the few apps I'd unconditionally recommend to anyone with the faintest chance they'd use it.</p>
]]></description><pubDate>Wed, 04 Jun 2025 18:23:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44183872</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=44183872</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44183872</guid></item><item><title><![CDATA[New comment by jgraham in "Mozilla Firefox – Official GitHub repo"]]></title><description><![CDATA[
<p>Gecko and Firefox have been using Bugzilla for more than 25 years at this point. There are a lot of internal workflows, tooling and processes that are really dependent on the specific functionality in Bugzilla. I think it would be an extremely high risk project to try and replace Bugzilla with GitHub issues.<p>That said, there are also other teams and projects who do use GitHub for issue tracking. However the closer to Firefox/Gecko you are the harder this gets. For example it's hard to cross-reference GitHub issues with Bugzilla issues, or vice versa. I've seen people try to build two-way sync between GitHub and Bugzilla, but there are quite considerable technical challenges in trying to make that kind of cross-system replication work well.<p>However your point that GitHub makes issue submission easier for people who aren't deeply embedded in the project is a good one. I'm directly involved with webcompat.com, which aims to collect reports of broken sites from end users. It uses a GitHub issue tracker as the backend, allowing developers to report directly through GitHub, and a web-form frontend so that people without even a GitHub account can still submit reports (as you can imagine quite some effort is required here to ensure that it's not overwhelmed by spam). So finding ways to enable users to report issues is something we care about.<p>However, even in the webcompat.com case where collecting issues from people outside the project is the most important concern, we've taken to moving confirmed reports into Bugzilla, so that they can be cross-referenced with the corresponding platform bugs, more easily used as inputs to prioritization, etc. 
Having a single source of truth for all bugs turns out to be very useful for process reasons as well as technical ones.<p>So, again without being any kind of decision maker here, I think it's very unlikely that Firefox will move entirely to GitHub issues in the foreseeable future; it's just too challenging given the history and requirements. Having some kind of one-way sync from GitHub to Bugzilla seems like a more tractable approach from an engineering point of view, but even there it's likely that there are non-trivial costs and tradeoffs involved.</p>
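As an illustration of the webform-to-issue-tracker pattern, a minimal backend could translate a user's broken-site report into a GitHub issue via the REST API. This is a hypothetical sketch, not webcompat.com's actual code; the function names and label are invented, though the GitHub "create an issue" endpoint and headers are real:

```python
import json
from urllib import request

GITHUB_API = "https://api.github.com/repos/{owner}/{repo}/issues"

def build_issue_payload(url, description, browser):
    """Turn a web-form broken-site report into a GitHub issue payload."""
    body = (
        f"**URL**: {url}\n"
        f"**Browser**: {browser}\n\n"
        f"{description}"
    )
    return {
        "title": f"{url} - site is not usable",
        "body": body,
        "labels": ["browser-firefox"],  # hypothetical triage label
    }

def file_issue(owner, repo, token, payload):
    """POST the payload to the GitHub issues API.

    Needs a token with permission to create issues; not called in this demo.
    """
    req = request.Request(
        GITHUB_API.format(owner=owner, repo=repo),
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Build (but don't send) a payload from a hypothetical form submission.
payload = build_issue_payload(
    "https://example.com", "Login button does nothing", "Firefox 138"
)
print(payload["title"])
```

The spam problem mentioned above lives in front of this step: in practice you would validate and rate-limit form submissions before anything reaches the issue tracker.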
]]></description><pubDate>Tue, 13 May 2025 11:18:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=43971672</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=43971672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43971672</guid></item><item><title><![CDATA[New comment by jgraham in "Mozilla Firefox – Official GitHub repo"]]></title><description><![CDATA[
<p>This is all true, but as the sibling comment says, not really related to the change discussed here.<p>Firefox does indeed have a large CI system and ends up running thousands of jobs on each push to main (formerly mozilla-central), covering builds, linting, multiple testsuites, performance testing, etc., across multiple platforms and configurations. In addition there are "try" pushes for work-in-progress patches, and various other kinds of non-CI tasks (e.g. fuzzing). That is all run on our taskcluster system, and I don't believe there are any plans to change that.</p>
]]></description><pubDate>Tue, 13 May 2025 08:58:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43970938</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=43970938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43970938</guid></item><item><title><![CDATA[New comment by jgraham in "Mozilla Firefox – Official GitHub repo"]]></title><description><![CDATA[
<p>Again, I can only comment from the perspective of a user; I haven't worked on the VCS infrastructure.<p>The obvious generic challenges are availability and security: Firefox has contributors around the globe, and if the VCS server goes down it's hard to get work done (yes, you can work locally, but you can't land patches or ship fixes to users). Firefox is also a pretty high-value target, and an attacker with access to the VCS server would be a problem.<p>To be clear, I'm not claiming that there were specific problems related to these things; just that they represent challenges that Mozilla has to deal with when self-hosting.<p>The other obvious problem at scale is performance. With a large repo, both read and write performance are concerns. Cloning the repo is the first step that new contributors need to take, and if that's slow it can be a dealbreaker for many people, especially on less reliable internet. Our hg backend was using replication to help with this [1], but you can see from the link how much complexity that adds.<p>Firefox has enough contributors that write contention also becomes a problem; for example, pushing to the "try" repo (to run local patches through CI) often ended up taking tens of minutes waiting for a lock. This was (recently) mostly hidden from end users by pushing patches through a custom "lando" system that asynchronously queues the actual VCS push rather than blocking the user locally, but that's more of a mitigation than a real solution (lando is still required with the GitHub backend, because it is now the place where custom VCS rules, which previously lived directly in the hg server but don't map onto GitHub features, are enforced).<p>[1] <a href="https://mozilla-version-control-tools.readthedocs.io/en/latest/hgmo/replication.html#hgmo-replication" rel="nofollow">https://mozilla-version-control-tools.readthedocs.io/en/late...</a></p>
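The queueing idea behind that mitigation can be sketched generically. This is an illustrative model, not lando's actual implementation: submissions return immediately, and a single worker serialises the slow, lock-taking VCS writes, so clients never wait on the repository lock themselves:

```python
import queue
import threading

class PushQueue:
    """Accept pushes immediately; a single worker applies them in order,
    so clients don't block on the repository's write lock."""

    def __init__(self, apply_push):
        self._apply = apply_push          # the slow, lock-taking VCS write
        self._queue = queue.Queue()       # FIFO: pushes land in submission order
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, patch):
        """Returns at once; the push is applied asynchronously."""
        self._queue.put(patch)

    def _run(self):
        while True:
            patch = self._queue.get()
            self._apply(patch)
            self._queue.task_done()

    def drain(self):
        """Block until every queued push has been applied."""
        self._queue.join()

# Example: record the order in which pushes are applied.
landed = []
pq = PushQueue(landed.append)
for name in ("patch-a", "patch-b", "patch-c"):
    pq.submit(name)
pq.drain()
print(landed)
```

The trade-off is exactly the one described above: the user gets a fast acknowledgement, but the push hasn't actually landed yet, so failures have to be reported back out-of-band rather than at submit time.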
]]></description><pubDate>Tue, 13 May 2025 08:51:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=43970906</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=43970906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43970906</guid></item><item><title><![CDATA[New comment by jgraham in "Mozilla Firefox – Official GitHub repo"]]></title><description><![CDATA[
<p>(I work at Mozilla, but not on the VCS tooling, or this transition)<p>To give a bit of additional context here, since the link doesn't have any:<p>The Firefox code has indeed recently moved from having its canonical home on mercurial at hg.mozilla.org to GitHub. This only affects the code; Bugzilla is still being used for issue tracking, Phabricator for code review and landing, and our taskcluster system for CI.<p>In the short term the mercurial servers still exist, and are synced from GitHub. That allows automated systems to transfer to the git backend over time rather than all at once. Mercurial is also still being used for the "try" repository (where you push to run CI on WIP patches), although it's increasingly behind an abstraction layer; that will also migrate later.<p>For people familiar with the old repos, "mozilla-central" is mapped onto the more standard branch name "main", and "autoland" is a branch called "autoland".<p>It's also true that it's been possible to contribute to Firefox exclusively using git for a long time, although you had to install the "git cinnabar" extension. The choice between learning hg and using git plus an extension was a bit of an impediment for many new contributors, who most often knew git and not mercurial. Now that choice is no longer necessary. Glandium, who wrote git cinnabar, wrote extensively about the history of VCS at Mozilla at the time this migration was first announced, and gave a little more context on the reasons for the migration [1].<p>So in the short term the differences from the point of view of contributors are minimal: using stock git is now the default and expected workflow, but apart from that not much else has changed. There may or may not eventually be support for GitHub-based workflows (i.e. 
PRs) but that is explicitly not part of this change.<p>On the backend, once the migration is complete, Mozilla will spend less time hosting its own VCS infrastructure, which turns out to be a significant challenge at the scale, performance and availability needed for such a large project.<p>[1] <a href="https://glandium.org/blog/?p=4346" rel="nofollow">https://glandium.org/blog/?p=4346</a></p>
]]></description><pubDate>Tue, 13 May 2025 07:56:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=43970574</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=43970574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43970574</guid></item><item><title><![CDATA[New comment by jgraham in "First ammonia-fueled ship hits a snag"]]></title><description><![CDATA[
<p>Note that biofuels aren't especially environmentally friendly, even just considering carbon emissions. See e.g. [1], which makes the most optimistic possible assumption by ignoring land-use changes and still concludes "the reductions for most feedstocks are insufficient to meet the GHG savings required by the EU Renewable Energy Directive" (second-generation biofuels may do better, but that isn't clear). Ignoring land-use changes is also a very bad assumption: if your plan is to run global shipping (or other industries) on biofuels, it seems highly implausible that this wouldn't end up using more land overall for growing crops. If that's land that could otherwise be sequestering carbon (e.g. drained peat bogs, which have the advantage of being highly fertile), then it's clearly going to be a significant contribution to carbon emissions (not to mention the ecological impacts of converting yet more land to agriculture).<p>[1] <a href="https://royalsocietypublishing.org/doi/10.1098/rspa.2020.0351" rel="nofollow">https://royalsocietypublishing.org/doi/10.1098/rspa.2020.035...</a></p>
]]></description><pubDate>Wed, 12 Mar 2025 14:17:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=43343505</link><dc:creator>jgraham</dc:creator><comments>https://news.ycombinator.com/item?id=43343505</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43343505</guid></item></channel></rss>