<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: rsfern</title><link>https://news.ycombinator.com/user?id=rsfern</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 19:33:19 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=rsfern" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by rsfern in "US appeals court declares 158-year-old home distilling ban unconstitutional"]]></title><description><![CDATA[
<p>While the treatment for methanol poisoning does include ethanol, I don’t think your dosage suggestion is right. Your body still has to process all the methanol; the ethanol just slows the reaction down by competing for the same enzyme. If you suspect methanol poisoning you need the hospital, where they will administer the ethanol intravenously and, I think, do dialysis to remove the methanol and the formic acid it metabolizes to (this is one of the toxins in ant venom).<p><a href="https://doi.org/10.1053/j.ajkd.2016.02.058" rel="nofollow">https://doi.org/10.1053/j.ajkd.2016.02.058</a></p>
]]></description><pubDate>Sun, 12 Apr 2026 10:19:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47738023</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=47738023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47738023</guid></item><item><title><![CDATA[New comment by rsfern in "Inside the 'self-driving' lab revolution"]]></title><description><![CDATA[
<p>There are groups that are actively working on automating conventional labs like this. Most of the efforts I know about use non-humanoid mobile robots or even just a six-axis arm on a rail and some lab space reconfiguration</p>
]]></description><pubDate>Thu, 02 Apr 2026 11:59:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47613247</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=47613247</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47613247</guid></item><item><title><![CDATA[New comment by rsfern in "ArXiv Declares Independence from Cornell"]]></title><description><![CDATA[
<p>This issue of accessibility is widely acknowledged in the academic literature, but it doesn’t mean that only large companies are doing good research.<p>Personally I think this resource mismatch can help drive creative choice of research problems that don’t require massive resources. To misquote Feynman, there’s plenty of room at the bottom</p>
]]></description><pubDate>Fri, 20 Mar 2026 11:54:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47453321</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=47453321</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47453321</guid></item><item><title><![CDATA[New comment by rsfern in "I'm Not Consulting an LLM"]]></title><description><![CDATA[
<p>I like this analogy of always choosing “I’m feeling lucky” on Google. I feel like it clarifies a boundary between information retrieval and evaluation that gets blurred by language-model summaries. I’ve been frustrated with the LLM summary at the top of the Google search results for scientific topics, because often the sources linked don’t actually contain the information the summary is citing them for. Then I have a side quest of finding the right backing literature, or deciding the summary was just wrong in the first place</p>
]]></description><pubDate>Sun, 08 Mar 2026 12:26:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47296768</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=47296768</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47296768</guid></item><item><title><![CDATA[New comment by rsfern in "U.S. science agency moves to restrict foreign scientists from its labs"]]></title><description><![CDATA[
<p>I don’t know why the author of the article wrote “could”, but I personally work closely with some non-high-risk-country NIST foreign guest researchers. It’s been filtered down verbally through the management chain that the end of this September is the re-review deadline, and it’s not been stated as a hypothetical.</p>
]]></description><pubDate>Tue, 03 Mar 2026 01:32:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47226751</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=47226751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47226751</guid></item><item><title><![CDATA[New comment by rsfern in "When I lost my university email, my identity as scientist took an unexpected hit"]]></title><description><![CDATA[
<p>There is <a href="https://orcid.org" rel="nofollow">https://orcid.org</a> which is a persistent identifier for a researcher. It would be interesting if sending email to a researcher’s ORCID handle resolved to their current institutional email address, I guess?<p>My usual workflow is to find the person on Google Scholar, find their uni/lab homepage, and hope they published their email there.</p>
]]></description><pubDate>Mon, 02 Mar 2026 23:18:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47225680</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=47225680</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47225680</guid></item><item><title><![CDATA[New comment by rsfern in "U.S. science agency moves to restrict foreign scientists from its labs"]]></title><description><![CDATA[
<p>It’s all foreign guest researchers by the end of September, high risk countries by the end of March. Your first quote doesn’t imply the NIST sources for this article lack firsthand knowledge that this is coming; it’s just that lab management appears to be avoiding putting things in writing</p>
]]></description><pubDate>Mon, 02 Mar 2026 21:25:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47224333</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=47224333</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47224333</guid></item><item><title><![CDATA[New comment by rsfern in "U.S. science agency moves to restrict foreign scientists from its labs"]]></title><description><![CDATA[
<p>Not a lot, but what is your point exactly? There are a lot of Chinese scientists working in the US, and the ones who are postdocs and research scientists at NIST are apparently being pushed out at the end of this month. They’ve already been vetted for security concerns, so that justification is kind of thin.<p>How many Taiwanese, German, Indian, French, South Korean, etc scientists are working in the US? The ones working at NIST are facing being pushed out at the end of September.</p>
]]></description><pubDate>Mon, 02 Mar 2026 20:45:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47223794</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=47223794</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47223794</guid></item><item><title><![CDATA[New comment by rsfern in "U.S. science agency moves to restrict foreign scientists from its labs"]]></title><description><![CDATA[
<p>This list of high risk countries is not new (with the possible exception of Venezuela being recently added, I’m not sure). Researchers with these citizenships have faced extra security review before joining NIST for years, and last year the lab increased the level of security review for everyone (not just this list)<p>I can understand a clearly communicated need for additional security requirements. But NIST operates almost totally in open science mode, with the main exception being industry cooperative agreements. I don’t think this move to shed international researchers by reneging on commitments from the lab has been at all justified from a security standpoint.</p>
]]></description><pubDate>Mon, 02 Mar 2026 13:04:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47217478</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=47217478</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47217478</guid></item><item><title><![CDATA[New comment by rsfern in "U.S. carbon pollution rose in 2025, a reversal from prior years"]]></title><description><![CDATA[
<p>Your first reply was insightful, but this one is not a thoughtful take.<p>Power consumption and emissions are already increasing, and any regulatory changes in 2025 are not factored into discussion of those numbers. It’s more interesting to discuss what these changes mean once they are a factor, in 2026 and onward.</p>
]]></description><pubDate>Fri, 16 Jan 2026 13:04:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46645982</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=46645982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46645982</guid></item><item><title><![CDATA[New comment by rsfern in "Typing an AI prompt is not 'active' music creation"]]></title><description><![CDATA[
<p>I think this is not quite the right analogy. A better analogy is procedurally generated music, because that’s what model-generated music is. But just like with LLM code generation, the input to the program is natural language (or maybe multimodal image/audio/whatever), and the program is implicitly defined by learning from examples of music.<p>I think a lot of the issues are the same. Like you might expect the model to go off the rails if you venture away from the bulk of the training distribution. Or maybe the most effective way to use it creatively is in some kind of interactive workflow revising specific chunks of the project instead of vibe-coding/composing from whole cloth.</p>
]]></description><pubDate>Mon, 24 Nov 2025 13:24:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46033843</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=46033843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46033843</guid></item><item><title><![CDATA[New comment by rsfern in "Abundant Intelligence"]]></title><description><![CDATA[
<p>I think it’s just scale-to-the-moon rhetoric, like “what if we used 100x more compute?”. Since the units are power and not energy, I’m going with 10 GW continuous load (for training? inference?), but I think it’s not exactly meant literally</p>
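A quick back-of-envelope in Python, just to make the power-vs-energy distinction concrete (10 GW is my reading of the figure, so treat the numbers as illustrative):

```python
# 10 GW is a power (a rate), not an energy; energy = power * time.
power_w = 10e9           # 10 GW in watts
hours_per_year = 8760    # 365 * 24
energy_twh = power_w * hours_per_year / 1e12  # watt-hours -> terawatt-hours
print(energy_twh)  # 87.6 TWh per year of continuous load
```

That 87.6 TWh/year is roughly the annual electricity consumption of a mid-size country, which is why the power/energy distinction matters here.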
]]></description><pubDate>Wed, 24 Sep 2025 00:08:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45354509</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=45354509</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45354509</guid></item><item><title><![CDATA[New comment by rsfern in "You can't test if quantum uses complex numbers"]]></title><description><![CDATA[
<p>I think this is just loose terminology: instead of squaring, they should have said “multiply by the complex conjugate”, which is what you do to a quantum mechanical wavefunction to obtain real-valued probabilities</p>
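In numpy terms (a toy two-state example of my own, not from the article):

```python
import numpy as np

# "Squaring" a complex amplitude really means multiplying by its conjugate:
# psi * conj(psi) = |psi|^2, which is real and non-negative.
psi = np.array([3 + 4j, 1 - 2j]) / np.sqrt(30)  # a normalized two-state wavefunction
probs = (psi * np.conj(psi)).real

print(probs)        # real, non-negative probabilities for each basis state
print(probs.sum())  # 1.0 for a normalized state
```

A naive `psi**2` would stay complex; the conjugate product is what guarantees a real result.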
]]></description><pubDate>Thu, 18 Sep 2025 16:38:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45291781</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=45291781</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45291781</guid></item><item><title><![CDATA[New comment by rsfern in "There are no new ideas in AI, only new datasets"]]></title><description><![CDATA[
<p>This paper “were RNNs all we needed?” explores this hypothesis a bit, finding that some pre-transformer sequence models can match transformers when trained at appropriate scale. Though they did have to make some modifications to unlock more parallelism<p><a href="https://arxiv.org/abs/2410.01201" rel="nofollow">https://arxiv.org/abs/2410.01201</a></p>
]]></description><pubDate>Tue, 01 Jul 2025 03:50:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44430415</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=44430415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44430415</guid></item><item><title><![CDATA[New comment by rsfern in "Uv and Ray: Pain-Free Python Dependencies in Clusters"]]></title><description><![CDATA[
<p>Are there particular libraries that make your setup difficult? I just manually set the index and source following the docs (didn’t know about the auto backend feature) and pin a specific version if I really have to with `uv add “torch==2.4”`. This works pretty well for me for projects that use dgl, which heavily uses C++ extensions and can be pretty finicky about working with particular versions<p>This is in a conventional HPC environment, and I’ve found it way better than conda since the dependency solves are so much faster and I no longer experience PyTorch silently getting downgraded to the CPU version if I install a new library. Maybe I’ve been using conda poorly though?</p>
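For reference, the pyproject.toml I mean looks roughly like this (the index name and CUDA tag are illustrative; the uv docs cover the exact `[[tool.uv.index]]` / `[tool.uv.sources]` syntax):

```toml
[project]
name = "example"
version = "0.1.0"
dependencies = ["torch==2.4"]

# Pull torch from the PyTorch wheel index only; everything else from PyPI.
[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

[tool.uv.sources]
torch = { index = "pytorch-cu124" }
```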
]]></description><pubDate>Fri, 27 Jun 2025 10:51:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44395700</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=44395700</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44395700</guid></item><item><title><![CDATA[New comment by rsfern in "Reproducing the deep double descent paper"]]></title><description><![CDATA[
<p>I don’t think so? The double descent phenomenon also occurs in linear models under the right conditions. My understanding of this is that when the effective model capacity is exactly equal to the information in the dataset, there is only one solution that interpolates the training data perfectly, but when the capacity increases far beyond this there are many such interpolating solutions. Apply enough regularization and you are likely to find an interpolating solution that generalizes well</p>
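A tiny numpy sketch of the “many interpolating solutions” point (my own toy example, not from the post; it shows the overparameterized regime, not the full double descent curve):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 100                      # fewer samples than features: overparameterized
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Minimum-norm least-squares fit (what pinv gives when p > n): it interpolates.
w_min = np.linalg.pinv(X) @ y
print(np.abs(X @ w_min - y).max())  # ~0: zero training error

# Adding any null-space direction of X gives a *different* interpolating solution.
_, _, Vt = np.linalg.svd(X)
w_other = w_min + 5.0 * Vt[-1]      # Vt[-1] satisfies X @ Vt[-1] ~ 0
print(np.abs(X @ w_other - y).max())  # still ~0, but a larger-norm solution
```

Implicit or explicit regularization is what picks among these: the pinv solution above is exactly the minimum-norm one, which is part of why overparameterized fits can still generalize.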
]]></description><pubDate>Fri, 06 Jun 2025 01:14:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44197022</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=44197022</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44197022</guid></item><item><title><![CDATA[New comment by rsfern in "US science is being wrecked"]]></title><description><![CDATA[
<p>This is already what the funding agencies do! The merit review process solicits outside expert assessment of the importance, feasibility, and potential impact (including economic development and societal impact) of the research, and the funding agencies do their best to maintain a balanced portfolio of research that is promising for advancing national priorities<p>By all means we should discuss the transparency of this process, what those national priorities are, and exactly what we (collectively as taxpayers) think the risk-reward tradeoff should be. But let’s not pretend that the funding agencies don’t already view science as a public investment, or be too hasty about dismissing the potential medium term economic value of research into, for example, geology and geochemistry on Mars</p>
]]></description><pubDate>Thu, 05 Jun 2025 16:33:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44193229</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=44193229</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44193229</guid></item><item><title><![CDATA[New comment by rsfern in "Deep learning gets the glory, deep fact checking gets ignored"]]></title><description><![CDATA[
<p>This is all awesome, but a bit off topic for the thread, which focuses on AI for science<p>The disconnect here is that the cost of iteration is low and it’s relatively easy to verify the quality of a generated C program (does the compiler issue warnings or errors? Does it pass a test suite?) or a recipe (basic experience is probably enough to tell if an ingredient seems out of place or proportions are wildly off)<p>In science, verifying a prediction is often super difficult and/or expensive, because at prediction time we’re trying to shortcut around an expensive or intractable measurement or simulation. Unreliable models can really change the tradeoff point of whether AI accelerates science or just massively inflates the burn rate</p>
]]></description><pubDate>Wed, 04 Jun 2025 12:07:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44179825</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=44179825</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44179825</guid></item><item><title><![CDATA[New comment by rsfern in "Yoshua Bengio Launches LawZero: A New Nonprofit Advancing Safe-by-Design AI"]]></title><description><![CDATA[
<p>“Kill the [model] for trying” kind of sounds like using reinforcement learning to get models to behave a certain way</p>
]]></description><pubDate>Wed, 04 Jun 2025 02:20:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44176659</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=44176659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44176659</guid></item><item><title><![CDATA[New comment by rsfern in "Vertically rolling ball 'challenges our basic understanding of physics'"]]></title><description><![CDATA[
<p>The article in Soft Matter is open access: <a href="https://doi.org/10.1039/D4SM01490A" rel="nofollow">https://doi.org/10.1039/D4SM01490A</a><p>They have some interesting analysis of the elastic deformation that happens during the rolling process (as opposed to the ball just falling or sliding). Turns out it’s pretty sensitive to the elastic constants of both the ball and the wall</p>
]]></description><pubDate>Tue, 03 Jun 2025 03:49:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=44166088</link><dc:creator>rsfern</dc:creator><comments>https://news.ycombinator.com/item?id=44166088</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44166088</guid></item></channel></rss>