<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: doubleunplussed</title><link>https://news.ycombinator.com/user?id=doubleunplussed</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 00:39:17 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=doubleunplussed" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by doubleunplussed in "SparkFun Officially Dropping AdaFruit due to CoC Violation"]]></title><description><![CDATA[
<p>> "Someone did a CoC violation" is just a way for an org to say "someone was an asshole [...]"<p>Not even that, since so many CoCs are vague enough that someone unprincipled wielding them could be using them for petty interpersonal disputes. Unless I already have reason to trust the accuser, when I see "CoC violation" it tells me there's drama but it doesn't tell me who the asshole is.</p>
]]></description><pubDate>Wed, 14 Jan 2026 23:41:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46625684</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=46625684</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46625684</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Ozempic is changing the foods Americans buy"]]></title><description><![CDATA[
<p>That is true but requires some extra assumptions to explain why people don't keep losing weight - because the strongest influence on most people's appetite in the short run is how much of a deficit or surplus they're currently in. Thus as TDEE drops, so does hunger.<p>In "setpoint theory" there's an additional hunger drive based on whether you are below or above a given level of adiposity - your "setpoint". This is often given as an explanation for why people can't keep weight off, and is the sort of thing you'd need to posit to explain why people on GLP-1 agonists can't as easily get to lower levels of adiposity.</p>
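<p>As a rough illustration of the "TDEE drops as you lose weight" dynamic (a toy sketch with made-up round numbers - a fixed intake, TDEE proportional to body weight, and roughly 7700 kcal per kg of tissue; none of these figures are precise):<p><pre><code># Toy sketch with assumed round numbers: TDEE scales with body weight,
# intake is held fixed, and ~7700 kcal of cumulative deficit ~ 1 kg of tissue.
KCAL_PER_KG = 7700   # rough energy content of 1 kg of body tissue
TDEE_PER_KG = 32     # assumed kcal/day burned per kg of body weight
INTAKE = 2200        # fixed daily intake, kcal

weight = 90.0        # starting weight, kg
for day in range(3 * 365):
    tdee = TDEE_PER_KG * weight
    deficit = tdee - INTAKE           # shrinks as weight (and TDEE) falls
    weight -= deficit / KCAL_PER_KG
    if day % 180 == 0:
        print(f"day {day:4d}: weight {weight:5.1f} kg, TDEE {tdee:4.0f} kcal")

# Weight settles near INTAKE / TDEE_PER_KG (about 69 kg here): the deficit,
# and with it the drive to keep losing, shrinks away as TDEE drops.
</code></pre>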
]]></description><pubDate>Tue, 13 Jan 2026 03:27:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46597034</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=46597034</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46597034</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Avoid 2:00 and 3:00 am cron jobs (2013)"]]></title><description><![CDATA[
<p>One thing I hear people say in places where DST was abolished is that the late sunrises in winter are similarly depressing, and that this is something not really appreciated by those who want to abolish DST by keeping summer time year round.</p>
]]></description><pubDate>Tue, 28 Oct 2025 00:11:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45727910</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=45727910</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45727910</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>I don't think you really understood my comment</p>
]]></description><pubDate>Sat, 16 Aug 2025 04:07:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44920059</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44920059</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44920059</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>I didn't say it did, I said they're aware of it.<p>Also thought it was worth pointing out that the LW "weird label" predates the label in the comment I replied to.</p>
]]></description><pubDate>Sat, 16 Aug 2025 04:06:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44920049</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44920049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44920049</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>I just mean that the existence of the human brain is proof that human-level intelligence is possible.<p>Yes, it took billions of years all said and done, but it shows that there are no fundamental limits that prevent this level of intelligence. It even proves it can in principle be done with a few tens of watts and a certain approximate amount of computational power.<p>Some used to think the first AIs would be brain uploads, for this reason. They thought we'd have the computing power and scanning techniques to scan and simulate all the neurons of a human brain before inventing any other architecture capable of coming close to the same level of intelligence. That now looks to be less likely.<p>Current state-of-the-art AI still operates with less computational power than the human brain, and it is far less efficient at learning than humans are (there is a sense in which a human intelligence takes merely years to develop - i.e. childhood - rather than billions of years; this is also a relevant comparison to make). Humans can learn from far fewer examples than current AI can.<p>So we've got some catching up to do - but humans prove it's possible.</p>
]]></description><pubDate>Fri, 15 Aug 2025 07:13:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44909473</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44909473</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44909473</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>> "recursive self improvement" does not imply "self improvement without bounds"<p>Obviously not, but thinking that the bounds are going to lie in between where AI intelligence is now and human intelligence I think is unwarranted - as mentioned, humans are unlikely to be the peak of what's possible since evolution did not optimise us for intelligence alone.<p>If you think the recursive self-improvement people are arguing for improvement without bounds, I think you're simply mistaken, and it seems like you have not made a good faith effort to understand their view.<p>AI only needs to be somewhat smarter than humans to be very powerful, the only arguments worth having IMHO are over whether recursive self-improvement will lead to AI being a head above humans or not. Diminishing returns will happen at some point (in the extreme due to fundamental physics, if nothing sooner), but whether it happens in time to prevent AI from becoming meaningfully more powerful than humans is the relevant question.<p>> we do not have a definition of intelligence<p>This strikes me as an unserious argument to make. Some animals are clearly more intelligent than others, whether you use a shaky definition or not. Pick whatever metric of performance on intellectual tasks you like, there is such a thing as human-level performance, and humans and AIs can be compared. You can't even make your subsequent arguments about AI performance being made worse by various factors unless you acknowledge such performance is measuring something meaningful. You can't even argue against recursive self-improvement if you reject that there is anything measurable that can be improved. I think you should retract this point as it prevents you making your own arguments.<p>> There is not even any evidence that LLMs will continue increasing as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure.<p>I'm pretty confused by this claim - whatever our difficulties defining intelligence, "resembling humans" is not it. Do you not believe there are tasks on which performance can be objectively graded beyond similarity to humans? I think it's quite easy to define tasks that we can judge the success of without being able to do it ourselves. If AI solves all the Millennium Prize Problems, that would be amazing! I don't need to have resolved all issues with a definition of intelligence to be impressed.<p>Anyway, is there really <i>no</i> evidence? AI having improved so far is not <i>any</i> evidence that it might continue, <i>even a little bit</i>? Are we really helpless to predict whether there will be any better chatbots released in the remainder of this year than we already have?<p>I do not think we are that helpless - if you entirely reject past trends as an indicator of future trends, and treat them as literally zero evidence at all, then this is simply faulty reasoning. Past trends are not a <i>guarantee</i> of future trends, but neither are they zero evidence. They are a nonzero medium amount of evidence, the strength of which depends on how long the trends have been going on and how well we understand the fundamentals driving them.<p>> thinking clearly is about the reasoning, not the conclusion.<p>And I think we have good arguments! You seem to have strong priors that the default is that machines can't reach human intelligence/performance or beyond, and you really need convincing otherwise. 
I think the fact that we have an existence proof in humans of human intelligence and an algorithm to get there proves it's possible. And I consider it quite unlikely that humans are the peak of intelligence/performance-on-whatever-metrics that is possible, given it's not what we were optimised for specifically.<p>All your arguments about why progress might slow or stop short of superhuman levels are legitimate and can't be ruled out, and yet these things have not been limiting factors so far, even though the same arguments would have been equally valid at any time in the past few years.<p>> no legitimate argument has been presented that implies the conclusion<p>I mean it's probabilistic, right? I'm expecting something like an 85% chance of AGI before 2040. I don't think it's guaranteed, but when you look at progress so far, and that nature gives us proof (in the form of the human brain) that it's not impossible in any <i>fundamental</i> way, I think that's reasonable. Reasonability arguments and extrapolations are all we have; we can't imply anything definitively.<p>You think what probability?<p>Interested in a bet?</p>
]]></description><pubDate>Fri, 15 Aug 2025 07:04:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44909403</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44909403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44909403</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>I think most in the rationality community (and otherwise in the know) would not say that IQ differences are almost entirely biological - I think they'd say they're about half genetic and half environmental, but that the environmental component is hard to pin to "parenting" or anything else specific. "Non-shared environment" is the usual term.<p>They'd agree it's largely stable over life, after whatever childhood environmental experiences shape that "non-shared environment" bit.<p>This is the current state of knowledge in the field as far as I know, and I think you'll find the literature supports it: IQ is about half genetic, and fairly immutable after adulthood.</p>
]]></description><pubDate>Fri, 15 Aug 2025 06:13:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44909141</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44909141</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44909141</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>Certainly something they're aware of - the same concept was discussed as early as 2007 on Less Wrong under the name "evaporative cooling of group beliefs":<p><a href="https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporative-cooling-of-group-beliefs" rel="nofollow">https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporativ...</a></p>
]]></description><pubDate>Wed, 13 Aug 2025 05:15:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44884842</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44884842</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44884842</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>I think you'll indeed find, if you were to seek out the relevant literature, that those claims are more or less true, or at least are the best-supported interpretation currently available. So I don't think they're assumptions so much as simply the current state of the science on the matter, and therefore widely accepted among those who for whatever reason have looked into it (or, more likely, inherited the information from someone they trust who has read up on it).<p>Interestingly, I think we're increasingly learning that although most aspects of human intelligence seem to correlate with each other (thus the "singular factor" interpretation), the grab-bag of skills this corresponds to is maybe a bit arbitrary when compared to AI. What evolution decided to optimise the hell out of in human intelligence is specific to us, and not at all the same set of skills as you get out of cranking up the number of parameters in an LLM.<p>Thus LLMs continuing to make atrocious mistakes of certain kinds, despite outshining humans at other tasks.<p>Nonetheless I do think it's correct to say that the rationalists think intelligence is a real, measurable thing; that although in humans it might be one set of correlated skills and in AIs a different set (such that outperforming humans on IQ tests is impressive but not definitive), AI progress can still be measured and it is meaningful to say "AI is smarter than humans" at some point; and that AI with better-than-human intelligence could solve a lot of problems, if of course it doesn't kill us all.</p>
]]></description><pubDate>Wed, 13 Aug 2025 05:07:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44884807</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44884807</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44884807</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>How do you reconcile e.g. AlphaGo with the idea that data is a bottleneck?<p>At some point learning can occur with "self-play", and I believe this is already happening with LLMs to some extent. Then you're not limited by imitating human-made data.<p>When learning something like software development or mathematical proofs, it is easier to verify whether a solution is correct than to come up with the solution in the first place - many domains are like this. Anything like that is amenable to learning on synthetic data or self-play like AlphaGo did.<p>I can understand that people who think of LLMs as human-imitation machines, limited to training on human-made data, would think they'd be capped at human-level intelligence. However I don't think that's the case, and we have at least one example of superhuman AI in one domain (Go) showing this.<p>Regarding cost, I'd have to look into it, but I'm under the impression costs have been up and down over time as models have grown but there have also been efficiency improvements.<p>I think I'd hazard a guess that end-user costs have not grown exponentially like time horizon capabilities, even though investment in training probably has. Though that's tricky to reason about, because training costs are amortised and it's not obvious whether end-user costs are at a loss or what the profit margin for any given model is.<p>On the fast-slow takeoff - Yud does seem to believe in a fast takeoff, yes, but it's also one of the oldest disagreements in rationality circles, on which he disagreed with his main co-blogger on the original rationalist blog, Overcoming Bias; some discussion of this and more recent disagreements is here [1].<p>[1] <a href="https://www.astralcodexten.com/p/yudkowsky-contra-christiano-on-ai" rel="nofollow">https://www.astralcodexten.com/p/yudkowsky-contra-christiano...</a></p>
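<p>To sketch what I mean by verification being cheaper than generation (a toy illustration, not any lab's actual pipeline - everything here is hypothetical): sample candidate solutions, keep only the ones a cheap checker accepts, and the verified solutions become synthetic training data, with no human-made examples required.<p><pre><code>import random

# Toy domain where checking is trivial but guessing is not:
# find an integer root of a quadratic given by its coefficients (a, b, c).
def propose(problem):
    # Stand-in for sampling from a model; here, just a random guess.
    return random.randint(-100, 100)

def verify(problem, x):
    a, b, c = problem
    return a * x**2 + b * x + c == 0   # cheap, exact check

def self_play_round(problems, samples_per_problem=500):
    synthetic_data = []
    for p in problems:
        for _ in range(samples_per_problem):
            guess = propose(p)
            if verify(p, guess):                  # keep only verified solutions
                synthetic_data.append((p, guess))
                break
    # A real pipeline would now fine-tune the model on synthetic_data
    # and repeat; here we just return the verified examples.
    return synthetic_data

print(self_play_round([(1, -5, 6), (1, 0, -49)]))
</code></pre>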
]]></description><pubDate>Wed, 13 Aug 2025 04:34:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44884667</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44884667</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44884667</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>Blocking people on Twitter doesn't necessarily imply intolerance of people who disagree with you. People often block for different reasons than disagreement.</p>
]]></description><pubDate>Wed, 13 Aug 2025 03:05:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44884277</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44884277</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44884277</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>Many specific studies on the matter don't replicate - I think the book preceded the replication crisis, so this is to be expected - but I don't think that negates the core idea that our brain does some things on autopilot whereas other things take conscious thought, which is slower. This is a useful framework for thinking about cognition, though any specific claims obviously need evidence.<p>TBH I've learned that even the best pop-sci books making (IMHO) correct points tend to have poor citations - to studies that don't replicate or don't quite say what they're being cited to say - so when I see this, it's just not very much evidence one way or the other. The bar is super low.</p>
]]></description><pubDate>Wed, 13 Aug 2025 03:03:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44884263</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44884263</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44884263</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>> The most powerful AI we have now is strictly hardware-dependent<p>Of course that's the case and it always will be - the cutting edge is the cutting edge.<p>But the best AI you can run on your own computer is way better than the state of the art just a few years ago - progress is being made at all levels of hardware requirements, and hardware is progressing as well. We now have dedicated hardware in some of our own devices for doing AI inference - the hardware-specificity of AI doesn't mean we won't continue to improve and commoditise said hardware.<p>> The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability [...]<p>I don't think this is at all unexamined. But I think it's risky to not consider the strong possibility when we have an existence proof in ourselves of that level of intelligence, and an algorithm to get there, and no particular reason to believe we're optimal since that algorithm - evolution - did not optimise us for intelligence alone.</p>
]]></description><pubDate>Wed, 13 Aug 2025 02:54:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44884219</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44884219</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44884219</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>I'm surprised not to see much pushback on your point here, so I'll provide my own.<p>We have an existence proof for intelligence that can improve AI: humans can do this right now.<p>Do you think AI can't reach human-level intelligence? We have an existence proof of human-level intelligence: humans. If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?<p>Or do you think human-level intelligence is some kind of natural maximum? Why? That would be strange, no? Even if you think it's some natural maximum for LLMs specifically, why? And why do you think we wouldn't modify architectures as needed to continue to make progress? That's already happening - our LLMs are a long way from the pure text prediction engines of four or five years ago.<p>There is already a degree of recursive improvement going on right now, but with humans still in the loop. AI researchers currently use AI in their jobs, and despite the recent study suggesting AI coding tools don't improve productivity in the circumstances they tested, I suspect AI researchers' productivity is indeed increased through use of these tools.<p>So we're already on the exponential recursive-improvement curve; it's just that it's not exclusively "self" improvement until humans are no longer a necessary part of the loop.<p>On your specific points:<p>> 1. What if increasing intelligence has diminishing returns, making recursive improvement slow?<p>Sure. But this is a point of active debate between "fast take-off" and "slow take-off" scenarios; it's certainly not settled among rationalists which is more plausible, and it's a straw man to suggest they all believe in a fast take-off scenario. But both fast and slow take-off due to recursive self-improvement are still recursive self-improvement, so if you only want to criticise the fast take-off view, you should speak more precisely.<p>I find both slow and fast take-off plausible, as the world has seen both periods of fast economic growth through technology, and slower economic growth. It really depends on the details, which brings us to:<p>> 2. LLMs already seem to have hit a wall of diminishing returns<p>This is IMHO false in any meaningful sense. Yes, we have to use more computing power to get improvements without doing any other work. But have you seen METR's metric [1] on AI progress in terms of the (human) duration of tasks they can complete? This is an exponential curve that has not yet bent, and if anything has accelerated slightly.<p>Do not confuse GPT-5 (or any other incrementally improved model) failing to live up to unreasonable hype with an actual slowing of progress. AI capabilities are continuing to increase - being on an exponential curve often feels unimpressive at any given moment, because the relative rate of progress isn't increasing. This is a fact about our psychology; if we look at actual metrics (ones without a natural cap - evals that max out at 100% are not good for measuring progress in the long run), we see steady exponential progress.<p>> 3. What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?<p>This seems valid. But it seems to me that unless we see METR's curve bend soon, we should not count on this.
LLMs have specific flaws, but I think if we are honest with ourselves and don't over-weight the specific silly mistakes they still make, they are on a path toward human-level intelligence in the coming years. I realise that claim will sound ridiculous to some, but I think this is in large part due to people instinctively internalising that everything LLMs <i>can</i> do is not that impressive (it's incredible how quickly expectations adapt), and therefore over-indexing on their remaining weaknesses, despite those weaknesses improving over time as well. If you showed GPT-5 to someone from 2015, they would be telling you this thing is near human intelligence or even more intelligent than the average human. I think we all agree that's not true, but I think that superficially people would think it was if their expectations weren't constantly adapting to the state of the art.<p>> 4. Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?<p>It might - but do we think it would? I have no idea. Would you wirehead yourself if you could? I think many humans do something like this (drug use, short-form video addiction), and expect AI to have similar issues (and this is one reason it's dangerous), but most of us don't feel this is an adequate replacement for "actually" satisfying our goals, and don't feel inclined to modify our own goals to make it so, if we were able.<p>> Knowing Yudowsky I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory<p>Uncalled for, I think. There are valid arguments against you, and you're pre-emptively dismissing responses to you by vaguely criticising their length. This comment is longer than yours, and I reject any implication that that weakens anything about it.<p>Your criticisms are three "what ifs" and a (IMHO) falsehood - I don't think you're doing much better than "millions of words of theory without evidence". To the extent that it's true Yudkowsky and co theorised without evidence, I think they deserve cred, as this theorising predated the current AI ramp-up at a time when most would have thought AI anything like what we have now was a distant pipe dream. To the extent that this theorising continues in the present, it's not without evidence - I point you again to METR's unbending exponential curve.<p>Anyway, so I contend your points comprise three "what ifs" and (IMHO) a falsehood. Unless you think "AI can't recursively self-improve" already has strong priors in its favour such that strong arguments are needed to shift that view (and I don't think that's the case at all), this is weak. You will need to argue why we should need to have strong evidence to overturn a default "AI can't recursively self-improve" view, when it seems that a) we are already seeing recursive improvement (just not purely "self"-improvement), and b) that it's very normal for technological advancement to have recursive gains - see e.g. Moore's law or technological contributions to GDP growth generally.<p>Far from a damning example of rationalists thinking sloppily, this particular point seems like one that shows sloppy thinking on the part of the critics.<p>It's at least debatable, which is all it has to be for calling it "the biggest nonsense axiom" to be a poor point.<p>[1] <a href="https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/" rel="nofollow">https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...</a></p>
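<p>For a sense of what that curve implies <i>if</i> it keeps holding (illustrative placeholder numbers only - check the linked METR post for their actual fitted doubling time and current task horizon):<p><pre><code># Illustrative extrapolation of an exponential task-horizon trend.
# Both numbers below are assumed placeholders, not METR's fitted values.
doubling_time_months = 7     # assumed doubling time of the task horizon
horizon_now_hours = 1.0      # assumed current horizon (tasks of ~1 hour)

def horizon_after(months):
    return horizon_now_hours * 2 ** (months / doubling_time_months)

for years in (1, 2, 4):
    print(f"after {years} yr: tasks of ~{horizon_after(12 * years):.0f} hours, if the trend holds")
</code></pre><p>The specific numbers don't matter; the point is that an unbent exponential compounds fast, which is why whether the curve bends soon is the crux.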
]]></description><pubDate>Wed, 13 Aug 2025 02:44:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44884170</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44884170</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44884170</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>I mean you have to admit that that's a bit of a kafkatrap</p>
]]></description><pubDate>Tue, 12 Aug 2025 22:24:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44882486</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44882486</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44882486</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Why are there so many rationalist cults?"]]></title><description><![CDATA[
<p>On the other hand, I'm baffled to encounter recursive self-improvement being discussed as something not only weird to expect, but as damning evidence of sloppy thinking by those who speculate about it.<p>We have an existence proof for intelligence that can improve AI: humans.<p>If AI ever gets to human-level intelligence, it would be quite strange if it <i>couldn't</i> improve itself.<p>Are people really that sceptical that AI will get to human-level intelligence?<p>Is that an insane belief worthy of being a primary example of a community not thinking clearly?<p>Come on! There is a good chance AI will recursively self-improve! Those poo-pooing this idea are the ones not thinking clearly.</p>
]]></description><pubDate>Tue, 12 Aug 2025 22:10:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44882367</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44882367</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44882367</guid></item><item><title><![CDATA[New comment by doubleunplussed in "New colors without shooting lasers into your eyes"]]></title><description><![CDATA[
<p>Well, black-body radiation is still peaked around a certain range of wavelengths depending on the temperature; it's not just equal power at all wavelengths.<p>Light visible to humans is at the peakiest bit of the sun's black-body spectrum - see the image here: <a href="https://i.sstatic.net/kRUju.png" rel="nofollow">https://i.sstatic.net/kRUju.png</a><p>Green isn't just the wavelength the atmosphere lets through best, or the wavelength humans are most sensitive to; it's also the peak of the sun's black-body spectrum.</p>
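<p>For reference, the peak location follows from Wien's displacement law - taking the Sun's surface at roughly 5800 K, the peak of the per-wavelength black-body spectrum lands right around green (a back-of-envelope check using the usual textbook constant):<p><pre><code># Wien's displacement law: peak wavelength of a black body is b / T
# (peak per unit wavelength; the per-frequency peak sits elsewhere).
b = 2.898e-3      # Wien's displacement constant, metre-kelvin
T_sun = 5778      # approximate solar surface temperature, K

peak_nm = b / T_sun * 1e9
print(f"peak of the solar black-body spectrum: ~{peak_nm:.0f} nm")  # ~500 nm, green-ish
</code></pre>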
]]></description><pubDate>Mon, 21 Jul 2025 10:54:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44633732</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44633732</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44633732</guid></item><item><title><![CDATA[New comment by doubleunplussed in "New colors without shooting lasers into your eyes"]]></title><description><![CDATA[
<p>I thought it was mostly that those are the wavelengths put out by the sun.<p>But I guess it could be both.</p>
]]></description><pubDate>Mon, 21 Jul 2025 02:31:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44631268</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=44631268</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44631268</guid></item><item><title><![CDATA[New comment by doubleunplussed in "Ferromagnetic half levitation of LK-99-like synthetic samples"]]></title><description><![CDATA[
<p>Andrew is mistaken: the paper he cites doesn't say that levitation is possible in a dipole field. Braunbeck showed that diamagnetic levitation was possible at all (e.g. in a quadrupole field), but not in a dipole field.<p>Stable levitation in a dipole field is still thought to be something only type II superconductors can do, and Andrew should not uncritically repeat what he read on /sci/ - which is one of the only other google results for "Brauenbecker extension" currently (after his tweet).</p>
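<p>For anyone wanting the underlying physics (a textbook-level sketch, not something from Andrew's cited paper): a small diamagnetic sample is pulled toward minima of the field magnitude, and Earnshaw-type arguments forbid a local maximum of |B| in source-free space but allow a local minimum - that loophole is what Braunbeck's result exploits, and whether a given field geometry actually provides such a minimum where gravity is also balanced is exactly what's at issue here.<p><pre><code>% Potential energy of a small sample (volume V, susceptibility \chi < 0, density \rho)
% in a static field \mathbf{B}, with gravity along z:
U = -\frac{\chi V}{2\mu_0}\,\lvert\mathbf{B}\rvert^{2} \;+\; \rho V g z
% With \chi < 0 the magnetic term is lowest where \lvert\mathbf{B}\rvert is smallest,
% so stable levitation needs an equilibrium that is a local minimum of U - i.e. a point
% where the net force vanishes and \lvert\mathbf{B}\rvert has a local minimum.
</code></pre>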
]]></description><pubDate>Tue, 08 Aug 2023 07:05:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=37045496</link><dc:creator>doubleunplussed</dc:creator><comments>https://news.ycombinator.com/item?id=37045496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37045496</guid></item></channel></rss>