<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: PollardsRho</title><link>https://news.ycombinator.com/user?id=PollardsRho</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 10:33:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=PollardsRho" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by PollardsRho in "Uncovering insiders and alpha on Polymarket with AI"]]></title><description><![CDATA[
<p>What about bets without insider participation, where you want the market to function as an aggregator of educated guesses? OP has one reaction to insider trading, but I imagine a very common alternative would be "those insiders make their money off of bettors like me, I shouldn't participate." Some questions are clearly insider-proof, but I imagine many questions have insiders who don't bet on Polymarket. If Polymarket is going to be a good prediction market, surely it should incentivize people to make predictions on those questions too?</p>
]]></description><pubDate>Fri, 20 Feb 2026 22:47:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47095085</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=47095085</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47095085</guid></item><item><title><![CDATA[New comment by PollardsRho in "Spell Checking a Year's Worth of Hacker News"]]></title><description><![CDATA[
<p>A lot of expressions in English started out as calques, outputs of that process: you're paving the way!</p>
]]></description><pubDate>Fri, 20 Feb 2026 18:20:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47091684</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=47091684</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47091684</guid></item><item><title><![CDATA[New comment by PollardsRho in "Dario Amodei – "We are near the end of the exponential" [video]"]]></title><description><![CDATA[
<p>All models are wrong; some are useful. Cognizance of that is even more critical for a model like exponential growth, which quickly leads to extremely poor predictions if extrapolated uncritically.<p>I think "are the failures of a simple linear regression on the METR graph relevant" is a much better framing than "does seeing a line if you squint extrapolate forever." As I said, I'd much rather frame the discussion around the actual material conditions of AI progress, but if you are going to be drawing lines I'd at least want to start by acknowledging that no such model will be perfect.</p>
]]></description><pubDate>Fri, 13 Feb 2026 19:03:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47006409</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=47006409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47006409</guid></item><item><title><![CDATA[New comment by PollardsRho in "Dario Amodei – "We are near the end of the exponential" [video]"]]></title><description><![CDATA[
<p>> Given consistent trends of exponential performance improvements over many years and across many industries, it would be extremely surprising if these improvements suddenly stopped.<p>This is the part I find very strange. Let's table the problems with METR [1], just noting that benchmarking AI is extremely hard and METR's methodology is not gospel just because METR's "sole purpose is to study AI capabilities". (That is not a good way to evaluate research!)<p>Taking whatever idealized metric you want, at some point it has to level off. That's almost trivially true: everyone should agree that unrestricted exponential growth forever is impossible, if only for the eventual heat death of the universe. That makes the question when, and not if. When do external forces dominate whatever positive feedback loops were causing the original growth? In AI, those positive feedback loops include increased funding, increased research attention and human capital, increased focus on AI-friendly hardware, and many others, including perhaps some small element of AI itself assisting the research process that could become more relevant in the future.<p>These positive feedback loops have happened many times, and they often do experience quite sharp level-offs as some external factor kicks in. Commercial aircraft speeds experienced a very sharp increase until they leveled off. Many companies grow very rapidly at first and then level off. Pandemics grow exponentially at first before revealing their logistic behavior. Scientific progress often follows a similar trajectory: a promising field emerges, significantly increased attention brings a bevy of discoveries, and as the low-hanging fruit is picked the cost of additional breakthroughs surges and whatever fundamental limitations the approach has reveal themselves.<p>It's not "extremely surprising" that COVID did not infect a trillion people, even though there are some <i>extremely</i> sharp exponentials you can find looking at the first spread in new areas. It isn't extremely surprising that I don't book flights at Mach 3, or that Moore's Law was not an ironclad law of the universe.<p>Does that mean the entire field will stop making any sort of progress? Of course not. But any analysis that fundamentally boils down to taking a (deeply flawed) graph, drawing a line through it, and simplifying the whole field of AI research to "line go up" is not going to give you well-founded predictions for the future.<p>A much more fruitful line of analysis, in my view, is to focus on the actual conditions and build a reasonable model of AI progress that includes current data while building in estimations of sigmoidal behavior. Does training scaling continue forever? Probably not, given the problems with, e.g., GPT-4.5 and the limited amount of quality non-synthetic training data. It's reasonable to expect synthetic training data to work better over time, and it's also reasonable to expect the next generation of hardware to enable an additional couple orders of magnitude. Beyond that, especially if the money runs out, it seems like scaling will hit a pretty hard wall barring exceptional progress. Is inference hardware going to improve enough that drastically increased token outputs and parallelism won't matter? Probably not, but you can definitely forecast continued hardware improvements to some degree. What might a new architectural paradigm be for AI, and would that have significant improvements over current methodology?
To what degree is existing AI deployment increasing the amount of useful data for AI training? What parts of the AI improvement cycle rely on real-world tasks that might fundamentally limit progress?<p>That's what the discussion should be, not reposting METR for the millionth time and saying "line go up" the way people do about Bitcoin.<p>[1] <a href="https://www.transformernews.ai/p/against-the-metr-graph-coding-capabilities-software-jobs-task-ai" rel="nofollow">https://www.transformernews.ai/p/against-the-metr-graph-codi...</a></p>
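<p>To make the exponential-versus-logistic point concrete, here's a minimal sketch with made-up parameters: early on, a logistic curve is nearly indistinguishable from the exponential it starts out as, so a good exponential fit to early data says little about where the ceiling sits.</p><pre><code>import math

K, r = 1000.0, 0.5  # hypothetical carrying capacity and growth rate

def exponential(t):
    return math.exp(r * t)

def logistic(t):
    # Starts at 1 like the exponential, but saturates at K.
    return K / (1 + (K - 1) * math.exp(-r * t))

for t in range(0, 25, 4):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# The curves track closely until growth nears K, then diverge
# sharply -- early data alone cannot tell them apart.
</code></pre>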
]]></description><pubDate>Fri, 13 Feb 2026 18:40:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47006102</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=47006102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47006102</guid></item><item><title><![CDATA[New comment by PollardsRho in "The role of the University is to resist AI"]]></title><description><![CDATA[
<p>Students shouldn't be treating class material as something they "do not care to know."<p>AI can be used in ways that lead to deeper understanding. If a student wants AI to give them practice problems, or essay feedback, or a different explanation of something that they struggle with, all of those methods of learning should translate to actual knowledge that can be the foundation of future learning or work and can be evaluated without access to AI.<p>That actual knowledge is really important. Literacy and numeracy are not the same thing as mental arithmetic. Someone who can't read literature in their field (whether that's a Nature paper or a business proposal or a marketing tweet) shouldn't rely on AI to think for them, and certainly universities shouldn't be encouraging that and endorsing it through a degree.<p>I think the most important thing about that kind of deeper knowledge is that it's "frictional", as the original essay says. The highest-rated professors aren't necessarily the ones I've learned the most from, because deep learning is hard and exhausting. Students, by definition, don't know what's important and what isn't. If someone has done that intellectual labor and then finds AI works well enough, great. But that's a far cry from being reliant on AI output and incapable of understanding its limitations.</p>
]]></description><pubDate>Mon, 30 Jun 2025 18:23:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44426328</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=44426328</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44426328</guid></item><item><title><![CDATA[New comment by PollardsRho in "P-Hacking in Startups"]]></title><description><![CDATA[
<p>If you have many metrics that could possibly be construed as "this was what we were trying to improve", that's many different possibilities for random variation to give you a false positive. If you're explicit at the start of an experiment that only a single metric will count as success, it turns any other results you get into "hmm, this is an interesting pattern that merits further exploration" and not "this is a significant result that confirms whatever I thought at the beginning."<p>It's basically a variation on the multiple comparisons problem, but sneakier: it's easy to spend an hour going through data and, over that time, test dozens of different hypotheses. At that point, whatever p-value you'd compute for a single comparison isn't relevant, because after that many comparisons you'd expect at least one to fall below p = 0.05 by random chance alone.</p>
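<p>To put rough numbers on it (the metric count is made up): with 20 independent metrics and a true null, the chance that at least one lands under p = 0.05 is 1 - 0.95^20, about 64%. A minimal simulation sketch:</p><pre><code>import random

random.seed(0)
TRIALS, METRICS = 2000, 20

false_positive_runs = 0
for _ in range(TRIALS):
    # Under the null, each metric's p-value is uniform on [0, 1].
    pvalues = [random.random() for _ in range(METRICS)]
    if min(pvalues) < 0.05:
        false_positive_runs += 1

print(false_positive_runs / TRIALS)  # ~0.64, though no effect is real
</code></pre>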
]]></description><pubDate>Sat, 21 Jun 2025 23:58:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44341631</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=44341631</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44341631</guid></item><item><title><![CDATA[New comment by PollardsRho in "The Claude Bliss Attractor"]]></title><description><![CDATA[
<p>Advantage, sure. I just don't think that advantage is particularly meaningful in situations a human has virtually no chance of escaping. Humans also have a lot of their own advantages. How is a chatbot supposed to cross an air gap unless you assume it has what I consider unrealistic levels of persuasion?<p>I think you also have to consider that AI with superpowers is not going to materialize overnight. If superintelligent AI is on the horizon, the first such AI will be comparable to very capable humans (who do not have the ability to talk their way into nuclear launch codes or out of decades-long prison sentences at will). Energy costs will still be tremendous, and just keeping the system going will require enormous levels of human cooperation. The world will change a lot in that kind of scenario, and I don't know how reasonable it is to claim anything more than the observation of potential risks in a world so different from the one we know.<p>Is it possible that search ends up doing as much for persuasion as it does for chess, superintelligent AI happens relatively soon, and it doesn't have prohibitive energy costs such that escape is a realistic scenario? I suppose? Is any of that obvious or even likely? I wouldn't say so.</p>
]]></description><pubDate>Sat, 14 Jun 2025 15:33:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=44276982</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=44276982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44276982</guid></item><item><title><![CDATA[New comment by PollardsRho in "The Claude Bliss Attractor"]]></title><description><![CDATA[
<p>If someone who is so good at manipulation that their life is adapted into a movie still ends up serving decades behind bars, isn't that actually a pretty good indication that maxing out Speech doesn't give you superpowers?<p>AI that's as good at persuasion as a persuasive human is clearly impactful, but I certainly don't see it as self-evident that you can just keep drawing the line out until you end up with a 200 IQ AI that can manipulate its environment so easily that it's not worth elaborating how exactly a chatbot, through extremely limited interfaces with the outside world, is supposed to do so.</p>
]]></description><pubDate>Sat, 14 Jun 2025 06:20:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44274590</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=44274590</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44274590</guid></item><item><title><![CDATA[New comment by PollardsRho in "The Claude Bliss Attractor"]]></title><description><![CDATA[
<p>I don't think there's a confident upper bound. I just don't see why it's self-evident that the upper bound is beyond anything we've ever seen in human history.</p>
]]></description><pubDate>Sat, 14 Jun 2025 06:08:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44274548</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=44274548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44274548</guid></item><item><title><![CDATA[New comment by PollardsRho in "The Claude Bliss Attractor"]]></title><description><![CDATA[
<p>People are hurt by animals all the time: do you think having a higher IQ than a grizzly bear means you have nothing to fear from one?<p>I certainly think it's possible to imagine that an AI that says the exactly correct thing in any situation would be much more persuasive than any human. (Is that actually possible given the limitations of hardware and information? Probably not, but it's at least not on its face impossible.) Where I think most of these arguments break down is the automatic "superintelligence = superpowers" analogy.<p>For every genius who became a world-famous scientist, there are ten who died in poverty or war. Intelligence doesn't correlate with the ability to actually impact our world as strongly as people would like to think, so I don't think it's reasonable to extrapolate that outwards to a kind of intelligence we've never seen before.</p>
]]></description><pubDate>Sat, 14 Jun 2025 06:06:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44274539</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=44274539</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44274539</guid></item><item><title><![CDATA[New comment by PollardsRho in "The Claude Bliss Attractor"]]></title><description><![CDATA[
<p>Why is 2) "self-evident"? Do you think it's a given that, in any situation, there's something you could say that would manipulate humans to get what you want? If you were smart enough, do you think you could talk your way out of prison?</p>
]]></description><pubDate>Fri, 13 Jun 2025 23:21:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44273133</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=44273133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44273133</guid></item><item><title><![CDATA[New comment by PollardsRho in "Monsky's Theorem"]]></title><description><![CDATA[
<p>Thanks for sharing this proof! As someone who enjoys math but never got myself through enough Galois theory to finish the standard proof, it's fantastic to see a proof that's more elementary while still giving a sense of why the group structure is important.</p>
]]></description><pubDate>Sat, 19 Apr 2025 23:09:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=43740182</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=43740182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43740182</guid></item><item><title><![CDATA[New comment by PollardsRho in "Analytic Combinatorics – A Worked Example"]]></title><description><![CDATA[
<p>At that point, you'd be better off just using a recursive algorithm like the one in GMP. You're swapping out arbitrary-length for arbitrary-precision.</p>
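<p>For reference, a minimal sketch of the fast-doubling recurrences (the flavor of scheme GMP-style bignum libraries use), which gets the exact value in O(log n) big-integer multiplications:</p><pre><code>def fib_pair(n: int) -> tuple[int, int]:
    """Return (F(n), F(n+1)) exactly, via
    F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)  # a = F(k), b = F(k+1)
    c = a * (2 * b - a)      # F(2k)
    d = a * a + b * b        # F(2k+1)
    return (c, d) if n % 2 == 0 else (d, c + d)

print(fib_pair(100)[0])  # 354224848179261915075
</code></pre>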
]]></description><pubDate>Fri, 11 Apr 2025 22:09:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=43659214</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=43659214</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43659214</guid></item><item><title><![CDATA[New comment by PollardsRho in "Stop using e for compound interest"]]></title><description><![CDATA[
<p>The compound-interest intro to e (the value of 1 dollar compounded continuously for a year at 100% interest), to me, has several useful advantages over different introductions that are more mathematically rich:<p>- It's elementary to the point that you can introduce it whenever you want.<p>- It automatically gives a sense of scale: larger than 2, but not by a lot.<p>- At least to me, it confers some sense of importance. You can get the sense that this number e has some deep connection to infinity and infinitesimal change and deserves further study even if you haven't seen calculus before.<p>- It directly suggests a way of calculating e, which "the base of the exponential function with derivative equal to itself" doesn't suggest as cleanly.<p>I don't know of any calculus course that relies on this definition for much: that's not its purpose. The goal is just to give students a fairly natural introduction to the constant before you show that e^x and ln x have their own unique properties that will be more useful for further manipulation.</p>
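<p>That suggested calculation is only a few lines to try:</p><pre><code># e as the limit of compounding: one dollar at 100% annual interest,
# compounded n times over the year.
for n in (1, 2, 12, 365, 10**6):
    print(n, (1 + 1 / n) ** n)
# 2.0, 2.25, 2.613..., 2.714..., 2.71828... -> e = 2.718281828...
</code></pre>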
]]></description><pubDate>Fri, 11 Apr 2025 22:06:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43659182</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=43659182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43659182</guid></item><item><title><![CDATA[New comment by PollardsRho in "That's a Lot of YAML"]]></title><description><![CDATA[
<p>I will die on the hill that TOML should be used for the vast majority of what YAML's used for today. There are times a full language is needed, but I've seen so many YAML files that use none of the features YAML has with all of the footguns.</p>
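<p>The classic footgun, assuming PyYAML (which implements YAML 1.1): an unquoted no silently becomes a boolean, while TOML makes you quote strings in the first place.</p><pre><code>import yaml     # pip install pyyaml
import tomllib  # stdlib since Python 3.11

print(yaml.safe_load("country: no"))    # {'country': False} -- surprise!
print(tomllib.loads('country = "no"'))  # {'country': 'no'}
</code></pre>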
]]></description><pubDate>Fri, 11 Apr 2025 15:44:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=43655153</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=43655153</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43655153</guid></item><item><title><![CDATA[New comment by PollardsRho in "Analytic Combinatorics – A Worked Example"]]></title><description><![CDATA[
<p>They then say there's an approximation for Fibonacci, which makes me think that's what they're calling Binet's formula. (I'd also expect an author with this mathematical sophistication to be aware of Binet's formula, but maybe I'm projecting.)</p>
]]></description><pubDate>Tue, 08 Apr 2025 19:03:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43625380</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=43625380</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43625380</guid></item><item><title><![CDATA[New comment by PollardsRho in "Analytic Combinatorics – A Worked Example"]]></title><description><![CDATA[
<p>I don't think it's controversial to say that asymptotic analysis has flaws: the conclusions you draw from it only hold in the limit of larger inputs, and sometimes "larger" means "larger than anything you'd be able to run it on." Perhaps as Moore's law dies we'll be increasingly able to talk about specific problem sizes in a way that won't become obsolete immediately.<p>I suppose my question is why you think TCS people would do this analysis and development better than non-TCS people. Once you leave the warm cocoon of big-O, the actual practical value of an algorithm depends hugely on specific hardware details. Similarly, once you stop dealing with worst-case or naive average-case complexity, you have to try to define a data distribution relevant for specific real-world problems. My (relatively uninformed) sense is that the skill set required to, say, implement transformer attention customized to the specific hierarchical memory layout of NVIDIA datacenter GPUs, or evaluate evolutionary optimization algorithms on a specific real-world problem domain, isn't necessarily something you gain in TCS itself.<p>When you can connect theory to the real world, it's fantastic, but my sense is that such connections are often desired and rarely found. At the very least, I'd expect them to often come as a response to applied CS rather than originating in TCS: it's observed empirically that the simplex algorithm works well in practice, and then that encourages people to revisit the asymptotic analysis and refine it. I'd worry that TCS work trying to project onto applications from the blackboard would lead to less rigorous presentations and a lot of work that's only good on paper.</p>
]]></description><pubDate>Tue, 08 Apr 2025 19:01:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=43625358</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=43625358</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43625358</guid></item><item><title><![CDATA[New comment by PollardsRho in "Analytic Combinatorics – A Worked Example"]]></title><description><![CDATA[
<p>Very cool!<p>What's meant by "it’s already too much to ask for a closed form for fibonacci numbers"? Binet's formula is usually called a closed form in my experience. Is "closed form" here supposed to mean "closed form we can evaluate without needing arbitrary-precision arithmetic"?</p>
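<p>For concreteness, a minimal sketch of that reading: Binet's formula F(n) = (phi^n - psi^n)/sqrt(5) is an exact closed form as algebra, but evaluated in 64-bit floats it drifts off the true integer once n gets large.</p><pre><code>import math

SQRT5 = math.sqrt(5)
PHI, PSI = (1 + SQRT5) / 2, (1 - SQRT5) / 2

def fib_binet(n: int) -> int:
    # Exact in algebra; lossy once PHI**n outgrows float precision.
    return round((PHI**n - PSI**n) / SQRT5)

def fib_exact(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_binet(80))  # disagrees with the exact value below
print(fib_exact(80))  # 23416728348467685
</code></pre>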
]]></description><pubDate>Tue, 08 Apr 2025 18:37:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=43625111</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=43625111</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43625111</guid></item><item><title><![CDATA[New comment by PollardsRho in "AI 2027"]]></title><description><![CDATA[
<p>It seems to me that much of recent AI progress has not changed the fundamental scaling principles underlying the tech. Reasoning models are more effective, but at the cost of more computation: it's more for more, not more for less. The logarithmic relationship between model resources and model quality (as Altman himself has characterized it), phrased a different way, means that you need exponentially more energy and resources for each marginal increase in capabilities. GPT-4.5 is unimpressive in comparison to GPT-4, and at least from the outside it seems like it cost an awful lot of money. Maybe GPT-5 is slightly less unimpressive and significantly more expensive: is that the through-line that will lead to the singularity?<p>Compare the automobile. Automobiles today are a lot nicer than they were 50 years ago, and a lot more efficient. Does that mean cars that never need fuel or recharging are coming soon, just because the trend has been higher efficiency? No, because the fundamental physical realities of drag still limit efficiency. Moreover, it turns out that making 100% efficient engines with 100% efficient regenerative brakes is really hard, and "just throw more research at it" isn't a silver bullet. That's not "there won't be many future improvements", but it is "those future improvements probably won't be any bigger than the jump from GPT-3 to o1, which does not extrapolate to what OP claims their models will do in 2027."<p>AI in 2027 might be the metaphorical brand-new Lexus to today's beat-up Kia. That doesn't mean it will drive ten times faster, or take ten times less fuel. Even if high-end cars can be significantly more efficient than what average people drive, that doesn't mean the extra expense is actually worth it.</p>
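<p>The arithmetic behind that rephrasing, with purely illustrative numbers: if quality grows like log10(compute), then inverting the relationship makes each fixed quality increment cost a constant multiple of compute, not a constant amount.</p><pre><code># Hypothetical scaling law: quality = log10(compute), inverted.
for quality in range(1, 5):
    compute = 10 ** quality
    print(quality, compute)
# 10, 100, 1000, 10000: each +1 of "quality" costs 10x, not +10.
</code></pre>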
]]></description><pubDate>Thu, 03 Apr 2025 18:01:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=43573245</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=43573245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43573245</guid></item><item><title><![CDATA[New comment by PollardsRho in "Bitter Lesson is about AI agents"]]></title><description><![CDATA[
<p>It's not that technical work is guaranteed to be in your codebase 10 years from now, it's that customers don't want to use a product that might be good six months from now. The actors in the best position to use new AI advances are the ones with good brands, customer bases, engineering know-how that does transfer, etc.</p>
]]></description><pubDate>Mon, 24 Mar 2025 21:51:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=43465788</link><dc:creator>PollardsRho</dc:creator><comments>https://news.ycombinator.com/item?id=43465788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43465788</guid></item></channel></rss>