<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gwern</title><link>https://news.ycombinator.com/user?id=gwern</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 14:13:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gwern" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gwern in "The Brainrot Industrial Complex"]]></title><description><![CDATA[
<p>It's a lot easier to look at the whitespace and paragraphs, realize it's an LLM, plug it into Pangram to see that it gets 100% (unsurprisingly), and click to close, than it is to read it with a sucker's good faith and realize that it never says anything concrete or meaningful or unpredictable and contains only junk like canned etymologies or clichéd quotes.</p>
]]></description><pubDate>Sun, 12 Apr 2026 05:50:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47736500</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47736500</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47736500</guid></item><item><title><![CDATA[New comment by gwern in "Creating the Futurescape for the Fifth Element (2019)"]]></title><description><![CDATA[
<p>But is the <i>cinematography</i> there of any interest? Why would OP include it? They're talking about the hard shots, like the exploding spaceship where they need to find a spot in the desert to shoot dozens of mortars at it, or the crazy blue paint needing special UV light exposures to render just right. That looks like... a matte painting? A nice matte painting, sure, important for worldbuilding. But just that.</p>
]]></description><pubDate>Thu, 09 Apr 2026 17:00:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47706164</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47706164</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47706164</guid></item><item><title><![CDATA[New comment by gwern in "Iran demands Bitcoin fees for ships passing Hormuz during ceasefire"]]></title><description><![CDATA[
<p>The 2% is the camel's nose. They are establishing that they tax the Strait traffic and there is no longer freedom of navigation. Once it is a done deal, the deal will be altered...</p>
]]></description><pubDate>Wed, 08 Apr 2026 20:11:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695649</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47695649</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695649</guid></item><item><title><![CDATA[New comment by gwern in "Taste in the age of AI and LLMs"]]></title><description><![CDATA[
<p>My point is more that since you can expect taste's commoditization to lag behind for deep fundamental reasons, taste <i>does</i> serve as a moat. Just perhaps a weaker one than one would naively expect, and one you will have to frantically keep investing in to stay ahead of the LLMs slowly catching up, as opposed to a permanent lock-in you can lazily and monopolistically coast on indefinitely. (I'm reminded of Neal Stephenson's La Brea tar pit analogy for open source vs proprietary software in _In the Beginning... Was the Command Line_.)</p>
]]></description><pubDate>Tue, 07 Apr 2026 20:26:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47680914</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47680914</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47680914</guid></item><item><title><![CDATA[New comment by gwern in "Taste in the age of AI and LLMs"]]></title><description><![CDATA[
<p>They do improve, but the general creativity and sparkle we see with increasing scale comes mostly from scaling up pretraining/parameter-size, so it's quite slow and expensive compared to the speed (and decreasing cost) people have come to take for granted in math/coding in small cheap models. Hence the reaction to GPT-4.5: exactly as much better taste and discernment as it should have had based on scaling laws, yet regarded almost universally as a colossal failure. It was as unpopular as the original GPT-3 was when the paper was released, because people look at the log-esque gains from scaling up 10x or 100x and are disappointed. "Is that all?! What has the Bitter Lesson or scaling done for me <i>lately</i>?"<p>So, you can expect coding skills to continue to outpace the native LLM taste.</p>
]]></description><pubDate>Tue, 07 Apr 2026 19:21:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47680102</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47680102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47680102</guid></item><item><title><![CDATA[New comment by gwern in "After 20 years I turned off Google Adsense for my websites (2025)"]]></title><description><![CDATA[
<p>Similar numbers for other countries: <a href="https://gwern.net/banner#they-just-dont-know" rel="nofollow">https://gwern.net/banner#they-just-dont-know</a>. Wouldn't shock me if it's even lower now as people move to walled gardens like smartphones.</p>
]]></description><pubDate>Tue, 07 Apr 2026 03:38:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670463</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47670463</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670463</guid></item><item><title><![CDATA[New comment by gwern in "My son pleasured himself on Gemini Live. Entire family's Google accounts banned"]]></title><description><![CDATA[
<p>The description by OP in comments like <a href="https://old.reddit.com/r/LegalAdviceUK/comments/1s92fql/my_son_pleasured_himself_in_front_of_gemini_live/odlduf9/" rel="nofollow">https://old.reddit.com/r/LegalAdviceUK/comments/1s92fql/my_s...</a> seems to strongly imply that all the accounts were unconnected in a GSuite sense, and they are being slowly recursively banned by Google based on indirect connections like recovery emails or co-presence on a device.<p>I don't see any reasonable way they could have saved themselves besides something crazy like requiring every family member use a different feudal lord - one person gets Google, one person gets Apple, one poor guy gets Microsoft...</p>
]]></description><pubDate>Wed, 01 Apr 2026 03:50:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47596591</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47596591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47596591</guid></item><item><title><![CDATA[New comment by gwern in "I am definitely missing the pre-AI writing era"]]></title><description><![CDATA[
<p>You'll love GreaterWrong, then: <a href="https://www.greaterwrong.com/posts/BJ4pnropWdnzzgeJc/i-am-definitely-missing-the-pre-ai-writing-era" rel="nofollow">https://www.greaterwrong.com/posts/BJ4pnropWdnzzgeJc/i-am-de...</a></p>
]]></description><pubDate>Mon, 30 Mar 2026 18:53:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47578234</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47578234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47578234</guid></item><item><title><![CDATA[New comment by gwern in "Show HN: Veil – Dark mode PDFs without destroying images, runs in the browser"]]></title><description><![CDATA[
<p>FWIW, we did consider a histogram heuristic, and I believe GreaterWrong still uses one rather than InvertOrNot.com. But I regularly saw images on GW where the heuristic got it wrong but ION got it right, so the accuracy gap was meaningful; and that's why we went for ION rather than port over the histogram heuristic.</p>
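For concreteness, here is a minimal sketch of the kind of histogram heuristic being discussed — an illustrative version written for this note, not GreaterWrong's actual code: treat an image as grayscale pixel values and invert only when most pixels are near-white, as in a dark-on-light line diagram.

```python
# Hypothetical histogram-based invert-or-dim heuristic (illustrative only,
# not GreaterWrong's implementation). An image is modeled as a flat list
# of 8-bit grayscale pixel values.

def should_invert(pixels, light_threshold=200, light_fraction=0.6):
    """Return True if enough pixels are near-white, suggesting a
    line diagram on a light background rather than a photograph."""
    if not pixels:
        return False
    light = sum(1 for p in pixels if p >= light_threshold)
    return light / len(pixels) >= light_fraction

def invert(pixels):
    """Naive inversion for 8-bit grayscale."""
    return [255 - p for p in pixels]

# A mostly-white diagram gets inverted; a mid-gray photo does not.
diagram = [255] * 90 + [0] * 10
photo = [120] * 100
assert should_invert(diagram)
assert not should_invert(photo)
assert invert(diagram)[0] == 0
```

The failure mode described in the comment is visible in the thresholds: a light-background photograph (e.g. a product shot on white) clears `light_fraction` just as easily as a diagram, which is the gap a learned classifier like InvertOrNot closes.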
]]></description><pubDate>Fri, 27 Mar 2026 04:36:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47539151</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47539151</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47539151</guid></item><item><title><![CDATA[New comment by gwern in "Show HN: Veil – Dark mode PDFs without destroying images, runs in the browser"]]></title><description><![CDATA[
<p>> It's absolutely true that there's a subset of raster images, like diagrams with white backgrounds and black lines, that would benefit from inversion. I could be wrong, but in my experience they're a minority, and the cost of accidentally inverting the wrong one (a medical photo, a color chart) is much higher than the benefit of inverting a black and white diagram, from my point of view. For now the per-page toggle covers those cases.<p>OK, so I did understand, but this sounds very hand-wavy to me. You say it's a 'minority'; well sure, I never claimed it was >50% of images, so I suppose yes, that's technically true. And it is also true that a false positive on inverting is usually nastier than a false negative, which is why everyone defaults to dimming rather than inverting.<p>But you don't sound like you have evaluated it very seriously, and at least on my part, when I browse my dark-mode Gwern.net pages, I see lots of images and diagrams which benefit from inverting and where I'm glad we have InvertOrNot.com to rely on (and it's rarely wrong).<p>It may be nice to be able to advertise "No AI" at the top of the page, but I don't understand why you are so committed to biting this bullet and settling for leaving images badly handled when there is such a simple, easy-to-use solution you can outsource to, and there's not a whole lot else a 'dark mode PDF' <i>can</i> do if 'handle images correctly' is now out of scope as acceptable collateral damage and the answer is 'meh, the user can just solve it every time they read every affected page by pushing a button'. (If Veil doesn't exist to save the user effort and bad-looking PDFs, why does it exist?)</p>
]]></description><pubDate>Fri, 27 Mar 2026 04:34:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47539140</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47539140</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47539140</guid></item><item><title><![CDATA[New comment by gwern in "Show HN: Veil – Dark mode PDFs without destroying images, runs in the browser"]]></title><description><![CDATA[
<p>> In practice they're rare enough that the per-page toggle handles them, but it's the honest limitation of the approach.<p>I don't understand how you handle raster images. You simply cannot invert them blindly. So it sounds like you just bite the bullet of never inverting raster images, and accepting that you false-positive some vector-based diagrams? I don't see how that can justify your conclusion "it wasn't necessary". It sounds necessary to me.</p>
]]></description><pubDate>Fri, 27 Mar 2026 01:17:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47537976</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47537976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47537976</guid></item><item><title><![CDATA[New comment by gwern in "Show HN: Veil – Dark mode PDFs without destroying images, runs in the browser"]]></title><description><![CDATA[
<p>Have you considered, since you can extract the images via the mask, selectively inverting them?<p>One can fairly reliably use a small NN to classify images by whether they should be inverted or just dimmed, and I've used it with great success for years now on my site: <a href="https://invertornot.com/" rel="nofollow">https://invertornot.com/</a> <a href="https://gwern.net/invertornot" rel="nofollow">https://gwern.net/invertornot</a><p>---<p>On a side note, it'd be nice to have an API or something to let one 'compile' a PDF to a dark-mode PDF. Being ephemeral and browser-based is a drawback as often as a benefit.</p>
]]></description><pubDate>Fri, 27 Mar 2026 00:38:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47537706</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47537706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47537706</guid></item><item><title><![CDATA[New comment by gwern in "Personal Encyclopedias"]]></title><description><![CDATA[
<p>A good wiki like MediaWiki supports various levels of visibility. For example, you could define a namespace for each group of readers, like 'Family:'. Or use transclusions from subpages. (This might sound like a bit of a hassle, but you can use a template to set it up once and for all: a page transcludes a public subpage, followed by the distant-relatives material, followed by parents/siblings, followed by your-eyes-only.) And I'm sure one could come up with other approaches too.<p>A real example: Said Achmiz (obormot.net) uses PmWiki for his D&D campaigns, and PmWiki lets you control who can see a page, so he can do access-control tricks like a page for a location, where only the DM can see all the subpages with all the secrets, while each player can see their own 'notes' subpage. So everyone in their own web browser can go to the same page and see the same thing overall, but will see just their private additional information. And this is quite flexible, so you can encode whatever patterns you need. You don't need some fancy custom WotC CMS for your D&D campaign to keep track of information and silo it appropriately; you just need a design pattern on wikis.</p>
]]></description><pubDate>Thu, 26 Mar 2026 23:01:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47536946</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47536946</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47536946</guid></item><item><title><![CDATA[New comment by gwern in "So where are all the AI apps?"]]></title><description><![CDATA[
<p>> Detractors of AI are often accused of moving the goalposts, but I think your comment is guilty of the same. Before Claude Code, we had Cursor, GitHub Copilot, and more. Each of these was purportedly revolutionizing software engineering.<p>What's sauce for the goose is sauce for the gander. If you make the argument that 'I don't believe in kinks or discontinuities in code releases due to AI, because so many AI coding systems have come out incrementally since 2020', then OP <i>does</i> provide strong evidence for an AI acceleration - the smooth exponential!</p>
]]></description><pubDate>Tue, 24 Mar 2026 18:22:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47506963</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47506963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47506963</guid></item><item><title><![CDATA[New comment by gwern in "Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning"]]></title><description><![CDATA[
<p>I was being sarcastic, because that point obviously also applies to the subjects in the experiment.</p>
]]></description><pubDate>Mon, 23 Mar 2026 01:54:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47484612</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47484612</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47484612</guid></item><item><title><![CDATA[New comment by gwern in "Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning"]]></title><description><![CDATA[
<p>One might worry that it would increase the authors' confidence even when their LLM rewrites introduced errors, reducing accuracy overall regardless of the moderators.</p>
]]></description><pubDate>Sun, 22 Mar 2026 23:30:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47483490</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47483490</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47483490</guid></item><item><title><![CDATA[New comment by gwern in "The gold standard of optimization: A look under the hood of RollerCoaster Tycoon"]]></title><description><![CDATA[
<p>End-to-end optimization in action! Although I'd've liked more than 1 example (pathfinding) here.</p>
]]></description><pubDate>Sun, 22 Mar 2026 23:24:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47483428</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47483428</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47483428</guid></item><item><title><![CDATA[New comment by gwern in "I turned Markdown into a protocol for generative UI"]]></title><description><![CDATA[
<p>Or more precisely, isn't this reinventing notebooks (not the first JS-centric notebook either)?</p>
]]></description><pubDate>Fri, 20 Mar 2026 19:00:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47459083</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47459083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47459083</guid></item><item><title><![CDATA[New comment by gwern in "Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster"]]></title><description><![CDATA[
<p>Yes, it's greedy so may hit local optima. You can fit learning curves and try to extrapolate out to avoid that problem, to let you run long enough to be reasonably sure of a dead end, and periodically revive past candidates to run longer. See past hyperparameter approaches like freeze-thaw <a href="https://arxiv.org/abs/1406.3896" rel="nofollow">https://arxiv.org/abs/1406.3896</a> .</p>
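As a toy illustration of the learning-curve-extrapolation idea (my own sketch, not the freeze-thaw algorithm from the paper): fit a power law loss(t) = a * t^-b to the observed curve by least squares in log-log space, then extrapolate each candidate to a large budget and keep the one that projects lowest.

```python
# Toy learning-curve extrapolation: fit loss(t) = a * t**-b in
# log-log space, then compare candidates at a future step count.
# Illustrative only; freeze-thaw uses a Bayesian model instead.
import math

def fit_power_law(losses):
    """Fit loss(t) = a * t**-b to losses observed at t = 1, 2, ..."""
    xs = [math.log(t + 1) for t in range(len(losses))]
    ys = [math.log(l) for l in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - slope * mx)
    return a, -slope  # b is the negated log-log slope

def extrapolate(losses, t_future):
    """Predicted loss at step t_future under the fitted power law."""
    a, b = fit_power_law(losses)
    return a * t_future ** -b

# A fast-improving run projects far below a slowly-improving one,
# even though both look similar after only 10 steps.
fast = [(t + 1) ** -0.5 for t in range(10)]
slow = [(t + 1) ** -0.1 for t in range(10)]
assert extrapolate(fast, 1000) < extrapolate(slow, 1000)
```

The greedy failure mode in the comment corresponds to truncating `fast` too early because `slow` happens to have a lower loss right now; extrapolating first, and periodically re-fitting revived candidates with more data, is what guards against that.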
]]></description><pubDate>Thu, 19 Mar 2026 23:43:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47448110</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47448110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47448110</guid></item><item><title><![CDATA[New comment by gwern in "Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster"]]></title><description><![CDATA[
<p>> The agent can theoretically come up with a protocol to run those same 12 experiments one-by-one and only then decide which branch to explore next - which I think would lead to the same outcome?<p>At least in theory, adaptiveness should save samples and in this case, compute. (As noted, you can always turn the parallel into serial and so the serial approach, which gets information 'from the future', should be able to meet or beat any parallel approach on sample-efficiency.)<p>So if the batch only matches the adaptive search, that suggests that the LLM is not reasoning well in the adaptive setting and is poorly exploiting the additional information. Maybe some sort of more explicit counterfactual reasoning/planning over a tree of possible outcomes?</p>
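A toy simulation of the sample-efficiency point (my own construction, not anything from the post): finding the best of K noisy "experiments" by adaptive successive elimination spends a fixed, smaller sample budget than running every experiment an equal number of times, because losers stop consuming samples early.

```python
# Adaptive (serial) vs. fixed-batch experiment selection on a toy
# best-arm problem. Illustrative sketch only; parameters are arbitrary.
import random

def batch(means, pulls_per_arm, rng):
    """Run every arm the same number of times; pick the best total."""
    totals = [sum(rng.gauss(m, 1.0) for _ in range(pulls_per_arm))
              for m in means]
    return totals.index(max(totals)), pulls_per_arm * len(means)

def adaptive(means, rng, rounds=4, pulls=8):
    """Successive elimination: each round, drop the worse half."""
    alive = list(range(len(means)))
    used = 0
    for _ in range(rounds):
        if len(alive) == 1:
            break
        scores = {}
        for i in alive:
            scores[i] = sum(rng.gauss(means[i], 1.0) for _ in range(pulls))
            used += pulls
        alive.sort(key=lambda i: scores[i], reverse=True)
        alive = alive[:max(1, len(alive) // 2)]
    return alive[0], used

rng = random.Random(0)
means = [0.0] * 11 + [1.0]  # arm 11 is the best "experiment"
_, batch_cost = batch(means, 32, rng)
_, adaptive_cost = adaptive(means, rng)
assert adaptive_cost < batch_cost  # 168 samples vs. 384
```

Here the adaptive schedule uses information "from the future" of the batch schedule: each elimination round is conditioned on the previous round's outcomes, which is exactly what a batch of pre-committed experiments cannot do.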
]]></description><pubDate>Thu, 19 Mar 2026 23:42:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47448088</link><dc:creator>gwern</dc:creator><comments>https://news.ycombinator.com/item?id=47448088</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47448088</guid></item></channel></rss>