<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jdoliner</title><link>https://news.ycombinator.com/user?id=jdoliner</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 04 Apr 2026 09:17:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jdoliner" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jdoliner in "Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?"]]></title><description><![CDATA[
<p>I've always liked that HN typically has comments that are small bits of research relevant to the post, research I could have done myself but don't have to because someone else did it for me. In a sense the "I asked $AI, and it said" comments are just the evolved form of that. However, the presentation does matter a little, at least to me. Explicitly stating that you asked AI feels a little like an appeal to authority... and a bad one at that, and it makes the comment feel low effort. Oftentimes comments that frame themselves this way are missing the "last-mile" effort that tailors the LLM's response to the context of the post.<p>So I think maybe the guidelines should say something like:<p>HN readers appreciate research in comments that brings information relevant to the post. The best way to make such a comment is to find the information, summarize it in your own words, explain why it's relevant to the post, and then link to the source if necessary. Adding "$AI said" or "Google said" generally makes your post worse.<p>---------<p>Also I asked ChatGPT and it said:<p>Short Answer<p>HN shouldn’t outright ban those comments, but it should culturally discourage them, the same way it discourages low-effort regurgitation, sensationalism, or unearned certainty. HN works when people bring their own insight, not when they paste the output of a stochastic parrot.<p>A rule probably isn’t needed. A norm is.</p>
]]></description><pubDate>Tue, 09 Dec 2025 16:47:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46207156</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=46207156</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46207156</guid></item><item><title><![CDATA[New comment by jdoliner in "Prompt Injection via Poetry"]]></title><description><![CDATA[
<p>Wordcels, rise up!</p>
]]></description><pubDate>Wed, 03 Dec 2025 19:23:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46138776</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=46138776</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46138776</guid></item><item><title><![CDATA[New comment by jdoliner in "OpenAI declares 'code red' as Google catches up in AI race"]]></title><description><![CDATA[
<p>I've seen a rumor going around that OpenAI hasn't had a successful pre-training run since mid-2024. This seemed insane to me, but if you give ChatGPT 5.1 a query about current events and instruct it not to use the internet, it will tell you its knowledge cutoff is June 2024. Not sure if maybe that's just the smaller model or what. But I don't think it's a good sign to get that from any frontier model today; that's 18 months ago.</p>
]]></description><pubDate>Tue, 02 Dec 2025 20:55:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46126750</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=46126750</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46126750</guid></item><item><title><![CDATA[New comment by jdoliner in "Leak confirms OpenAI is preparing ads on ChatGPT for public roll out"]]></title><description><![CDATA[
<p>It's not totally obvious to me that you can get the economics of this to work. A Google search costs ~0.04 cents to serve, whereas a frontier reasoning LLM request costs about 2 cents. The revenue from a Google search is also around 2 cents. So the margins on an LLM are dangerously thin.<p>Now there are lots of variables that can be tweaked here, so it's possible to get it to work. But there's a lot less room for error.</p>
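A back-of-the-envelope sketch of that arithmetic. All three figures are the rough estimates quoted above, not measured data, and the LLM is assumed to earn the same ad revenue per query as a search:

```python
# Rough per-query unit economics, in US cents.
# All figures are estimates from the comment above, not measured data.
SEARCH_COST = 0.04   # est. cost to serve one Google search
LLM_COST = 2.0       # est. cost of one frontier reasoning LLM request
REVENUE = 2.0        # est. ad revenue per query, assumed equal for both

search_margin = REVENUE - SEARCH_COST  # search keeps nearly all the revenue
llm_margin = REVENUE - LLM_COST        # the LLM is roughly break-even

print(f"search margin: {search_margin:.2f}c/query")
print(f"LLM margin:    {llm_margin:.2f}c/query")
```

Under these assumptions a search clears about 1.96 cents per query while the LLM clears roughly zero, which is the "dangerously thin" margin in question.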
]]></description><pubDate>Sun, 30 Nov 2025 01:24:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46092622</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=46092622</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46092622</guid></item><item><title><![CDATA[New comment by jdoliner in "Nano Banana Pro"]]></title><description><![CDATA[
<p>It's a funny juxtaposition to slap the "Pro" label on it which makes it sound more enterprisey but leave the name as Nano Banana.</p>
]]></description><pubDate>Thu, 20 Nov 2025 16:34:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=45994492</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=45994492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45994492</guid></item><item><title><![CDATA[New comment by jdoliner in "Public trust demands open-source voting systems"]]></title><description><![CDATA[
<p>Throughout most of the non-US parts of the western world, voting works quite well using paper ballots and hand counts. Any organization treating voting like a tech problem is willfully oblivious to the existing, very good low-tech solutions. I think the intention is often good. But tech is also a new vector for attacking elections, so sometimes it's malicious. And it's very hard to tell the difference, and with elections even the appearance of interference is risky. We should outright reject technical solutions to voting; all they do is add risk.</p>
]]></description><pubDate>Tue, 21 Oct 2025 18:05:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45659297</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=45659297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45659297</guid></item><item><title><![CDATA[New comment by jdoliner in "Matrices can be your friends (2002)"]]></title><description><![CDATA[
<p>With the advent of things like r/MyBoyfriendIsAI I was expecting a substantially different article than the one I clicked into.</p>
]]></description><pubDate>Mon, 13 Oct 2025 14:56:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45569009</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=45569009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45569009</guid></item><item><title><![CDATA[New comment by jdoliner in "Hosting a website on a disposable vape"]]></title><description><![CDATA[
<p>Hey can you print this paper off my vape bro? I need to turn it in to my next class.</p>
]]></description><pubDate>Mon, 15 Sep 2025 14:05:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45249920</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=45249920</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45249920</guid></item><item><title><![CDATA[New comment by jdoliner in "Anthropic raises $13B Series F"]]></title><description><![CDATA[
<p>Every round Anthropic raises twists the knife deeper into SBF. If only he could have survived the downturn, his Anthropic investment alone probably could have papered over the other losses.</p>
]]></description><pubDate>Tue, 02 Sep 2025 16:21:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45105172</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=45105172</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45105172</guid></item><item><title><![CDATA[New comment by jdoliner in "A dark money group is funding high-profile Democratic influencers"]]></title><description><![CDATA[
<p>The problem is that the DNC establishment feels the same way about Bernie, AOC and Zohran as it does about the normal people it can't talk to.</p>
]]></description><pubDate>Fri, 29 Aug 2025 14:23:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45064545</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=45064545</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45064545</guid></item><item><title><![CDATA[New comment by jdoliner in "AI is different"]]></title><description><![CDATA[
<p>I feel like I see these two opposite behaviors. People who formed an opinion about AI from an older model and haven't updated it. And people who have an opinion about what AI will be able to do in the future and refuse to acknowledge that it doesn't do that in the present.<p>And often when the two are arguing it's tricky to tell which is which, because whether or not it does something isn't totally black and white. There are some things it can sometimes do, and you can argue either way about whether those count as being within its capabilities.</p>
]]></description><pubDate>Sat, 16 Aug 2025 17:11:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=44925206</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=44925206</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44925206</guid></item><item><title><![CDATA[New comment by jdoliner in "LLM Inflation"]]></title><description><![CDATA[
<p>This is what the argument I read claimed, I haven't verified it.</p>
]]></description><pubDate>Wed, 06 Aug 2025 13:21:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44811616</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=44811616</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44811616</guid></item><item><title><![CDATA[New comment by jdoliner in "LLM Inflation"]]></title><description><![CDATA[
<p>I saw an interesting argument recently that the reason you get this type of verbose language in corporate settings is that English lacks a formal tense. Apparently it's much less common in languages that have one. But in corporate English the verbosity is used as a signal that you took time to produce the text out of respect for the person you're communicating with.<p>This of course now gets weird with LLMs because I doubt it can last as a signal of respect for very long when it just means you fed some bullet points to ChatGPT.</p>
]]></description><pubDate>Wed, 06 Aug 2025 13:06:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44811435</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=44811435</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44811435</guid></item><item><title><![CDATA[New comment by jdoliner in "Why Cline doesn't index your codebase"]]></title><description><![CDATA[
<p>Does anybody else see high CPU and GPU utilization on this site with a process called ClineBot?</p>
]]></description><pubDate>Tue, 27 May 2025 20:59:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=44110629</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=44110629</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44110629</guid></item><item><title><![CDATA[New comment by jdoliner in "Good Writing"]]></title><description><![CDATA[
<p>Beauty is truth, truth is beauty.</p>
]]></description><pubDate>Sat, 24 May 2025 15:18:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44081672</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=44081672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44081672</guid></item><item><title><![CDATA[New comment by jdoliner in "Is 1 Prime, and Does It Matter?"]]></title><description><![CDATA[
<p>Mathematicians generally feel that a single number qualifies as a "product of 1 number." So 7 can be written as just 7 which is still considered a product of prime(s). This is purely a convention thing to make it so theorems can be stated more succinctly, as with not counting 1 as prime.</p>
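As a toy illustration of the convention, Python's `math.prod` happens to encode both edge cases, including the empty product that lets 1 itself be "the product of no primes":

```python
from math import prod

# A "product of primes" is allowed to have just one factor:
# 7 = 7 is a perfectly good prime factorization.
factors_of_7 = [7]
assert prod(factors_of_7) == 7

# It is even allowed to have zero factors: the empty product is 1
# by convention, which is how 1 factors as "the product of no primes".
factors_of_1 = []
assert prod(factors_of_1) == 1
```

Both conventions exist for the same reason: the fundamental theorem of arithmetic ("every positive integer is a product of primes in exactly one way") can then be stated without special-casing 1 or the primes themselves.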
]]></description><pubDate>Mon, 21 Apr 2025 20:28:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=43756115</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=43756115</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43756115</guid></item><item><title><![CDATA[New comment by jdoliner in "Middle-aged man trading cards go viral in rural Japan town"]]></title><description><![CDATA[
<p>> Middle-aged man trading cards<p>Examples are Mr. Honda (74), Mr. Takeshita (81) and Mr. Fujii (68). Japanese are just built different I guess.</p>
]]></description><pubDate>Tue, 08 Apr 2025 15:07:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=43622651</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=43622651</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43622651</guid></item><item><title><![CDATA[New comment by jdoliner in "Tell HN: Announcing tomhow as a public moderator"]]></title><description><![CDATA[
<p>My experience is that HN's Overton window is probably on average 15-20% larger than most forums. That's not uniform across all topics though. So if you skew toward a particular set of topics it may feel like a typical forum, or even in some ways more constrained.</p>
]]></description><pubDate>Wed, 02 Apr 2025 18:40:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=43559954</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=43559954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43559954</guid></item><item><title><![CDATA[New comment by jdoliner in "Tell HN: Announcing tomhow as a public moderator"]]></title><description><![CDATA[
<p>> On the face of it, HN should be terrible. It's a forum owned by an investment firm as promotion for their business.<p>I think it's at least as plausible that this is part of the magic that makes it good. HN is sufficiently "on the margin" that they don't have to do things like placate advertisers with their moderation policies. The mods like dang, tomhow and pg mostly care about HN as users rather than owners.<p>> It would be a good thing for the world if HN was spun out as a non-profit and maintained long-term.<p>That sounds good in theory... in practice it might be the beginning of the end. Once there's a non-profit behind it, the non-profit has a mission of its own. Although I'm actually not sure of the legal status of HN right now; maybe it's already something like that.</p>
]]></description><pubDate>Wed, 02 Apr 2025 18:37:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43559902</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=43559902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43559902</guid></item><item><title><![CDATA[New comment by jdoliner in "An election forecast that’s 50-50 is not “giving up”"]]></title><description><![CDATA[
<p>I don't know about the framing of "giving up." But I think anyone who's been following election models since the original 538 in 2008 has probably gotten the feeling that they have less alpha in them than they did back then. I think there are some obvious reasons for this that the forecasters would probably agree with.<p>The biggest one seems to be a case of Goodhart's Law, leading to herding. Pollsters care a lot now about what their rating is in forecasting models, so they're reluctant to publish outlier results; those outlier results are very valuable for the models but are likely to get a pollster punished in the ratings next cycle.<p>Lots of changes to polling methods have been made due to polls underestimating Trump. Polls have become like mini models unto themselves. Due to their inability to poll a representative slice of the population, they try to correct by adjusting their results to compensate for the difference between who they've polled and the likely makeup of the electorate. This makes sense in theory, but of course introduces a whole bunch of new variables that need to be tuned correctly.<p>On top of all this is the fact that the process is very high stakes and emotional, with pollsters and modellers alike bringing their own political biases and only being able to resist pressure from political factions so much.<p>The analogy I kept coming back to watching election models during this last cycle was that it looked like an ML model that didn't have the data it needed to make good predictions, and so was making the safest prediction it could given what it did have: basically getting stuck in a local minimum at 50-50 that was least likely to be off by a lot.</p>
]]></description><pubDate>Mon, 10 Mar 2025 21:51:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=43326502</link><dc:creator>jdoliner</dc:creator><comments>https://news.ycombinator.com/item?id=43326502</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43326502</guid></item></channel></rss>