<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: nospice</title><link>https://news.ycombinator.com/user?id=nospice</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 11:55:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=nospice" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by nospice in "eBay explicitly bans AI "buy for me" agents in user agreement update"]]></title><description><![CDATA[
<p>That's a cynical take, so it will probably get upvoted, but what are you basing it on?<p>eBay is a pretty eclectic marketplace, and I can think of a number of possible reasons that have little to do with ads. For example, they may be worried about high error rates, and thus buyer and seller dissatisfaction: if I instruct an agent to buy X, eBay is almost never interchangeable with Amazon or Target.<p>They have no problem surfacing their listings on Google Shopping.</p>
]]></description><pubDate>Thu, 22 Jan 2026 17:11:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46722036</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46722036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46722036</guid></item><item><title><![CDATA[New comment by nospice in "GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers"]]></title><description><![CDATA[
<p>We've been talking about a "crisis of reproducibility" for years, along with the incentive to crank out high volumes of low-quality research. We now have a tool that brings the cost of producing plausible-looking research down to nearly zero. So of course we're going to see that tool abused on a galactic scale.<p>But here's the thing: let's say you're a university or a research institution that wants to curtail it. You catch someone producing LLM slop, and you confirm it by analyzing their work and conducting internal interviews. You fire them. The fired researcher goes public saying that they were doing nothing of the sort and that this is a witch hunt. Their blog post makes it to the front page of HN, garnering tons of sympathy and prompting many angry calls to their ex-employer. It gets picked up by some mainstream outlets, too. This has happened a bunch of times.<p>In contrast, there are basically no consequences for institutions that let it slide. No one is angrily calling the employers of the authors of these 100 NeurIPS papers, right? If anything, there's the plausible deniability of "oh, I only asked ChatGPT to reformat the citations, the rest of the paper is 100% legit, my bad".</p>
]]></description><pubDate>Thu, 22 Jan 2026 16:45:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46721691</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46721691</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46721691</guid></item><item><title><![CDATA[New comment by nospice in "Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant"]]></title><description><![CDATA[
<p>I'm going to lose my mind. This commentary is almost certainly LLM generated.</p>
]]></description><pubDate>Thu, 22 Jan 2026 15:44:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46720779</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46720779</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46720779</guid></item><item><title><![CDATA[New comment by nospice in "Stories removed from the Hacker News Front Page, updated in real time (2024)"]]></title><description><![CDATA[
<p>> I guess I’m making the mistake of assuming others have taken a similar intellectual path as I have.<p>Oh, come on. I know a lot of people who are highly educated and intelligent but fall for the same outrage bait as everyone else... we're bombarded with so many political talking points that we don't carefully consider every headline, verify every source, and then publish nuanced takes on social media where the stories change every hour.<p>The bottom line is that, with all due respect, I absolutely don't care about the political hot takes of people on HN. And I'm sure they don't care about mine. I know where to go when I want to talk politics. If I want measured takes from scholars, I can read their columns or blogs. If I want to argue, I'll do it with family and real-world friends.</p>
]]></description><pubDate>Thu, 22 Jan 2026 04:12:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46715228</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46715228</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46715228</guid></item><item><title><![CDATA[New comment by nospice in "Stories removed from the Hacker News Front Page, updated in real time (2024)"]]></title><description><![CDATA[
<p>> AI, politics, and discussing how HN isn't what it used to be. That's all that's here now. HN isn't what it used to be.<p>Are you spending your time patrolling /newest and upvoting good submissions, then? There are relatively few people doing this and it's easy to have an outsized impact.</p>
]]></description><pubDate>Wed, 21 Jan 2026 17:13:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46708486</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46708486</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46708486</guid></item><item><title><![CDATA[New comment by nospice in "Stories removed from the Hacker News Front Page, updated in real time (2024)"]]></title><description><![CDATA[
<p>All my social media feeds are filled with political rage bait. Yes, tech is political, and yes, techies implicitly take sides; but I really don't need another source for all the political headlines of the day.</p>
]]></description><pubDate>Wed, 21 Jan 2026 16:56:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46708269</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46708269</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46708269</guid></item><item><title><![CDATA[New comment by nospice in "cURL removes bug bounties"]]></title><description><![CDATA[
<p>My point is that <i>on average</i>, filing bad but plausible-sounding reports makes the reporter money. Curl is the odd exception with its naming-and-shaming, not the rule. Spamming H1 with AI-generated reports is lucrative. A modest deposit is unlikely to change that. A big deposit (thousands of dollars) would, but it would also discourage a lot of legitimate reports.</p>
]]></description><pubDate>Wed, 21 Jan 2026 16:05:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46707581</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46707581</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46707581</guid></item><item><title><![CDATA[New comment by nospice in "cURL removes bug bounties"]]></title><description><![CDATA[
<p>> An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick.<p>The problem is that bug bounty slop <i>works</i>. A lot of companies with second-tier bug bounties outsource triage to contractors (there's an entire industry built around that). If a report looks plausible, the contractor files a bug. The engineers who receive the report are often not qualified to debate exploitability, so they just make the suggested fix and move on. The reporter gets credit or a token payout. Everyone is happy.<p>Unless you have a top-notch security team with a lot of time on their hands, pushing back is not in your interest. If you keep getting into fights with reporters, you'll eventually get it wrong, get derided on HN, and get headlines about how you don't take security seriously.<p>In this model, it doesn't matter if you require a deposit, because on average, bogus reports still pay off. You also create an interesting new problem: a sketchy vendor can hold the reporter's money hostage if the reporter doesn't agree to unreasonable terms.</p>
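The "deposits don't change the math" claim can be sketched as a back-of-the-envelope expected-value calculation. Every number below is a made-up assumption for illustration, not data about any real bug bounty program:

```python
# Hypothetical expected-value sketch of mass-filing low-quality bounty
# reports. All parameters are illustrative assumptions, not measurements.

def expected_profit(n_reports: int, accept_rate: float, payout: float,
                    cost_per_report: float, deposit: float = 0.0) -> float:
    """Expected profit from filing n_reports low-quality reports.

    Accepted reports earn the payout and get the deposit back;
    rejected reports forfeit the deposit.
    """
    accepted = n_reports * accept_rate
    rejected = n_reports - accepted
    return accepted * payout - n_reports * cost_per_report - rejected * deposit

# Assumed: 100 AI-generated reports, 5% slip past triage, a $500 token
# payout, and ~$1 of effort per report. Even with a $20 deposit on every
# submission, the expected profit stays positive.
with_deposit = expected_profit(100, 0.05, 500.0, 1.0, deposit=20.0)
without_deposit = expected_profit(100, 0.05, 500.0, 1.0)
```

Under these assumed numbers, the deposit shrinks the spammer's take but doesn't flip its sign; only a deposit large enough to also scare off legitimate reporters would.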
]]></description><pubDate>Wed, 21 Jan 2026 07:23:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46702206</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46702206</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46702206</guid></item><item><title><![CDATA[New comment by nospice in "Electricity use of AI coding agents"]]></title><description><![CDATA[
<p>My point is that it isn't, not really. Usage begets more training, and this will likely continue for many years. So it's not a vanishing fixed cost, but pretty much just an ongoing expenditure associated with LLMs.</p>
]]></description><pubDate>Wed, 21 Jan 2026 00:58:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46699866</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46699866</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46699866</guid></item><item><title><![CDATA[New comment by nospice in "Google co-founder reveals that "many" of the new hires do not have a degree"]]></title><description><![CDATA[
<p>Big Tech can afford to be selective, so if you don't have a degree, the basic answer is that you need to stand out in some other way. This can be several years of interesting industry experience or other publicly-visible work (open source code, winning some competition, or even having a good blog). It also helps to know someone who works there and can help you get the first interview.</p>
]]></description><pubDate>Wed, 21 Jan 2026 00:49:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46699788</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46699788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46699788</guid></item><item><title><![CDATA[New comment by nospice in "Electricity use of AI coding agents"]]></title><description><![CDATA[
<p>I'm not sure I like this method of accounting for it. The critics of LLMs tend to conflate the cost of training LLMs with the cost of generation. But this makes the opposite error: it pretends that training isn't happening as a consequence of consumer demand. Enormous resources are poured into training on an ongoing basis, so that cost needs to be amortized on top of the per-token generation costs.<p>At some point, we might end up in a steady state where the models are as good as they can be and the training arms race is over, but we're not there yet.</p>
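The amortization argument above is just arithmetic: spread the training energy over every token the model serves and add it to the inference cost. The numbers below are purely hypothetical assumptions to show the accounting, not measurements of any real model:

```python
# Hypothetical illustration of amortizing training energy over tokens
# served. All figures are made-up assumptions, not real measurements.

def amortized_wh_per_token(training_wh: float, tokens_served: float,
                           inference_wh_per_token: float) -> float:
    """Energy per token once training is spread across all tokens served."""
    return training_wh / tokens_served + inference_wh_per_token

# Assumed: a 1 GWh training run, 1e12 tokens served over the model's
# lifetime, and 0.3 Wh per generated token. Training then adds a
# 0.001 Wh/token surcharge on top of the inference cost.
total = amortized_wh_per_token(1e9, 1e12, 0.3)
```

The point of the comment is that if training is continual rather than one-off, the first term never amortizes away to zero; it's a recurring surcharge on every token.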
]]></description><pubDate>Wed, 21 Jan 2026 00:05:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46699446</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46699446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46699446</guid></item><item><title><![CDATA[New comment by nospice in "LG UltraFine Evo 6K 32-inch Monitor Review"]]></title><description><![CDATA[
<p>I don't think people care all that much about phones. It's just that phones are power-constrained, so manufacturers <i>wanted</i> to move to OLEDs to save on backlight; and because the displays are small, the tech was easier to roll out there than on 6k 32-inch monitors.<p>But premium displays exist. IPS displays on higher-end laptops, such as ThinkPads, are great - we're talking stuff like 14" 3840x2160, 100% Adobe RGB. The main problem is just that people want to buy truly gigantic panels on the cheap, and there are trade-offs that come with that. But do you really need 2x32" to code?</p>
]]></description><pubDate>Tue, 20 Jan 2026 21:59:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46698288</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46698288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46698288</guid></item><item><title><![CDATA[New comment by nospice in "Google co-founder reveals that "many" of the new hires do not have a degree"]]></title><description><![CDATA[
<p>As far as I know, Google never had a degree requirement for any software engineering job. What they did do pretty aggressively, though, was source candidates from universities with top-notch engineering programs (CMU, Stanford, etc). So they ended up with a significant proportion of such hires not because they rejected everyone else, but because their intake process produced more leads of this sort and treated them preferentially. Basically, for applicants going through that funnel, they guaranteed an onsite interview.<p>But they always had a good number of people with no degrees, or with degrees wholly unrelated to computers.</p>
]]></description><pubDate>Tue, 20 Jan 2026 19:35:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46696701</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46696701</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46696701</guid></item><item><title><![CDATA[New comment by nospice in "De-dollarization: Is the US dollar losing its dominance? (2025)"]]></title><description><![CDATA[
<p>> The international value of the dollar as a reserve and trade currency is inherently tied to the behavior of the US Government and the Federal Reserve.<p>I think this oversimplifies things. The dominance of the dollar emerged chiefly because most of the alternatives were worse, for a combination of military, political, and economic reasons.<p>There is a positive feedback loop at the core of it, because the US economy benefits greatly from being able to issue foreign debt in its own currency. But no one minds that asymmetry: as long as the US faces little risk of getting invaded by any of its neighbors or defaulting on its obligations, everyone is happy.<p>What's been changing - and it started long before Trump - is that the US is also increasingly willing to use its control of USD (and thus the Western banking system) to pursue sometimes petty policy goals. This is giving many of our partners second thoughts, not because of the fundamentals of USD, but because they can imagine finding themselves at odds with US policymakers at some point down the line.</p>
]]></description><pubDate>Tue, 20 Jan 2026 18:22:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46695709</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46695709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46695709</guid></item><item><title><![CDATA[New comment by nospice in "Nvidia Stock Crash Prediction"]]></title><description><![CDATA[
<p>> My 30k ft view is that the stock will inevitably slide as AI datacenter spending goes down.<p>Their stock trajectory started with one boom (cryptocurrencies) and then seamlessly progressed to another (AI). You're basically looking at a decade of "number goes up". So yeah, it will probably come down <i>eventually</i> (or inflation will catch up), but that's a poor argument for betting against them right now.<p>Meanwhile, the investors who were "wrong" anticipating a cryptocurrency revolution and who bought NVDA don't have much to complain about today.</p>
]]></description><pubDate>Tue, 20 Jan 2026 17:19:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46694688</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46694688</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46694688</guid></item><item><title><![CDATA[New comment by nospice in ""Anyone else out there vibe circuit-building?""]]></title><description><![CDATA[
<p>> The core issue isn't that LLMs are bad at circuits, it's that we're asking them to do novel design when they should be doing selection and integration.<p>I don't want to detract from what you're building, but I'm puzzled by this sentence. It very much sounds like the problem is that they're bad at circuits and that you're working around this problem by making them choose from a catalog.<p>Try that for code. "The problem isn't that LLMs are bad at coding, it's that we're asking them to write new programs when they should be doing selection and integration".</p>
]]></description><pubDate>Mon, 19 Jan 2026 16:25:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46680818</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46680818</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46680818</guid></item><item><title><![CDATA[New comment by nospice in ""Anyone else out there vibe circuit-building?""]]></title><description><![CDATA[
<p>Both Gemini and ChatGPT have a comically wrong understanding of op-amps. They usually recommend outdated chips and are confused about circuit topologies. I was looking at this last week, and it hasn't changed. I asked them to suggest and evaluate microphone circuits, and they were just bad. I would really, really recommend reading some human-written text if you're learning about this.<p>I can't think of any reason why you'd want to use Schottky diodes to protect op-amp inputs. They have high leakage currents and poor surge capabilities. Most op-amps have internal protection diodes, and if you need some extra ESD or overvoltage protection, a Schottky diode probably isn't the way.<p>I'm not taking an anti-LLM view here. I think they are useful in some fields and are getting better. But in this particular instance, there's a breadth of excellent learning resources, and the one you've chosen isn't good.</p>
]]></description><pubDate>Mon, 19 Jan 2026 16:15:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46680664</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46680664</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46680664</guid></item><item><title><![CDATA[New comment by nospice in "Tired of AI, people are committing to the analog lifestyle in 2026"]]></title><description><![CDATA[
<p>> I think there's a big difference between general ai (LLMs) and the troubling implementations of ai like flock ... spotify and their distortion of music ...<p>I'm curious about the distinction you're making here. If we accept mainstream uses of LLMs, such as writing online content or generating images, why is music different?<p>As for the surveillance stuff, outside some geek bubbles, it's really not something that people care about. The prevailing narrative is that crime is getting worse, and when push comes to shove, most residents want more policing, more license plate readers, etc.</p>
]]></description><pubDate>Mon, 19 Jan 2026 01:02:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46673875</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46673875</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46673875</guid></item><item><title><![CDATA[New comment by nospice in "Tired of AI, people are committing to the analog lifestyle in 2026"]]></title><description><![CDATA[
<p>> The number of knitting kits sold (an example from the article)<p>Also, "knitting kits" were not a thing for most of my life. You'd just buy knitting needles and yarn. This is not some kind of craft where you need dozens of implements.<p>The kit is pretty much a product of the TikTok / YT influencer era. Indeed, a typical kit will often contain needles, yarn, and a... link to a video you can watch:<p><a href="https://www.amazon.com/Complete-Knitting-Kit-Beginners-Accessories/dp/B0F2GVXBGH" rel="nofollow">https://www.amazon.com/Complete-Knitting-Kit-Beginners-Acces...</a></p>
]]></description><pubDate>Sun, 18 Jan 2026 20:59:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46672100</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46672100</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46672100</guid></item><item><title><![CDATA[New comment by nospice in "The Risks of AI in Schools Outweigh the Benefits, Report Says"]]></title><description><![CDATA[
<p>>> AI designed for use by children and teens should be less sycophantic and more "antagonistic"<p>> Genius. I love this idea.<p>I don't think it would really work with current tech. The sycophancy allows LLMs to <i>not</i> be right about a lot of small things without the user noticing. It also allows them to be useful in the hands of an expert by not questioning the premise and just trying their best to build on that.<p>If you instruct them to question ideas, they just become annoying and obstinate. So while it would be a great way to reduce the students' reliance on LLMs...</p>
]]></description><pubDate>Sat, 17 Jan 2026 16:28:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46659317</link><dc:creator>nospice</dc:creator><comments>https://news.ycombinator.com/item?id=46659317</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46659317</guid></item></channel></rss>