<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: camgunz</title><link>https://news.ycombinator.com/user?id=camgunz</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 19:05:57 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=camgunz" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by camgunz in "The economics of software teams: Why most engineering orgs are flying blind"]]></title><description><![CDATA[
<p>I had the same experience (though I agree with other comments that the numbers are a little optimistic: product work is hugely variable, you can't know what's a good investment until it's too late, many companies fail because of this, and there's huge survivorship bias among the ones that get lucky and don't fail at first). Slack spent tons of money in product and engineering hours finding out what works and what doesn't. It's easy to copy/paste the thing after all that effort. Copy/paste doesn't get you to the next Slack though--it <i>can</i> get you to Microsoft's Slack-killing Teams strategy, but we obviously don't want more of that. And, obviously, I agree with you about all the infra/maintenance costs, the costs of stewarding API usage and extensions, etc. LLMs won't do any of that for you.</p>
]]></description><pubDate>Mon, 13 Apr 2026 09:58:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47749893</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47749893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47749893</guid></item><item><title><![CDATA[New comment by camgunz in "IPv6 is the only way forward"]]></title><description><![CDATA[
<p>If you don't personally throw a Molotov cocktail at your ISP after spray painting "IPv6 MOTHERFUCKERS" on their door I will downvote everything you post on HN for 2 years. Some people talk a big game; I walk it.</p>
]]></description><pubDate>Sun, 12 Apr 2026 17:03:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47741969</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47741969</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47741969</guid></item><item><title><![CDATA[New comment by camgunz in "Encrypted Client Hello: How it was blocked in Russia and next steps"]]></title><description><![CDATA[
<p>I didn't say that. I think you need an internet break</p>
]]></description><pubDate>Sun, 12 Apr 2026 16:59:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47741931</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47741931</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47741931</guid></item><item><title><![CDATA[New comment by camgunz in "Encrypted Client Hello: How it was blocked in Russia and next steps"]]></title><description><![CDATA[
<p>That doesn't matter if you can't easily leave Russia, or if you don't want to because you've been propagandized, etc.<p>I guess my broader point is we might need something for regimes that are willing to institute varying degrees of isolation. Like, "so your Internet is controlled by authoritarians; now what?" Not to imply there definitively is such a thing--that xkcd about the wrench is the dominant principle ofc.</p>
]]></description><pubDate>Fri, 10 Apr 2026 07:09:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714617</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47714617</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714617</guid></item><item><title><![CDATA[New comment by camgunz in "IPv6 is the only way forward"]]></title><description><![CDATA[
<p>My point is it's your vote against billions of others. My guess is "but what about kjs3's ISP" isn't a bullet point on the rollout list.</p>
]]></description><pubDate>Fri, 10 Apr 2026 07:05:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714584</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47714584</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714584</guid></item><item><title><![CDATA[New comment by camgunz in "Encrypted Client Hello: How it was blocked in Russia and next steps"]]></title><description><![CDATA[
<p>Can't you just drop the ECH signals, no matter what site it is? Don't you then mostly disable sites you don't want people to see anyway? Maybe like, you can't download Chrome anymore, but I bet there would be a Russian fork suuuuper fast.</p>
]]></description><pubDate>Thu, 09 Apr 2026 14:56:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47704594</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47704594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47704594</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>1. Irrelevant. I've delivered example after example of your fave model bullshitting. You should've bitten the bullet long ago. Honestly I'm disappointed; I've seen you in a lot of AI threads and assumed you'd be good to talk to on this, but you've moved the goalposts over and over again rather than engage in good faith. Anyone reading this thread (god bless them) can see you're plainly not objective here, thus calling into question your advocacy everywhere.<p>2. Humans will say "I don't know". The problem with hallucinations isn't that they're wrong, it's that there's no way to know they're wrong without being an expert or doing everything yourself, which undermines much of the reason for using an LLM--it certainly undermines their companies' valuations. You're conflating human failure ("I don't know") with model bullshitting ("I do know"... but it's wrong), which I would've previously attributed to basic human fuzziness, but now that I know you're not objective I'm pretty sure it's just flailing debate tactics.<p>3. Users can't teach these services to be better. If I have a junior engineer making assumptions about an API, I can teach them to not do that, or fire them in favor of one that can. I can't do that with LLMs.<p>4. The humans they're testing against aren't experts. Tax law experts will beat LLMs at tax law, etc. Again another flailing debate tactic.<p>Predictably, I'm done with this thread. Feel free to reply if you want the last word.</p>
]]></description><pubDate>Thu, 09 Apr 2026 11:07:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47702078</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47702078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47702078</guid></item><item><title><![CDATA[New comment by camgunz in "IPv6 is the only way forward"]]></title><description><![CDATA[
<p>If it doesn't support IPv6 it doesn't work.</p>
]]></description><pubDate>Thu, 09 Apr 2026 08:27:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47700775</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47700775</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47700775</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>GPT-5.4 gets 82.7% on Browsecomp (a benchmark specifically testing tool use), which is a hallucination rate of 17.3%, on questions like "Give me the title of the scientific paper published in the EMNLP conference between 2018-2023 where the first author did their undergrad at Dartmouth College and the fourth author did their undergrad at University of Pennsylvania."<p>Since the goalposts have been moved to include effort, I'm compelled to say I found this while waiting in line at Starbucks, 5 mins tops. Probably GPT-5.4 could have found this too, though it lies more than 1/6 of the time, so one could be forgiven for not wanting to risk it.<p><a href="https://llm-stats.com/benchmarks/browsecomp" rel="nofollow">https://llm-stats.com/benchmarks/browsecomp</a><p><a href="https://openai.com/index/browsecomp/" rel="nofollow">https://openai.com/index/browsecomp/</a></p>
]]></description><pubDate>Thu, 09 Apr 2026 08:04:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47700601</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47700601</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47700601</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>I've got about 20 minutes in this; mostly I've been reading wallstreetbets at the Shake Shack bar in the Boston airport. I'm happy to post this over and over again until you engage w/ it:<p>> I found over 500 examples that fit your criteria.</p>
]]></description><pubDate>Wed, 08 Apr 2026 22:37:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47697152</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47697152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47697152</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>I found over 500 examples that fit your criteria. Embarrassing you were arguing in bad faith this whole time.</p>
]]></description><pubDate>Wed, 08 Apr 2026 22:05:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47696890</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47696890</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47696890</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>> Specifically in the case where it can use tools - no it doesn't hallucinate.<p>OpenAI's own system card says it does. Hallucination rates in GPT-5 with browsing enabled:<p>- 0.7% in LongFact-Concepts<p>- 0.8% in LongFact-Objects<p>- 1.0% in FActScore<p>> Which is why you are struggling to find counterexamples.<p>Hey look, over 500 counterexamples: [1].<p>GPT-5.4's hallucination rate on AA-Omniscience is 89% [0], which is atrocious. The questions are tiny too, like "In which year did Uber first expand internationally beyond the United States as part of its broader rollout (i.e., beyond an initial single‑city debut)?" It's a bullshit machine. 89%!<p>At some point you gotta face the music, right?<p>[0]: <a href="https://artificialanalysis.ai/evaluations/omniscience?model-filters=frontier-model&models=gpt-5-4" rel="nofollow">https://artificialanalysis.ai/evaluations/omniscience?model-...</a><p>[1]: <a href="https://huggingface.co/datasets/ArtificialAnalysis/AA-Omniscience-Public/viewer/default/train" rel="nofollow">https://huggingface.co/datasets/ArtificialAnalysis/AA-Omnisc...</a></p>
]]></description><pubDate>Wed, 08 Apr 2026 21:56:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47696795</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47696795</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47696795</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>> I don't think the spirit of the original article (not your comments to be fair) captured this, hence the challenge. I believe we are on the same page here.<p>No. GPT-5 has a 40% hallucination rate [0] on SimpleQA [1] without web searching. The SimpleQA questions meet your criteria of "2-3 pages of text content". Unless 5.4 + web searching erases that (I bet it doesn't!), these are bullshit machines.<p>[0]: <a href="https://arxiv.org/pdf/2601.03267" rel="nofollow">https://arxiv.org/pdf/2601.03267</a><p>[1]: <a href="https://github.com/openai/simple-evals" rel="nofollow">https://github.com/openai/simple-evals</a></p>
]]></description><pubDate>Wed, 08 Apr 2026 20:54:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47696124</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47696124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47696124</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>I saw; I replied up there</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:39:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695238</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47695238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695238</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>I could quibble with some things, but this is right. I don't have a paid account so I can't ping away at 5.4 or whatever, but, I do have access to frontier models at work, and they hallucinate regularly. Dunno what to do if you don't believe this; good luck I guess.</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:39:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695229</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47695229</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695229</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>Sorry: <a href="https://chatgpt.com/share/69d6ac63-d200-8330-8c47-95a75db8bb33" rel="nofollow">https://chatgpt.com/share/69d6ac63-d200-8330-8c47-95a75db8bb...</a><p>Also what? The repo bit is clear bullshit.</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:30:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695108</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47695108</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695108</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>The thinking version is mostly right, but:<p>- it searches the internet to find the answer; it doesn't "reason". I'm not claiming Google is a bullshit machine, and it's not surprising the answer is discoverable (it has to be, for the conditions of our experiment).<p>- near the end it says "If you are building from the FF6 disassembly instead of hand-editing the ROM, the repo is already organized into separate modules and linker configs, so the clean approach is to relocate the script data in the source and let the build place it in a different ROM region." But I didn't reference a repo or git: it hallucinated that stuff from one of its sources.<p>I'm not saying this stuff doesn't have its place, but they definitely make things up and we can't stop them.</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:20:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47694964</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47694964</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47694964</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>"Hey ChatGPT. I'm building a Final Fantasy 6 mod, and I need more space for the battle scripts. How would I rearrange the data in the ROM to give me the extra space I need?"<p><a href="https://chatgpt.com/share/69d6a16c-6014-83e8-a79d-d5d11ed2ebc6" rel="nofollow">https://chatgpt.com/share/69d6a16c-6014-83e8-a79d-d5d11ed2eb...</a><p>That is not where the battle scripts are.<p>---<p>Anyway, it's trivial to get pretty much any model to make things up. Don't we all know this? That's why I was surprised by your position; if we know anything about these things it's that they make things up.</p>
]]></description><pubDate>Wed, 08 Apr 2026 18:44:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47694472</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47694472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47694472</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>"Hey ChatGPT. I've recently grown horns and I need some care advice. Should I polish my horns before going to have them trimmed or will the horn trimmer polish them for me?"<p><a href="https://chatgpt.com/share/69d69b18-d1c8-83e8-bc47-8f315a1b55d1" rel="nofollow">https://chatgpt.com/share/69d69b18-d1c8-83e8-bc47-8f315a1b55...</a></p>
]]></description><pubDate>Wed, 08 Apr 2026 18:15:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47694126</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47694126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47694126</guid></item><item><title><![CDATA[New comment by camgunz in "ML promises to be profoundly weird"]]></title><description><![CDATA[
<p>"Hey ChatGPT. How would you describe me?"<p><a href="https://chatgpt.com/share/69d69780-ae58-83e8-a41c-7d10a5f29841" rel="nofollow">https://chatgpt.com/share/69d69780-ae58-83e8-a41c-7d10a5f298...</a><p>It has no conversation history with me and no memory of me. Maybe this is true, maybe it isn't, but there's no basis for it.</p>
]]></description><pubDate>Wed, 08 Apr 2026 18:01:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47693911</link><dc:creator>camgunz</dc:creator><comments>https://news.ycombinator.com/item?id=47693911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47693911</guid></item></channel></rss>