<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tedsanders</title><link>https://news.ycombinator.com/user?id=tedsanders</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 09:52:10 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tedsanders" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tedsanders in "Stanford report highlights growing disconnect between AI insiders and everyone"]]></title><description><![CDATA[
<p>> Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle.<p>According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again. What data source are you looking at?<p>[1] <a href="https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE" rel="nofollow">https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE</a></p>
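<p>For anyone who wants to pull the series themselves, here's a minimal sketch. It assumes FRED's public fredgraph.csv export and pandas, so treat it as illustrative rather than an official client:</p><pre><code># Sketch: fetch the Indeed software-development postings index from FRED.
# Assumes the public fredgraph.csv export endpoint (not an official API).
import pandas as pd

url = "https://fred.stlouisfed.org/graph/fredgraph.csv?id=IHLIDXUSTPSOFTDEVE"
df = pd.read_csv(url, na_values=".", parse_dates=[0])
df.columns = ["date", "postings_index"]

# Eyeball the recent trend (roughly flat, ticking up lately)
print(df[df["date"] >= "2023-01-01"].tail(12))
</code></pre>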
]]></description><pubDate>Mon, 13 Apr 2026 22:09:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758540</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47758540</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758540</guid></item><item><title><![CDATA[New comment by tedsanders in "ChatGPT Pro now starts at $100/month"]]></title><description><![CDATA[
<p>Following up - I was wrong about 10x/40x. Here's how it actually works:<p>$20 = 1x<p>$100 = 5x (temporarily 10x, for Codex only, until May 31st)<p>$200 = 20x<p>We'll send out new tweets and clarify our pricing page.</p>
]]></description><pubDate>Mon, 13 Apr 2026 06:42:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47748482</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47748482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47748482</guid></item><item><title><![CDATA[New comment by tedsanders in "Exploiting the most prominent AI agent benchmarks"]]></title><description><![CDATA[
<p>> I remember the gpt-5 benchmarks and how wildly inaccurate they were data-wise. Linking one[0] that I found so that other people can remember what I am talking about. I remember some data being completely misleading or some reaching more than 100% (iirc)<p>Yeah, I found that slide very embarrassing. It wasn't intentionally inaccurate or misleading - just a design error made right before we went live. All the numbers printed on that slide were correct, and there was no problem in terms of research accuracy, data handling, or reward hacking - but a single bar's height was wrong, accidentally set to its neighbor's value. Back then, we in the research team would generate data and graphs, then hand them off to a separate design team, who remade the graphs in our brand style. After the GPT-5 launch, with its multiple embarrassingly bad graphs, I wrote an internal library so that researchers could generate graphs in our brand style directly, without the handoff (a sketch of the idea is at the end of this comment). Since then, our graphs have been much better.<p>I don't think it's unfair to assume our sloppiness in graphs translates to sloppiness in eval results. But they are different groups of people working on different timelines, so I hope it's at least plausible that our numbers are pretty honest, even if our design process occasionally results in sloppy graphs.<p>Regarding the DoW deal, I don't want to comment too publicly. I also can't say anything with confidence, as I wasn't part of the deal in any way, shape, or form. My perception from what I have read and heard is that both Anthropic and OpenAI have good intentions, both have loosened their prior policies over time to allow usage by the US military, and both have red lines to prohibit abuse by the US military. One place they differ is in the mechanisms employed to enforce those red lines (e.g., usage policies vs refusals vs human oversight). Each company asserts its methods are stronger than the other's, so I think we have to make our own judgments there. Accounts from the parties involved in the negotiations also conflict, so I don't think anyone's account can be trusted 100%. With that caveat, I thought this article on the DoW's POV was interesting (it seems to support the notion that the breakdown wasn't over differing red lines, especially since they almost managed to salvage the deal): <a href="https://www.piratewires.com/p/inside-pentagon-anthropic-deal-culture-clash" rel="nofollow">https://www.piratewires.com/p/inside-pentagon-anthropic-deal...</a><p>Lastly, I hope it's obvious to everyone that Anthropic is not at all a supply-chain risk, and the threats there were incredibly disappointing. I support them 100% and I'm glad to see them unhurt by the empty threats.</p>
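<p>To make the "graphs in our brand style, without the handoff" idea concrete, here's a minimal sketch of the pattern, assuming matplotlib. The style values and function name are invented for illustration - this is not OpenAI's actual library:</p><pre><code># Illustrative pattern: a shared helper applies a house style and draws
# bar heights straight from the data, so no manual redraw step can silently
# give a bar its neighbor's value. Style values here are made up.
import matplotlib.pyplot as plt

BRAND_STYLE = {
    "figure.facecolor": "white",
    "axes.spines.top": False,
    "axes.spines.right": False,
    "axes.titlesize": 14,
}

def brand_bar_chart(labels, values, title=""):
    with plt.rc_context(BRAND_STYLE):
        fig, ax = plt.subplots()
        bars = ax.bar(labels, values)
        ax.bar_label(bars, fmt="%.1f")  # print each bar's true value on it
        ax.set_title(title)
        return fig
</code></pre>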
]]></description><pubDate>Sun, 12 Apr 2026 19:28:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47743444</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47743444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47743444</guid></item><item><title><![CDATA[New comment by tedsanders in "Exploiting the most prominent AI agent benchmarks"]]></title><description><![CDATA[
<p>I work at OpenAI and I really don't find this to be the case.<p>We're pretty diligent about applying search blocklists, closing hacking loopholes, and reading model outputs to catch unanticipated hacks. If we wanted to, we could close our eyes, plug our ears, and report higher scores for Terminal-bench, SWE-bench, etc. - scores that technically comply with the reference implementation but aren't aligned with real value delivered to users - but we don't do this. My impression is that Anthropic and the other labs are similar. E.g., in the Sonnet 4.6 system card they use a model to detect potential contamination and manually score those outputs as 0 if human review agrees there was contamination. If all the labs cared about was marketing material, it would be quite easy not to do this extra work.<p>There are a ton of other games you can play with evals too (e.g., test 100 different model checkpoints or run secret prompt optimization to steer away from failing behaviors), but by and large what I've seen inside OpenAI is trustworthy.<p>I won't say everything is 100% guaranteed bulletproof, as we could always hire 100 more SWEs to improve hack-detection systems and manually read outputs. Mistakes do happen, in both directions. Plus there's always going to be a bit of unavoidable multiple-model testing bias that's hard to precisely adjust for. Also, there are legitimate gray areas, like what to do if your model asks genuinely useful clarifying questions that the original reference implementation scores as 0s, despite there being no instruction that clarifying questions are forbidden. Like, if you tell a model not to ask clarifying questions, is that cheating or is that patching the eval to better align it with user value?</p>
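<p>For concreteness, the Sonnet 4.6 pattern described above ("model flags potential contamination, human review confirms, confirmed runs score 0") looks roughly like this. The function names are hypothetical stand-ins, not any lab's real API:</p><pre><code># Sketch of "flag with a model, zero out on human confirmation".
# classify_contamination and human_review are hypothetical callables.
def adjusted_scores(transcripts, raw_scores, classify_contamination, human_review):
    final = []
    for transcript, score in zip(transcripts, raw_scores):
        if classify_contamination(transcript):  # cheap model-based flag
            if human_review(transcript):        # human agrees it was contaminated
                score = 0.0                     # count the run as a failure
        final.append(score)
    return final
</code></pre>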
]]></description><pubDate>Sun, 12 Apr 2026 00:57:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47735308</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47735308</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47735308</guid></item><item><title><![CDATA[New comment by tedsanders in "ChatGPT Pro now starts at $100/month"]]></title><description><![CDATA[
<p>I'm honestly not sure, as I don't work on it. My understanding from afar is:<p>- There was a 2x promotion in March that ended on April 2, so limits have felt tighter since then<p>- We sometimes reset rate limits after bugs or milestones or because Tibo feels generous, which can make some days feel different from others (resets are typically announced here: <a href="https://x.com/thsottiaux" rel="nofollow">https://x.com/thsottiaux</a>)<p>- Recently, Plus was tweaked to have a smaller 5h limit but an increased weekly limit<p>- Lastly, as part of the new Pro launch, the $100 & $200 Pro tiers are getting a 2x promotion, meaning they are temporarily 10x/40x instead of 5x/20x<p>I've asked our team to clarify the pricing page. I agree it's not clear.</p>
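<p>To make the "smaller 5h limit but an increased weekly limit" mechanics concrete, here's a purely illustrative dual-window limiter. The budget numbers and names are invented, not ChatGPT's real limits:</p><pre><code># Illustrative: a request is admitted only if it fits under BOTH windows.
# Budgets are made-up numbers, not real plan limits.
import time

FIVE_HOURS = 5 * 3600
ONE_WEEK = 7 * 24 * 3600

class DualWindowLimiter:
    def __init__(self, limit_5h=50, limit_week=500):
        self.limit_5h = limit_5h
        self.limit_week = limit_week
        self.events = []  # timestamps of admitted requests

    def allow(self, now=None):
        now = time.time() if now is None else now
        in_5h = sum(1 for t in self.events if FIVE_HOURS > now - t)
        in_week = sum(1 for t in self.events if ONE_WEEK > now - t)
        if self.limit_5h > in_5h and self.limit_week > in_week:
            self.events.append(now)
            return True
        return False
</code></pre>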
]]></description><pubDate>Fri, 10 Apr 2026 06:13:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714259</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47714259</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714259</guid></item><item><title><![CDATA[New comment by tedsanders in "ChatGPT Pro now starts at $100/month"]]></title><description><![CDATA[
<p>All good, I interpreted it as postulation and not accusation. :)<p>I do like the job! Much more organic than yanking tickets, though I'm on the model-training side of things rather than the product side. It's always a balance between short-term sprints patching bad behaviors for the next model and long-term investments in infra and science that make future work easier. Sometimes the negative press gets to me a bit (it's a very different feeling than 2022 or 2023), but my goal is just to make the most useful product I can for people. It's been wild how much Codex has already changed my day-to-day work; I'm so curious to see what it looks like in 2030 or 2040.</p>
]]></description><pubDate>Thu, 09 Apr 2026 22:53:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47711341</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47711341</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47711341</guid></item><item><title><![CDATA[New comment by tedsanders in "ChatGPT Pro now starts at $100/month"]]></title><description><![CDATA[
<p>Nope, it's just that a lot of people (especially those using Codex) asked us for a medium-sized $100 plan. $20 felt too restrictive and $200 felt like a big jump.<p>Pricing strategy is always a bit of an art, without a perfect optimum for everyone:<p>- pay-per-token makes every query feel stressful<p>- a single plan overcharges light users and annoyingly blocks heavy users<p>- a zillion plans are confusing / annoying to navigate and change<p>This change mostly just adds a medium-sized plan for people doing medium-sized amounts of work. People were asking for this, and we're happy to deliver.<p>(I work at OpenAI.)</p>
]]></description><pubDate>Thu, 09 Apr 2026 18:35:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47707722</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47707722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47707722</guid></item><item><title><![CDATA[New comment by tedsanders in "ChatGPT won't let you type until Cloudflare reads your React state"]]></title><description><![CDATA[
<p>For what it's worth, the big AI companies do have opt-out mechanisms for scraping and search.<p>OpenAI documents how to opt out of scraping here: <a href="https://developers.openai.com/api/docs/bots" rel="nofollow">https://developers.openai.com/api/docs/bots</a><p>Anthropic documents how to opt out of scraping here: <a href="https://privacy.claude.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler" rel="nofollow">https://privacy.claude.com/en/articles/8896518-does-anthropi...</a><p>I'm not sure if Gemini lets you opt out without also delisting you from Google search rankings.</p>
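<p>In practice, both pages boil down to robots.txt rules. A typical opt-out looks like the sketch below - check the linked docs for the current user-agent tokens, since the list changes:</p><pre><code># robots.txt - block AI training crawlers while leaving ordinary search alone
# (agent names as commonly documented; verify against the links above)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
</code></pre>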
]]></description><pubDate>Mon, 30 Mar 2026 07:35:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47571500</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47571500</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47571500</guid></item><item><title><![CDATA[New comment by tedsanders in "ChatGPT won't let you type until Cloudflare reads your React state"]]></title><description><![CDATA[
<p>It's documented here: <a href="https://developers.openai.com/api/docs/bots" rel="nofollow">https://developers.openai.com/api/docs/bots</a></p>
]]></description><pubDate>Mon, 30 Mar 2026 07:26:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47571451</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47571451</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47571451</guid></item><item><title><![CDATA[New comment by tedsanders in "Astral to Join OpenAI"]]></title><description><![CDATA[
<p>I work at OpenAI. Software developers are not obsoleted by Codex or Claude Code, nor will they be soon.<p>For our teams, Codex is a massive productivity booster that actually increases the value of each dev. If you check our hiring page, you’ll see we are still hiring aggressively. Our ambitions are bigger than our current workforce, and we continue to pay top dollar for talented devs who want to join us in transforming how silicon chips provide value to humans.<p>Akin to how compilers reduced the demand for hand-written assembly but increased the demand for software engineering, I see Codex reducing the demand for hand-typed code but increasing the demand for software engineering. Codex can read and write code faster than you or me, but it still lacks a lot of the intelligence, wisdom, and context needed to do whole jobs autonomously.</p>
]]></description><pubDate>Thu, 19 Mar 2026 15:48:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47441452</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47441452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47441452</guid></item><item><title><![CDATA[New comment by tedsanders in "GPT-5.4"]]></title><description><![CDATA[
<p>In the text, we did share one hallucination benchmark: claim-level errors fell by 33%, and responses containing at least one error fell by 18%, on a set of error-prone ChatGPT prompts we collected (though of course the rate will vary a lot across different types of prompts).<p>Hallucinations are the #1 problem with language models and we are working hard to keep bringing the rate down.<p>(I work at OpenAI.)</p>
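<p>Since the two percentages measure different things, here's a toy illustration of the distinction, with made-up numbers:</p><pre><code># Toy illustration of claim-level vs response-level error rates (made-up data).
# A response counts toward the response-level rate if ANY claim in it is wrong.
responses = [
    {"claims": 10, "wrong": 0},
    {"claims": 8,  "wrong": 1},
    {"claims": 5,  "wrong": 2},
]

total_claims = sum(r["claims"] for r in responses)
wrong_claims = sum(r["wrong"] for r in responses)
bad_responses = sum(1 for r in responses if r["wrong"] > 0)

print(f"claim-level error rate:    {wrong_claims / total_claims:.1%}")    # 3/23
print(f"response-level error rate: {bad_responses / len(responses):.1%}")  # 2/3
</code></pre>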
]]></description><pubDate>Thu, 05 Mar 2026 20:34:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47266925</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47266925</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47266925</guid></item><item><title><![CDATA[New comment by tedsanders in "GPT-5.4"]]></title><description><![CDATA[
<p>Yeah, long context vs compaction is always an interesting tradeoff. More information isn't always better for LLMs, as each token adds distraction, cost, and latency. There's no single optimum for all use cases.<p>For Codex, we're making 1M context experimentally available, but we're not making it the default experience for everyone, as from our testing we think that shorter context plus compaction works best for most people. If anyone here wants to try out 1M, you can do so by overriding `model_context_window` and `model_auto_compact_token_limit`.<p>Curious to hear if people have use cases where they find 1M works much better!<p>(I work at OpenAI.)</p>
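<p>For anyone who wants to try the override, here's a sketch of what it looks like - I'm assuming the Codex CLI's TOML config file, and the values are illustrative:</p><pre><code># config.toml (location/format assumed; keys are the overrides named above)
model_context_window = 1000000           # opt in to the experimental 1M window
model_auto_compact_token_limit = 900000  # illustrative: compact near the ceiling
</code></pre>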
]]></description><pubDate>Thu, 05 Mar 2026 18:41:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47265466</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47265466</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47265466</guid></item><item><title><![CDATA[New comment by tedsanders in "GPT‑5.3 Instant"]]></title><description><![CDATA[
<p>Yeah, for a while ChatGPT Plus has been powered by two series of models under the hood.<p>One series is the Instant series, which is faster and more tuned to ChatGPT, but less accurate.<p>The second series is the Thinking series, which is more accurate and more tuned to professional knowledge work, but slower (because it uses more reasoning tokens).<p>We'd also prefer a simpler experience with just one option, but picking just one would pull back the Pareto frontier for some group of people/preferences. So for now we continue to serve two models, with manual control for people who want to choose and an imperfect auto switcher for people who don't want to be bothered. Could change down the road - we'll see.<p>(I work at OpenAI.)</p>
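<p>Purely to illustrate the tradeoff (this is not OpenAI's actual routing logic), an auto switcher between a fast model and an accurate one might look like:</p><pre><code># Invented heuristic router between a fast "instant" model and a slower,
# more accurate "thinking" model. NOT OpenAI's real switcher - just a sketch.
def pick_model(prompt, user_pref=None):
    if user_pref in ("instant", "thinking"):  # manual control always wins
        return user_pref
    looks_hard = any(w in prompt.lower() for w in ("prove", "debug", "analyze"))
    long_task = len(prompt.split()) > 200
    # accuracy-sensitive or long work goes to the reasoning model
    return "thinking" if (looks_hard or long_task) else "instant"
</code></pre>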
]]></description><pubDate>Tue, 03 Mar 2026 19:11:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47237239</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47237239</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47237239</guid></item><item><title><![CDATA[New comment by tedsanders in "OpenAI agrees with Dept. of War to deploy models in their classified network"]]></title><description><![CDATA[
<p>The supply chain risk stuff is bogus. Anthropic is a great, trustworthy company, and no enemy of America. I genuinely root for Anthropic, because its success benefits consumers and all the charities that Anthropic employees have pledged equity toward.<p>Whether the clear mistreatment of Anthropic means that all other companies should refrain from doing business with the US government isn’t as clear to me. I can see arguments on both sides, and I acknowledge it’s probably impossible to eliminate all possible bias within myself.<p>One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) were made public so that people can judge for themselves, without having to speculate about who’s being honest and who’s lying.</p>
]]></description><pubDate>Sat, 28 Feb 2026 09:48:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47192974</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47192974</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47192974</guid></item><item><title><![CDATA[New comment by tedsanders in "OpenAI agrees with Dept. of War to deploy models in their classified network"]]></title><description><![CDATA[
<p>I agree it makes little sense, and I think if all players were rational it never would have played out this way. My understanding is that there are other reasons (i.e., beyond differing red lines) that made the OpenAI deal more palatable, but unfortunately the information shared with me has not been made public so I won't comment on specifics. I know that's unsatisfying, but I hope it serves as some very mild evidence that it's not all a big fat lie.</p>
]]></description><pubDate>Sat, 28 Feb 2026 06:58:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47191419</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47191419</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47191419</guid></item><item><title><![CDATA[New comment by tedsanders in "OpenAI agrees with Dept. of War to deploy models in their classified network"]]></title><description><![CDATA[
<p>I'm an OpenAI employee and I'll go out on a limb with a public comment. I agree AI shouldn't be used for mass surveillance or autonomous weapons. I also think Anthropic has been treated terribly and has acted admirably. My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons, and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples). Given this understanding, I don't see why I should quit. If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit, but so far I haven't seen any evidence that's the case.</p>
]]></description><pubDate>Sat, 28 Feb 2026 06:27:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47191196</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47191196</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47191196</guid></item><item><title><![CDATA[New comment by tedsanders in "Detecting and Preventing Distillation Attacks"]]></title><description><![CDATA[
<p>The people who would otherwise be affected by spam calls, spam messages, ransomware / computer viruses, fake / deceptive websites, or bioengineered viruses.<p>The risk of these could plausibly increase in a world with powerful AI. Obviously the risk isn't high now, and there are benefits to trade off against these costs, but all powerful technologies have costs.</p>
]]></description><pubDate>Mon, 23 Feb 2026 20:45:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47128537</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47128537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47128537</guid></item><item><title><![CDATA[New comment by tedsanders in "Detecting and Preventing Distillation Attacks"]]></title><description><![CDATA[
<p>One consequence of creating a country of geniuses in a data center is that you now have a country of geniuses who can potentially help your competitors catch up on research, coding, and data labeling. It's a tough problem for the industry and, more importantly, for long-term safety.<p>We're obviously nowhere close now, but if we get to a world where AI becomes powerful, and where powerful AI can be used to create misaligned powerful AI, you may have to start regulating powerful AI like refined-uranium processing tech, which is regulated more heavily than refined uranium itself.</p>
]]></description><pubDate>Mon, 23 Feb 2026 18:33:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47126618</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47126618</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47126618</guid></item><item><title><![CDATA[Why SWE-bench Verified no longer measures frontier coding capabilities]]></title><description><![CDATA[
<p>Article URL: <a href="https://openai.com/index/why-we-no-longer-evaluate-swe-bench-verified/">https://openai.com/index/why-we-no-longer-evaluate-swe-bench-verified/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47126205">https://news.ycombinator.com/item?id=47126205</a></p>
<p>Points: 10</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 23 Feb 2026 18:08:55 +0000</pubDate><link>https://openai.com/index/why-we-no-longer-evaluate-swe-bench-verified/</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47126205</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47126205</guid></item><item><title><![CDATA[New comment by tedsanders in "Uncovering insiders and alpha on Polymarket with AI"]]></title><description><![CDATA[
<p>Bribing employees to disclose confidential information entrusted to them is neither kosher nor wholesome. I consider corporate insider trading on these markets to be analogous - if you're an employee and you trade, you are selling your employer's info for money. Nearly every employer would fire employees caught giving away confidential information for personal bribes.<p>In the stock market, Matt Levine likes to say that insider trading is about theft, not fairness. You can be prosecuted for merely sharing info with a friend on a golf course who then proceeds to trade. Your crime is not trading (you didn't even trade), but misappropriating information you were entrusted with and not authorized to sell.</p>
]]></description><pubDate>Fri, 20 Feb 2026 22:09:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094726</link><dc:creator>tedsanders</dc:creator><comments>https://news.ycombinator.com/item?id=47094726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094726</guid></item></channel></rss>