<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ceh123</title><link>https://news.ycombinator.com/user?id=ceh123</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 16:11:24 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ceh123" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ceh123 in "OpenAI ad partner now selling ChatGPT ad placements based on “prompt relevance”"]]></title><description><![CDATA[
<p>For now.</p>
]]></description><pubDate>Mon, 20 Apr 2026 23:24:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47842443</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=47842443</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47842443</guid></item><item><title><![CDATA[Global Intelligence Crisis – Citadel Securities' Response]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/">https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47172754">https://news.ycombinator.com/item?id=47172754</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 26 Feb 2026 22:16:05 +0000</pubDate><link>https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=47172754</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47172754</guid></item><item><title><![CDATA[New comment by ceh123 in "“Erdos problem #728 was solved more or less autonomously by AI”"]]></title><description><![CDATA[
<p>Context: I finished a PhD in pure math in 2025, transitioned to being a data scientist, and now do ML/stats research on the side.<p>For me, deep research tools have been essential for quick lit reviews of research ideas now that I'm changing fields. They have also been quite helpful with routine math that I'm less familiar with but that is relatively established (like standard random matrix theory results from ~5 years ago).<p>The spectrum of utility feels pretty aligned with what you might expect: routine programming > applied ML research > stats/applied math research > pure math research.<p>I will say that ~1 year ago they were still useless for my math research area, but things have been changing quickly.</p>
]]></description><pubDate>Fri, 09 Jan 2026 23:51:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46561041</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=46561041</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46561041</guid></item><item><title><![CDATA[New comment by ceh123 in "The Mighty Simplex (2023)"]]></title><description><![CDATA[
<p>Exactly! It's n+1 points in n dimensions (when finite). Another way to think about it (the way I know it, because it extends to general Banach spaces and not just n-dimensional ones) is that each point inside is the <i>unique</i> weighted average (convex combination) of the extreme points (corners). So in 2D, if you have a square, you can get the middle point by averaging all four corners or by averaging two opposing corners, so it's not a simplex.</p>
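<p>A tiny numerical sketch of the uniqueness criterion above: the square's center has more than one corner-weighting, which is exactly why the square fails to be a simplex. The points and weights here are just an illustration:

```python
def convex_combo(points, weights):
    # Weighted average of 2D points; weights must sum to 1.
    assert abs(sum(weights) - 1) < 1e-9
    x = sum(w * p[0] for w, p in zip(weights, points))
    y = sum(w * p[1] for w, p in zip(weights, points))
    return (x, y)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]

# Two different weightings of the square's corners give the same point:
center_a = convex_combo(square, [0.25, 0.25, 0.25, 0.25])  # all four corners
center_b = convex_combo(square, [0.5, 0, 0.5, 0])          # two opposite corners
print(center_a, center_b)  # both (0.5, 0.5) -> representation is not unique
```

In a triangle (a genuine 2-simplex), by contrast, every interior point has exactly one such weighting.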
]]></description><pubDate>Sat, 15 Nov 2025 19:06:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45939768</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=45939768</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45939768</guid></item><item><title><![CDATA[New comment by ceh123 in "The Mighty Simplex (2023)"]]></title><description><![CDATA[
<p>On the topic of simplices! I did my PhD in dynamical systems, and the space of invariant measures [0] is (in the compact setting) always a simplex whose extreme points are the ergodic measures. Because of this, you can often assume your system is ergodic, do your work there, and frequently generalize to the non-ergodic case (through ergodic decomposition).<p>But the real thing I wanted to mention here is the Poulsen simplex [1]. This is the unique Choquet simplex [2] whose extreme points are dense. It's like an uncountably-infinite-dimensional triangle in which, no matter where you are inside, you're arbitrarily close to a corner. It's my favorite shape: absolutely wild and impossible to visualize (even though I worked with it daily for years!)<p>[0] <a href="https://en.wikipedia.org/wiki/Invariant_measure" rel="nofollow">https://en.wikipedia.org/wiki/Invariant_measure</a><p>[1] <a href="https://eudml.org/doc/74350" rel="nofollow">https://eudml.org/doc/74350</a><p>[2] <a href="https://en.wikipedia.org/wiki/Choquet_theory" rel="nofollow">https://en.wikipedia.org/wiki/Choquet_theory</a></p>
]]></description><pubDate>Sat, 15 Nov 2025 17:19:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45938938</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=45938938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45938938</guid></item><item><title><![CDATA[New comment by ceh123 in "Source-Optimal Training Is Transfer-Suboptimal"]]></title><description><![CDATA[
<p>This paper is a theoretical analysis showing that the ridge regularization strength that optimizes the source task almost never optimizes transfer performance. Interestingly, in high-SNR regimes (low noise) the optimal regularization for pre-training is higher than the task-specific optimum, while in low-SNR regimes (high noise) it's better to regularize less than you would if optimizing for that task alone.<p>Although the proofs are for (L2-SP) ridge regression, experiments with an MLP on MNIST and a CNN on CIFAR-10 suggest the SNR-regularization relationship persists in non-linear networks.</p>
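<p>For readers unfamiliar with L2-SP: it is ridge regression where the penalty shrinks the weights toward the source-task (pre-trained) solution rather than toward zero. A minimal sketch, with illustrative data and a hypothetical source solution w0 (none of the numbers come from the paper):

```python
import numpy as np

# Synthetic target-task data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

w0 = np.array([0.8, -1.5, 0.0])  # hypothetical source-task weights
lam = 1.0                        # regularization strength

# L2-SP objective: ||Xw - y||^2 + lam * ||w - w0||^2.
# Setting the gradient to zero gives the closed form
#   w = (X^T X + lam*I)^{-1} (X^T y + lam*w0),
# i.e. ordinary ridge with the shrinkage target moved from 0 to w0.
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y + lam * w0)
```

The paper's point, in these terms, is that the lam minimizing source-task error is generally not the lam that makes w0 most useful for the target task.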
]]></description><pubDate>Thu, 13 Nov 2025 15:28:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45916024</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=45916024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45916024</guid></item><item><title><![CDATA[Source-Optimal Training Is Transfer-Suboptimal]]></title><description><![CDATA[
<p>Article URL: <a href="https://arxiv.org/abs/2511.08401">https://arxiv.org/abs/2511.08401</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45916020">https://news.ycombinator.com/item?id=45916020</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 13 Nov 2025 15:27:59 +0000</pubDate><link>https://arxiv.org/abs/2511.08401</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=45916020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45916020</guid></item><item><title><![CDATA[New comment by ceh123 in "Claude for Excel"]]></title><description><![CDATA[
<p>I think my main point is that just because an LLM can lie doesn't mean an LLM-generated slide is fraud. It could very easily be correct and verified/certified by the accountant, and therefore not fraud. That the text was first generated by an LLM doesn't make it fraud.<p>That said, this will surely lead to more incidental fraud (and deliberate fraud), and I'm sure it already has. Would be curious to see the prevalence of em-dashes in 10-Ks over the years.</p>
]]></description><pubDate>Tue, 28 Oct 2025 01:06:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45728242</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=45728242</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45728242</guid></item><item><title><![CDATA[New comment by ceh123 in "Claude for Excel"]]></title><description><![CDATA[
<p>US v. Simon (1969); see [0] for a review.<p>It establishes that accountants who certify financials are liable if the financials are incorrect. In particular, if they have reason to believe the numbers might not be accurate and certify anyway, they are liable. And at this stage of development it's pretty clear that you need to double-check LLM-generated numbers.<p>Obviously I have no clue whether this would hold up in today's courts, but I also wasn't making a legal claim before. I'm not a lawyer and I'm not pretending to be one.<p>[0] <a href="https://scholarship.law.stjohns.edu/cgi/viewcontent.cgi?article=1363&context=lawreview" rel="nofollow">https://scholarship.law.stjohns.edu/cgi/viewcontent.cgi?arti...</a></p>
]]></description><pubDate>Tue, 28 Oct 2025 00:56:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45728196</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=45728196</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45728196</guid></item><item><title><![CDATA[New comment by ceh123 in "Claude for Excel"]]></title><description><![CDATA[
<p>Presenting false data to investors is fraud, regardless of how the data was generated. In fact, humans are quite good at generating plausible-looking data; that doesn't make human-generated spreadsheets fraud.<p>On the other hand, presenting truthful data to investors is distinctly not fraud, and again that does not depend on the generation method.</p>
]]></description><pubDate>Mon, 27 Oct 2025 20:48:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45726098</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=45726098</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45726098</guid></item><item><title><![CDATA[New comment by ceh123 in "Sheafification – The optimal path to mathematical mastery: The fast track (2022)"]]></title><description><![CDATA[
<p>I would say even one of these topics would take most PhDs at least 2-3 years to &#8220;master&#8221;. At the end of my math PhD (5 years, 3 focused solely on my research area) I felt I had <i>just</i> scratched the surface of mastery in my subfield, and that's with 3 published papers.<p>You're right, though: defining &#8220;mastery&#8221; is the key missing piece here.</p>
]]></description><pubDate>Sun, 31 Aug 2025 15:24:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45083896</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=45083896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45083896</guid></item><item><title><![CDATA[New comment by ceh123 in "GPT-5"]]></title><description><![CDATA[
<p>Right, but for self-improving AI, training new models does have a real-world bottleneck: energy and hardware (even if the data bottleneck is solved too).</p>
]]></description><pubDate>Fri, 08 Aug 2025 13:04:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44836517</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=44836517</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44836517</guid></item><item><title><![CDATA[New comment by ceh123 in "Ask HN: Who is hiring? (June 2025)"]]></title><description><![CDATA[
<p>ClearStride AI | Founding Software Engineer - Full Stack | Remote (US), Bay Area Preferred | Part Time | Equity Comp | clearstride.ai<p>ClearStride AI is building a comprehensive AI/ML powered platform for diagnostic radiology. Our initial focus is on equine radiographs, specifically targeting the unique needs of sports horse practitioners.<p>We are using deep learning to build a comprehensive diagnostic assistance platform that will enhance veterinary workflows and improve diagnostic accuracy. Our mission is to revolutionize the field of veterinary diagnostics, starting with automated annotations of radiographs and report generation.<p>We are looking for a founding SWE to help us finalize and deploy our MVP. The team is remote and based between CO and NY.<p>If you are interested please reach out to us through founders at clearstride dot ai.</p>
]]></description><pubDate>Tue, 03 Jun 2025 17:10:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44172256</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=44172256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44172256</guid></item><item><title><![CDATA[New comment by ceh123 in "Claude's system prompt is over 24k tokens with tools"]]></title><description><![CDATA[
<p>I'm not sure this really says the truth is more complex. It is still doing next-token prediction, but its prediction method is sufficiently sophisticated, in terms of conditional probabilities, that it recognizes that if a line needs to rhyme, it must reach some future state, which in turn shifts the probabilities of the intermediate tokens.<p>At least in my view it's still inherently a next-token predictor, just one with a really good grasp of conditional probabilities.</p>
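<p>To make the "it only ever picks the next token" framing concrete, here is next-token prediction in miniature: a bigram model, vastly simpler than an LLM, but with the same interface of conditioning on context and emitting one token at a time (corpus and names are illustrative):

```python
from collections import Counter, defaultdict

# Train a toy bigram next-token predictor on a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # conditional counts: P(next | prev)

def predict_next(token):
    # Most likely next token given the current one.
    return counts[token].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (follows "the" twice, vs "mat" once)
```

An LLM replaces these lookup tables with a learned function over the whole preceding context, which is where the apparent "planning" lives, but the sampling loop is still one token at a time.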
]]></description><pubDate>Wed, 07 May 2025 13:20:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43915310</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=43915310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43915310</guid></item><item><title><![CDATA[New comment by ceh123 in "AI, but at What Cost? Breakdown of AI's Carbon Footprint"]]></title><description><![CDATA[
<p>To add some extra numbers here, just to show how little energy this is.<p>This means AI is adding about 0.012% to those users' energy consumption.<p>From another angle: average US household energy consumption is around 30 kWh per day, and 0.012% of that is about 3.6 watt-hours per day. That's the energy equivalent of streaming HD video to your iPhone over a 4G network for roughly 1.5 seconds. [0]<p>In other words, a 15s YouTube ad you are forced to watch on your phone before the video you were going to watch anyway takes an order of magnitude more energy than the average AI user's daily usage, according to this article.<p>[0] <a href="https://www.statista.com/statistics/1109623/electricity-consumption-video-streaming-by-device-globally/?__sso_cookie_checker=failed" rel="nofollow">https://www.statista.com/statistics/1109623/electricity-cons...</a></p>
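<p>The back-of-envelope arithmetic above, written out (assuming the 30 kWh/day household figure and the article's 0.012% share):

```python
# Check the household AI-energy estimate above.
daily_household_wh = 30_000      # ~30 kWh/day average US household, in Wh
ai_share = 0.012 / 100           # 0.012% expressed as a fraction
ai_wh_per_day = daily_household_wh * ai_share
print(round(ai_wh_per_day, 3))   # -> 3.6 watt-hours per day
```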
]]></description><pubDate>Tue, 28 Jan 2025 13:40:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=42852166</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=42852166</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42852166</guid></item><item><title><![CDATA[New comment by ceh123 in "Adventures in Probability"]]></title><description><![CDATA[
<p>We don't need first-principles thinking every time, but understanding why you can't just test 100 variations of your hypothesis and accept p=0.05 as "statistically significant" is important.<p>It's also quite useful to have the background to understand the difference between Pearson correlation and Spearman rank correlation, or why you might prefer Welch's t-test to Student's, etc.<p>Not that you should necessarily know all of these off the top of your head, but you should have the foundation to learn them quickly, and you should know what assumptions the tests you're using actually make.</p>
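<p>The multiple-comparisons point is easy to see by simulation: when the null hypothesis is true, p-values are uniform on [0, 1], so a 0.05 threshold flags about 5% of tests as "significant" by chance alone. Run 100 variations and you should expect roughly 5 spurious hits:

```python
import random

# Simulate many tests of a TRUE null hypothesis: under the null,
# p-values are uniform on [0, 1], so p < 0.05 occurs ~5% of the time.
random.seed(42)
n_tests = 100_000
p_values = [random.random() for _ in range(n_tests)]
false_positive_rate = sum(p < 0.05 for p in p_values) / n_tests
print(false_positive_rate)  # close to 0.05, i.e. ~5 per 100 tests
```

This is why corrections like Bonferroni (divide the threshold by the number of tests) exist.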
]]></description><pubDate>Mon, 11 Nov 2024 18:13:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=42109191</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=42109191</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42109191</guid></item><item><title><![CDATA[New comment by ceh123 in "Boeing whistleblower: MAX 9 production line has "enormous volume of defects""]]></title><description><![CDATA[
<p>Important correction: it's not DD/MM/YYYY but DD Month YYYY (at least in what I saw in the article).<p>This format is common in heavily regulated industries and is frequently a regulatory requirement, since it's fully unambiguous. I (an American) worked in clinical research/pharma for a bit and still write my dates like 23Jan2024.</p>
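<p>The DDMonYYYY style mentioned above is straightforward to produce with standard date formatting; a quick sketch (the example date matches the one in the comment, and assumes the default C/English locale for the month abbreviation):

```python
from datetime import date

# Format a date in the unambiguous DDMonYYYY style, e.g. 23Jan2024.
# %d = zero-padded day, %b = abbreviated month name, %Y = 4-digit year.
d = date(2024, 1, 23)
print(d.strftime("%d%b%Y"))  # -> 23Jan2024
```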
]]></description><pubDate>Tue, 23 Jan 2024 14:35:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=39103809</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=39103809</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39103809</guid></item><item><title><![CDATA[New comment by ceh123 in "Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory"]]></title><description><![CDATA[
<p>As someone in the later stages of a math PhD: given that the title starts with &#8220;Mathematical Introduction&#8230;&#8221;, the notation feels pretty reasonable for someone with a background in math.<p>Sure, skimming through on my phone I'd want some slight changes to the notation, but everything they define and the notation they choose feels familiar, and I understand why they made the choices they did.<p>Mirroring what someone else said, this is exactly the kind of deep learning intro I've been looking for.</p>
]]></description><pubDate>Mon, 01 Jan 2024 22:40:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=38835982</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=38835982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38835982</guid></item><item><title><![CDATA[New comment by ceh123 in "It’s infuriatingly hard to understand how closed models train on their input"]]></title><description><![CDATA[
<p>Would love a link to this if anyone knows the paper?</p>
]]></description><pubDate>Sun, 04 Jun 2023 22:10:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=36190071</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=36190071</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36190071</guid></item><item><title><![CDATA[New comment by ceh123 in "Ask HN: What sub $200 product improved your 2022"]]></title><description><![CDATA[
<p>Also a Kagi subscriber.<p>In my experience it's at least marginally better, and one of the really nice features (probably the main reason I subscribe) is that you can block domains extremely easily. Whenever I hit an SEO garbage site, I just go back, block it, and never worry about it again. In the areas you search regularly, this quickly gets you result pages of substantially higher quality than Google's.</p>
]]></description><pubDate>Fri, 06 Jan 2023 16:32:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=34276827</link><dc:creator>ceh123</dc:creator><comments>https://news.ycombinator.com/item?id=34276827</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34276827</guid></item></channel></rss>