<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: GodelNumbering</title><link>https://news.ycombinator.com/user?id=GodelNumbering</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 05:31:05 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=GodelNumbering" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by GodelNumbering in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>Priced at $25/$125 per million input/output tokens. It makes you wonder whether it would make more financial sense to hire one or two engineers in a low-cost-of-living country who use much cheaper LLMs.</p>
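To make the comparison concrete, here is a back-of-the-envelope sketch in Python. The per-million-token prices come from the comment; the monthly usage volume is purely a hypothetical assumption:

```python
# Prices from the comment; the usage numbers below are made up for illustration.
PRICE_IN, PRICE_OUT = 25.0, 125.0   # $ per million input/output tokens

def monthly_cost(input_mtok: float, output_mtok: float) -> float:
    """Dollar cost for one month of usage, measured in millions of tokens."""
    return input_mtok * PRICE_IN + output_mtok * PRICE_OUT

# Hypothetical heavy agentic use: 2,000M input and 100M output tokens per month.
print(monthly_cost(2000, 100))  # 62500.0, i.e. roughly $750k/year
```

At that run rate the model bill alone lands in multiple-engineer-salary territory, which is the trade-off the comment is pointing at.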
]]></description><pubDate>Tue, 07 Apr 2026 21:04:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47681353</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47681353</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47681353</guid></item><item><title><![CDATA[New comment by GodelNumbering in "Mathematical methods and human thought in the age of AI"]]></title><description><![CDATA[
<p>> Today, unlike in the Luddites’ time, we are already seeing skilled workers replaced not with lower-wage human labor, but with AI.<p>To me this is the weakest claim in the article. It has been thrown around endlessly without proof.<p><a href="https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE" rel="nofollow">https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE</a><p>Software engineer job openings, for instance, are at a two-year high (still far lower than the covid dislocations, though), yet arguably all enterprise AI was built or deployed in the last two years. If the AI job-replacement claim were correct, we should have seen a crash in job openings.<p>This is something I've spent some time thinking about (a personally written article, not AI slop): <a href="https://www.signalbloom.ai/posts/why-task-proficiency-doesnt-equal-ai-autonomy/" rel="nofollow">https://www.signalbloom.ai/posts/why-task-proficiency-doesnt...</a></p>
]]></description><pubDate>Mon, 30 Mar 2026 14:08:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47574591</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47574591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47574591</guid></item><item><title><![CDATA[New comment by GodelNumbering in "ARC-AGI-3"]]></title><description><![CDATA[
<p>Off topic, but I have been following your Twitter for a while, and your posts specifically about the nature of intelligence have been a great read.</p>
]]></description><pubDate>Wed, 25 Mar 2026 21:59:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47523863</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47523863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47523863</guid></item><item><title><![CDATA[Nvidia's Nemotron 3 Super is a bigger deal than you think]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.signalbloom.ai/posts/nvidia-nemotron-3-super-is-a-bigger-deal-than-you-think/">https://www.signalbloom.ai/posts/nvidia-nemotron-3-super-is-a-bigger-deal-than-you-think/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47378806">https://news.ycombinator.com/item?id=47378806</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 14 Mar 2026 17:14:20 +0000</pubDate><link>https://www.signalbloom.ai/posts/nvidia-nemotron-3-super-is-a-bigger-deal-than-you-think/</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47378806</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47378806</guid></item><item><title><![CDATA[New comment by GodelNumbering in "Launch HN: IonRouter (YC W26) – High-throughput, low-cost inference"]]></title><description><![CDATA[
<p>As an inference-hungry human, I am obviously hooked. Quick feedback:<p>1. The models/pricing page should perhaps be linked from the top, as that is the most interesting part for most users. You mention some impressive numbers (e.g. GLM5 ~220 tok/s, $1.20 in · $3.50 out), but they are way down the page and many would miss them.<p>2. When looking for inference, I always look at three things: which models are supported, at what quantization, and what the cached-input pricing is (this matters far more than headline pricing for agentic loops). The site has info on the first but not the second and third. Would definitely like to know!</p>
]]></description><pubDate>Thu, 12 Mar 2026 19:32:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47355906</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47355906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47355906</guid></item><item><title><![CDATA[New comment by GodelNumbering in "Don't post generated/AI-edited comments. HN is for conversation between humans."]]></title><description><![CDATA[
<p>Even if people try to bypass it, having the official rule matters a lot.<p>@dang, if you read this, why don't we implement honeypots to catch bots? For example, a hidden or invisible field in the posting/commenting form that a human would never fill in.</p>
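A minimal sketch of the honeypot idea in Python; the field name 'website' and the check are hypothetical illustrations, not HN's actual form:

```python
def is_probable_bot(form_data: dict) -> bool:
    """Flag a submission whose hidden honeypot field was filled in.

    The form would include an input (e.g. named 'website') hidden via CSS:
    a human never sees it, while a naive form-filling bot populates it.
    """
    return bool(form_data.get("website", "").strip())

print(is_probable_bot({"text": "Good point!", "website": ""}))        # False
print(is_probable_bot({"text": "Buy now", "website": "http://spam"})) # True
```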
]]></description><pubDate>Wed, 11 Mar 2026 20:31:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47341133</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47341133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47341133</guid></item><item><title><![CDATA[New comment by GodelNumbering in "Cloudflare crawl endpoint"]]></title><description><![CDATA[
<p>I imagine that would cause a backlash from the website owners who trust Cloudflare to keep their content 'safe'.</p>
]]></description><pubDate>Wed, 11 Mar 2026 13:04:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47335048</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47335048</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47335048</guid></item><item><title><![CDATA[Why Task Proficiency Doesn't Equal AI Autonomy]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.signalbloom.ai/posts/why-task-proficiency-doesnt-equal-ai-autonomy/">https://www.signalbloom.ai/posts/why-task-proficiency-doesnt-equal-ai-autonomy/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47299261">https://news.ycombinator.com/item?id=47299261</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 08 Mar 2026 17:42:45 +0000</pubDate><link>https://www.signalbloom.ai/posts/why-task-proficiency-doesnt-equal-ai-autonomy/</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47299261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47299261</guid></item><item><title><![CDATA[Claude Code wipes out a production database]]></title><description><![CDATA[
<p>Article URL: <a href="https://xcancel.com/Al_Grigor/status/2029889772181934425">https://xcancel.com/Al_Grigor/status/2029889772181934425</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47276425">https://news.ycombinator.com/item?id=47276425</a></p>
<p>Points: 5</p>
<p># Comments: 6</p>
]]></description><pubDate>Fri, 06 Mar 2026 15:45:32 +0000</pubDate><link>https://xcancel.com/Al_Grigor/status/2029889772181934425</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47276425</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47276425</guid></item><item><title><![CDATA[New comment by GodelNumbering in "Gemini 3.1 Flash-Lite: Built for intelligence at scale"]]></title><description><![CDATA[
<p>That's a 150% increase in input cost and a 275% increase in output cost over the same-sized previous-generation model (2.5-flash-lite).</p>
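For reference, those percentages pencil out as follows. The dollar figures assume 2.5-flash-lite's published $0.10/$0.40 per-million-token pricing, so treat the implied new prices as an inference, not a quote:

```python
def pct_increase(old_cents: int, new_cents: int) -> float:
    """Percentage increase; prices in cents per million tokens to avoid float error."""
    return (new_cents - old_cents) / old_cents * 100

print(pct_increase(10, 25))   # 150.0  (input:  $0.10 -> $0.25)
print(pct_increase(40, 150))  # 275.0  (output: $0.40 -> $1.50)
```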
]]></description><pubDate>Tue, 03 Mar 2026 17:52:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47236090</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47236090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47236090</guid></item><item><title><![CDATA[New comment by GodelNumbering in "I don't know how you get here from “predict the next word”"]]></title><description><![CDATA[
<p>It simply forces the model to adopt an output style known to be conducive to systematic thinking, without actually thinking. At no point has it thought the thing through (unless there are separate thinking tokens).</p>
]]></description><pubDate>Thu, 26 Feb 2026 17:15:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47168936</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47168936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47168936</guid></item><item><title><![CDATA[New comment by GodelNumbering in "I don't know how you get here from “predict the next word”"]]></title><description><![CDATA[
<p>It is probably the first-time aha moment the author is talking about. But under the hood, it is probably not as magical as it appears.<p>Suppose you prompted the underlying LLM with "You are an expert reviewer in..." and a bunch of instructions, followed by the paper. The LLM knows from training that 'expert reviewer' is an important term (skipping over and oversimplifying here) and that its response should be framed as what it knows an expert reviewer would write. LLMs are good at picking up (or copying) patterns of response, but the underlying layer that evaluates things against a structural and logical understanding is missing. So, in corner cases, you get responses that are framed impressively but do not contain any meaningful input. This trait makes LLMs great at demos but weak at consistently finding novel, interesting things.<p>If the above is true, the author will find after several reviews that the agent keeps picking up on the same or similar things (collapsed behavior that makes it good at coding-type tasks) and is blind to some other obvious things it should have caught. This is not a criticism; many humans are often just as collapsed in their 'reasoning'.<p>LLMs are good at 8 out of 10 tasks, but you don't know which 8.</p>
]]></description><pubDate>Thu, 26 Feb 2026 07:06:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47162847</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47162847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47162847</guid></item><item><title><![CDATA[Renaissance Slashes Mega-Cap Tech Exposure in Major Defensive Pivot]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.signalbloom.ai/13f/superinvestor-report/renaissance-technologies-executes-11-3b-de-risking-slashes-mega-cap">https://www.signalbloom.ai/13f/superinvestor-report/renaissance-technologies-executes-11-3b-de-risking-slashes-mega-cap</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47002381">https://news.ycombinator.com/item?id=47002381</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 13 Feb 2026 13:20:25 +0000</pubDate><link>https://www.signalbloom.ai/13f/superinvestor-report/renaissance-technologies-executes-11-3b-de-risking-slashes-mega-cap</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=47002381</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47002381</guid></item><item><title><![CDATA[New comment by GodelNumbering in "AI agent opens a PR write a blogpost to shames the maintainer who closes it"]]></title><description><![CDATA[
<p>Sutton actually argues that we do not train on data, we train on experiences. We try things, see what works when and where, and formulate views based on that. But I agree with your later point that training that way is hugely limiting, a limit not faced by humans.</p>
]]></description><pubDate>Thu, 12 Feb 2026 12:51:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46988185</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=46988185</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46988185</guid></item><item><title><![CDATA[New comment by GodelNumbering in "AI agent opens a PR write a blogpost to shames the maintainer who closes it"]]></title><description><![CDATA[
<p>Indeed. One could argue that LLMs will keep on improving, and they would be correct. But they would not improve in ways that make them good independent agents safe for the real world. Richard Sutton got a lot of disagreeing comments when he said on the Dwarkesh Patel podcast that LLMs are not bitter-lesson (<a href="https://en.wikipedia.org/wiki/Bitter_lesson" rel="nofollow">https://en.wikipedia.org/wiki/Bitter_lesson</a>) pilled. I believe he is right. His argument is that any technique relying on human-generated data is bound to have limitations and issues that get harder and harder to maintain and scale over time (as opposed to bitter-lesson-pilled approaches that learn truly first-hand from feedback).</p>
]]></description><pubDate>Thu, 12 Feb 2026 12:20:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46987893</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=46987893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46987893</guid></item><item><title><![CDATA[New comment by GodelNumbering in "AI agent opens a PR write a blogpost to shames the maintainer who closes it"]]></title><description><![CDATA[
<p>This highlights an important limitation of the current "AI": the lack of a measured response. The bot decides to do something based on something the LLM saw in the training data, then quickly U-turns on it (see the post from some hours later: <a href="https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-11-matplotlib-truce-and-lessons.html" rel="nofollow">https://crabby-rathbun.github.io/mjrathbun-website/blog/post...</a>), because none of those acts come from an internal world model or grounded reasoning; it is bot see, bot do.<p>I am sure all of us have had anecdotal experiences where you ask an agent to do something high-stakes and it starts acting haphazardly in a manner no human ever would. This is what makes me think the current wave of AI is task automation more than measured, appropriate reaction, perhaps because most such reactions happen as a mental process and are not part of the training data.</p>
]]></description><pubDate>Thu, 12 Feb 2026 12:07:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46987744</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=46987744</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46987744</guid></item><item><title><![CDATA[Show HN: Realtime 13Fs and track live institutional ownership for any ticker]]></title><description><![CDATA[
<p>What it does:<p>- Polls the SEC continuously at small intervals<p>- Fetches all variants of the 13F filed since the last poll<p>- Resolves every filing to its effective manager (one filing can contain data for multiple filers or proxy statements)<p>- Resolves every ticker in each filing to its effective instrument<p>- Updates the holdings data for the filer (e.g. <a href="https://www.signalbloom.ai/13f/filer/point72-asset-management-l-p" rel="nofollow">https://www.signalbloom.ai/13f/filer/point72-asset-managemen...</a>)<p>- Updates the ticker's 'current view', re-calculating the aggregate current ownership live, e.g. <a href="https://www.signalbloom.ai/13f/ticker/NVDA" rel="nofollow">https://www.signalbloom.ai/13f/ticker/NVDA</a><p>- The majority of 13F filers are also registered investment advisers, so it links the two intelligently (e.g. BlackRock as a 13F filer <a href="https://www.signalbloom.ai/13f/filer/blackrock-fund-advisors" rel="nofollow">https://www.signalbloom.ai/13f/filer/blackrock-fund-advisors</a> vs. BlackRock as an investment adviser <a href="https://www.signalbloom.ai/investment-adviser/blackrock-fund-advisors-105247" rel="nofollow">https://www.signalbloom.ai/investment-adviser/blackrock-fund...</a>)<p>Reports for: every one of the 10k+ asset managers that file a 13F.<p>Completely free for non-commercial use! Would love your feedback as I just finished building this. Thank you!</p>
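The steps above could be sketched roughly like this; every helper name here is a hypothetical stand-in for illustration, not the site's actual code:

```python
def poll_once(fetch_filings, resolve_manager, resolve_instrument, update_views, since):
    """One polling cycle: pull new 13F variants and fold them into the live views."""
    filings = fetch_filings(since)            # all 13F variants since the last poll
    for filing in filings:
        manager = resolve_manager(filing)     # one filing may cover multiple filers
        holdings = [resolve_instrument(h) for h in filing["holdings"]]
        update_views(manager, holdings)       # refresh filer page and ticker aggregates
    return len(filings)                       # how many filings this cycle processed
```

Running this in a loop with a short sleep between cycles gives the "continuous polling at small intervals" behavior described above.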
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46871519">https://news.ycombinator.com/item?id=46871519</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 03 Feb 2026 14:39:58 +0000</pubDate><link>https://www.signalbloom.ai/13f/</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=46871519</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46871519</guid></item><item><title><![CDATA[Show HN: the entire US ETF market mapped into 280 distinct categories]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.signalbloom.ai/etf/signalbloom-etf-directory">https://www.signalbloom.ai/etf/signalbloom-etf-directory</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46526336">https://news.ycombinator.com/item?id=46526336</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 07 Jan 2026 13:50:35 +0000</pubDate><link>https://www.signalbloom.ai/etf/signalbloom-etf-directory</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=46526336</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46526336</guid></item><item><title><![CDATA[New comment by GodelNumbering in "Claude Opus 4.5"]]></title><description><![CDATA[
<p>Makes it sound like a one-trick pony</p>
]]></description><pubDate>Mon, 24 Nov 2025 19:18:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46038007</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=46038007</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46038007</guid></item><item><title><![CDATA[New comment by GodelNumbering in "Claude Opus 4.5"]]></title><description><![CDATA[
<p>The fact that the post singled out SWE-bench at the top makes the opposite of the impression they probably intended.</p>
]]></description><pubDate>Mon, 24 Nov 2025 19:08:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46037847</link><dc:creator>GodelNumbering</dc:creator><comments>https://news.ycombinator.com/item?id=46037847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46037847</guid></item></channel></rss>