<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: curious_cat_163</title><link>https://news.ycombinator.com/user?id=curious_cat_163</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 18:20:53 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=curious_cat_163" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by curious_cat_163 in "Melvyn Bragg steps down from presenting In Our Time"]]></title><description><![CDATA[
<p>That makes me sad. I will miss his voice. I loved how he interrupted his guests and kept them honest and on point. I loved the casual offer of tea/coffee at the end. And I loved how it sometimes had that encore bit at the end!<p>This podcast chose its listeners and kept it real. Thanks to everyone who makes it possible. I hope they find a fitting replacement for Melvyn and keep it going!</p>
]]></description><pubDate>Thu, 04 Sep 2025 21:42:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=45132549</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=45132549</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45132549</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "My boss fired me over WhatsApp while he was on vacation in Honolulu"]]></title><description><![CDATA[
<p>Give it time, it should [1] go down to where it belongs. :-)<p>[1] <a href="https://news.ycombinator.com/item?id=1781013">https://news.ycombinator.com/item?id=1781013</a></p>
]]></description><pubDate>Sat, 16 Aug 2025 17:57:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44925565</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44925565</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44925565</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "NSF and Nvidia award Ai2 $152M to support building an open AI ecosystem"]]></title><description><![CDATA[
<p>We don't really have many open source models. We have "open weights" models. Ai2 is one of the very few labs that actually make their entire training/inference code AND datasets AND training run details public. So this investment is a welcome step.<p>Congratulations to the team at Ai2!</p>
]]></description><pubDate>Thu, 14 Aug 2025 17:38:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=44903319</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44903319</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44903319</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "Genie 3: A new frontier for world models"]]></title><description><![CDATA[
<p>> So what is final state here for us? Return to menial not-yet-automated work? And when this would be eventually automated, what's left? Plug our brains to personalized autogenerated worlds that are tailored to trigger related neuronal circuitry for producing ever increasing dopamine levels and finally burn our brains out (which is arguably already happening with tiktok-style leasure)? And how you are supposed to pay for that, if all work is automated? How economics of that is supposed to work?<p>Wow. What a picture! Here's an optimistic take, fwiw: Whenever we have had a paradigm shift in our ability to process information, we have grappled with it by shifting to higher-level tasks.<p>We tend to "invent" new work as we grapple with the technology. The job of a UX designer did not exist in 1970s (at least not as a separate category employing 1000s of people; now I want to be careful this is HN, so there might be someone on here who was doing that in the 70s!).<p>And there is capitalism -- if everyone has access to the best-in-class model, then no one has true edge in a competition. That is not a state that capitalism likes. The economics _will_ ultimately kick in. We just need this recent S-curve to settle for a bit.</p>
]]></description><pubDate>Tue, 05 Aug 2025 17:24:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=44801142</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44801142</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44801142</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "Intel CEO Letter to Employees"]]></title><description><![CDATA[
<p>That's an interesting way to look at it.<p>Aren't layoffs a version of that? Are we seeing any evidence that folks who have been let go from Intel have gone on to create spin-offs and startups?<p>I know at least one person who went from Intel to work at Nvidia, but that is neither of those things.</p>
]]></description><pubDate>Thu, 24 Jul 2025 21:40:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44676573</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44676573</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44676573</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "Intel CEO Letter to Employees"]]></title><description><![CDATA[
<p>Oh, I don't know. Maybe build chips that do things 10x more efficiently and sell them at a lower cost to compete?<p>It _is_ a hype bubble, but it is also an S-curve. Intel has missed the AI boat so far; if they are trying to catch up, I would encourage them to try. Building marginally better x86 chips might not cut it anymore.</p>
]]></description><pubDate>Thu, 24 Jul 2025 21:34:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44676500</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44676500</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44676500</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "The rise of AI as a threat to the S&P 500 [pdf]"]]></title><description><![CDATA[
<p>And, so what?</p>
]]></description><pubDate>Thu, 17 Jul 2025 16:01:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=44594809</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44594809</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44594809</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "Reflections on OpenAI"]]></title><description><![CDATA[
<p>> It is fairly rare to see an ex-employee put a positive spin on their work experience.<p>I liked my jobs and bosses!</p>
]]></description><pubDate>Tue, 15 Jul 2025 21:47:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=44576150</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44576150</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44576150</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "Ask HN: Is it time to fork HN into AI/LLM and "Everything else/other?""]]></title><description><![CDATA[
<p>Maybe you'll want to try Techne: <a href="https://techne.app" rel="nofollow">https://techne.app</a>.</p>
]]></description><pubDate>Tue, 15 Jul 2025 19:47:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=44575096</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44575096</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44575096</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "The upcoming GPT-3 moment for RL"]]></title><description><![CDATA[
<p>> Rather than fine-tuning models on a small number of environments, we expect the field will shift toward massive-scale training across thousands of diverse environments.<p>This is a great hypothesis for you to prove one way or the other.<p>> Doing this effectively will produce RL models with strong few-shot, task-agnostic abilities capable of quickly adapting to entirely new tasks.<p>I am not sure if I buy that, frankly. Even if you were to develop radically efficient means to create "effective and comprehensive" test suites that power replication training, it is not at all a given that it will translate to entirely new tasks. Yes, there is the bitter lesson and all that but we don't know if this is _the_ right hill to climb. Again, at best, this is a hypothesis.<p>> But achieving this will require training environments at a scale and diversity that dwarf anything currently available.<p>Yes. You should try it. Let us know if it works. All the best!</p>
]]></description><pubDate>Sun, 13 Jul 2025 17:35:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44552012</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44552012</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44552012</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "Measuring the impact of AI on experienced open-source developer productivity"]]></title><description><![CDATA[
<p>100% agreed. It is all about removing friction for me. Case in point: I would not have touched React in my previous career without the assist that LLMs now provide. The barrier to entry just _felt_ too large, and one always has the instinct to stick with what one knows.<p>However, it is _fun_ to get over the barrier when it amounts to chatting with a model for a quick tutorial and then producing working code for a prototype (for your specific needs) that applies the understanding you just developed. The alternative (without LLMs) is to first do the groundwork of learning via tutorials in text/video form and then do the cognitive mapping of applying that learning to one's prototype. I would make a lot of mistakes on this path that expert/intermediate React developers don't make.<p>One could argue that this shortcuts some learning and that perhaps the old way results in better retention. But our field changes so fast... and when it remains static for too long, projects die. I think of all this as an accelerant for adopting new ways of thinking about software and diffusing them more quickly across the developer population globally. Code is always fungible, anyway. The job is about all the other things one needs to do besides coding.</p>
]]></description><pubDate>Fri, 11 Jul 2025 14:45:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44532723</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44532723</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44532723</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "Define policy forbidding use of AI code generators"]]></title><description><![CDATA[
<p>That’s very conservative.</p>
]]></description><pubDate>Thu, 26 Jun 2025 00:01:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44382968</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44382968</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44382968</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "Show HN: Claude Code Usage Monitor – real-time tracker to dodge usage cut-offs"]]></title><description><![CDATA[
<p>Rituals define a school of thought (or a religion). These are rituals of folks who want to prevent catastrophe through conservation. To each their own.<p>Ultimately, individual habits do add up. But with climate, one would be hard pressed to find evidence that conservation is the path forward. It does not work, unfortunately.</p>
]]></description><pubDate>Thu, 19 Jun 2025 16:35:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44320217</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44320217</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44320217</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "The Rise of Reasoning Machines"]]></title><description><![CDATA[
<p>Nathan Lambert provides a counterpoint to the recent "The Illusion of Thinking" paper by Apple [1]:<p>"On one of these toy problems, the Tower of Hanoi, the models structurally cannot output enough tokens to solve the problem — the authors still took this as a claim that “these models cannot reason” or “they cannot generalize.” This is a small scientific error."<p>"it appears that a majority of critiques of AI reasoning are based in a fear of no longer being special rather than a fact-based analysis of behaviors."<p>[1]: <a href="https://www.arxiv.org/pdf/2506.06941" rel="nofollow">https://www.arxiv.org/pdf/2506.06941</a></p>
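<p>To make the token-budget point concrete, here is a back-of-the-envelope sketch (my illustration, not from Lambert's post; the tokens-per-move and output-budget figures are assumptions) of how quickly an exhaustive Tower of Hanoi move list outgrows any plausible output window:<p><pre><code>
# Why enumerating every Tower of Hanoi move overruns an output budget.
# TOKENS_PER_MOVE and OUTPUT_BUDGET are assumed, illustrative values.
TOKENS_PER_MOVE = 10      # e.g. "move disk 3 from peg A to peg C"
OUTPUT_BUDGET = 64_000    # assumed max output tokens for a typical LRM

for n_disks in (5, 10, 15, 20):
    min_moves = 2 ** n_disks - 1                  # optimal solution length
    tokens_needed = min_moves * TOKENS_PER_MOVE
    verdict = "exceeds budget" if tokens_needed > OUTPUT_BUDGET else "fits"
    print(f"{n_disks:2d} disks: {min_moves:>9,} moves, ~{tokens_needed:>10,} tokens ({verdict})")
</code></pre>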
]]></description><pubDate>Thu, 12 Jun 2025 18:16:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44260898</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44260898</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44260898</guid></item><item><title><![CDATA[The Rise of Reasoning Machines]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.interconnects.ai/p/the-rise-of-reasoning-machines">https://www.interconnects.ai/p/the-rise-of-reasoning-machines</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44260779">https://news.ycombinator.com/item?id=44260779</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 12 Jun 2025 18:08:43 +0000</pubDate><link>https://www.interconnects.ai/p/the-rise-of-reasoning-machines</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44260779</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44260779</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "Reinforcement Pre-Training"]]></title><description><![CDATA[
<p>I am not sure why this ought to require "pump another $100 Billion". Could you elaborate?<p>Yes, the more recent generations of GPUs optimize for attention math. But they are still fairly "general-purpose" accelerators as well. So when I see papers like this (interesting idea, btw!), my mental model for costs suggests that the CapEx to buy up the GPUs and build out the data centers would get re-used for this and 100s of other ideas and experiments.<p>And then the hope is that the best ideas will occupy more of the available capacity...</p>
]]></description><pubDate>Tue, 10 Jun 2025 18:06:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44239586</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44239586</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44239586</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]]></title><description><![CDATA[
<p>It is hard to compare models with humans, so I am not sure how to answer it for both. :)<p>But for models, this is an interesting finding because a lot of LRMs are LLMs with a _bunch_ of post-training done on top. We know this about DeepSeek R1 (one of the models evaluated in the Apple paper) for sure. They write extensively about how they took DeepSeek-V3-Base and made R1 with it. [1]<p>If the post-training results in lower performance on simpler tasks, then that ought to inspire more research on how to prevent it -- i.e., with more training (of any kind), we should be gaining capabilities, not losing them. This has been a problem with DNNs historically, btw. We had these issues when fine-tuning text/image classifiers as well. Some weight changes can be destructive. So, it has to be done with a _lot_ of care. And I am sure folks are working on it, to be honest. Maybe some of them will say something here. :-)<p>[1] <a href="https://github.com/deepseek-ai/DeepSeek-R1">https://github.com/deepseek-ai/DeepSeek-R1</a></p>
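<p>For what it's worth, here is a minimal sketch of one standard way to limit destructive weight changes (my illustration, not DeepSeek's recipe): freeze the pre-trained backbone and train only a small task head, so gradients from the new task cannot overwrite the base weights:<p><pre><code>
import torch
import torch.nn as nn

# Stand-in for a pre-trained backbone and a new task-specific head.
base = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
)
head = nn.Linear(256, 10)

# Freeze the backbone: its weights receive no gradients, so training on
# the new task cannot degrade whatever the backbone already knows.
for p in base.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 128)              # dummy batch
y = torch.randint(0, 10, (32,))
loss = loss_fn(head(base(x)), y)
loss.backward()
optimizer.step()
</code></pre>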
]]></description><pubDate>Sun, 08 Jun 2025 16:25:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44217919</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44217919</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44217919</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]]></title><description><![CDATA[
<p>> Rather than standard benchmarks (e.g., math problems), we adopt controllable puzzle environments that let us vary complexity systematically<p>Very clever, I must say. Kudos to folks who made this particular choice.<p>> we identify three performance regimes: (1) low complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse.<p>This is fascinating! We need more "mapping" of regimes like this!<p>What I would love to see (not sure if someone on here has seen anything to this effect) is how these complexity regimes might map to economic value of the task.<p>For that, the eval needs to go beyond puzzles but the complexity of the tasks still need to be controllable.</p>
]]></description><pubDate>Sat, 07 Jun 2025 00:34:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44206428</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=44206428</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44206428</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "Show HN: Cloud-Ready Postgres MCP Server"]]></title><description><![CDATA[
<p>That’s a good example of a worst-case scenario. This is why we would still need humans loitering about.<p>The question is: do they still need 10? Or would 2 suffice? How about 5?<p>This does not need to be a debate about absolutes.</p>
]]></description><pubDate>Sun, 30 Mar 2025 13:41:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43524126</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=43524126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43524126</guid></item><item><title><![CDATA[New comment by curious_cat_163 in "Improving recommendation systems and search in the age of LLMs"]]></title><description><![CDATA[
<p>To me, it reads like a survey paper intended for (and maybe written by) a researcher about to start a new project. I am not a researcher in this space, but I have dabbled elsewhere, so it is somewhat accessible. The degree to which one leverages existing jargon in one's writing is a choice, of course.<p>I am curious -- what would have made it more effective at conveying information to you? Different people learn differently, but I wonder how people get past the hurdle of jargon.</p>
]]></description><pubDate>Sun, 23 Mar 2025 15:02:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43453382</link><dc:creator>curious_cat_163</dc:creator><comments>https://news.ycombinator.com/item?id=43453382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43453382</guid></item></channel></rss>