<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: redlock</title><link>https://news.ycombinator.com/user?id=redlock</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 05:59:15 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=redlock" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by redlock in "Show HN: A Claude Code plugin that catch destructive Git and filesystem commands"]]></title><description><![CDATA[
<p>I hope this isn't Opus 4.5</p>
]]></description><pubDate>Tue, 30 Dec 2025 05:14:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46429777</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=46429777</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46429777</guid></item><item><title><![CDATA[New comment by redlock in "Reflections on AI at the End of 2025"]]></title><description><![CDATA[
<p>Not true anymore since Gemini 2.5 Pro.<p>I have quizzed it with three books (more than 1,500 pages in total) and it gave great answers.<p>Initially, yes: when they released the 2 million-token context with Gemini 1.5, it wasn’t effective.<p>Try it with Gemini 3 Pro/Flash now.</p>
]]></description><pubDate>Tue, 23 Dec 2025 09:11:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46363727</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=46363727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46363727</guid></item><item><title><![CDATA[New comment by redlock in "Reflections on AI at the End of 2025"]]></title><description><![CDATA[
<p>“But clearly the difference between LLMs in 2025 and 2023 is not as large as between 2023 and 2021.”<p>This is a ridiculous statement. A simple example of the huge difference is context size.<p>GPT-4 was, what, 8K? Now we’re in the millions with good retention. And this is just context size, let alone reasoning, multimodality, etc.</p>
]]></description><pubDate>Sun, 21 Dec 2025 12:20:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46344294</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=46344294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46344294</guid></item><item><title><![CDATA[New comment by redlock in "Defeating Nondeterminism in LLM Inference"]]></title><description><![CDATA[
<p>Deterministic inference is easier to debug.</p>
]]></description><pubDate>Thu, 11 Sep 2025 09:53:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=45209678</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=45209678</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45209678</guid></item><item><title><![CDATA[New comment by redlock in "OpenAI claims gold-medal performance at IMO 2025"]]></title><description><![CDATA[
<p>Nope<p><a href="https://x.com/polynoamial/status/1946478249187377206?s=46&t=eTe6EMTslOxUfPDx9s6IIQ" rel="nofollow">https://x.com/polynoamial/status/1946478249187377206?s=46&t=...</a></p>
]]></description><pubDate>Sat, 19 Jul 2025 12:13:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44614870</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=44614870</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44614870</guid></item><item><title><![CDATA[New comment by redlock in "Nobel Laureate Daron Acemoglu: Don't Believe the AI Hype"]]></title><description><![CDATA[
<p>Really? Mansplain? Why bring gender-war terms into this?</p>
]]></description><pubDate>Fri, 30 May 2025 06:35:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44133520</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=44133520</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44133520</guid></item><item><title><![CDATA[New comment by redlock in "Yann LeCun, Pioneer of AI, Thinks Today's LLM's Are Nearly Obsolete"]]></title><description><![CDATA[
<p>Doesn't OpenRouter's ranking include pricing?<p>That makes it not really a measure of quality or performance, but of cost-effectiveness.</p>
]]></description><pubDate>Sat, 05 Apr 2025 18:50:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=43595793</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=43595793</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43595793</guid></item><item><title><![CDATA[New comment by redlock in "Dow Slides Another 1k Points. Nasdaq on Pace to Enter Bear Market"]]></title><description><![CDATA[
<p>I wouldn't call the inflation that ravaged the world and real estate prices beyond the means of the middle class "saving".<p>His quantitative easing and zero interest rate policy only saved asset managers and corporate balance sheets.</p>
]]></description><pubDate>Fri, 04 Apr 2025 19:33:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43586760</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=43586760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43586760</guid></item><item><title><![CDATA[New comment by redlock in "Google’s two-year frenzy to catch up with OpenAI"]]></title><description><![CDATA[
<p>Noam Shazeer is not just any guy. And I would bet the latest jump in Gemini's capability is a result of his coming back.</p>
]]></description><pubDate>Sat, 29 Mar 2025 03:34:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=43512494</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=43512494</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43512494</guid></item><item><title><![CDATA[New comment by redlock in "Wall Street sell-off turns 'ugly' as US recession fears grow"]]></title><description><![CDATA[
<p>This is my observation as well. Recent market gains don't reflect how inflation eroded most people's purchasing power.<p>People today are worse off than before Covid, even though the market is much higher.<p>Debt-fueled growth (through quantitative easing and deficit spending) is not healthy and always has a bad ending.<p>Real economic growth is what was seen between 1950 and 1970, when purchasing power increased and most of the gains went to the middle class.</p>
]]></description><pubDate>Mon, 10 Mar 2025 18:52:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=43324370</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=43324370</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43324370</guid></item><item><title><![CDATA[New comment by redlock in "A bear case: My predictions regarding AI progress"]]></title><description><![CDATA[
<p>Moore's law is exponential</p>
]]></description><pubDate>Mon, 10 Mar 2025 15:23:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=43321617</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=43321617</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43321617</guid></item><item><title><![CDATA[New deepseek paper: Natively Trainable Sparse Attention mechanism]]></title><description><![CDATA[
<p>Article URL: <a href="https://twitter.com/deepseek_ai/status/1891745487071609327">https://twitter.com/deepseek_ai/status/1891745487071609327</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43087663">https://news.ycombinator.com/item?id=43087663</a></p>
<p>Points: 5</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 18 Feb 2025 09:18:39 +0000</pubDate><link>https://twitter.com/deepseek_ai/status/1891745487071609327</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=43087663</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43087663</guid></item><item><title><![CDATA[New comment by redlock in "If you believe in "Artificial Intelligence", take five minutes to ask it"]]></title><description><![CDATA[
<p>The parent poster did say they are aware they don’t know but can’t express it.<p>I am guessing they are referring to mechanistic interpretability research like these:<p><a href="https://arxiv.org/abs/2405.16908" rel="nofollow">https://arxiv.org/abs/2405.16908</a><p><a href="https://arxiv.org/abs/2407.03282" rel="nofollow">https://arxiv.org/abs/2407.03282</a><p>You are claiming they are statistical parrots, which I don’t think the parent poster meant.<p>The “statistical parrots” argument might have been compelling with GPT-3, but not with today’s models and the results of mechanistic interpretability research, which show internal representations and rudimentary world models.</p>
]]></description><pubDate>Sat, 15 Feb 2025 12:50:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=43058131</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=43058131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43058131</guid></item><item><title><![CDATA[New comment by redlock in "The impact of competition and DeepSeek on Nvidia"]]></title><description><![CDATA[
<p>The issue here is that, even with a lot of VRAM, you may be able to run the model, but with a large context, it will still be too slow. (For example, running LLaMA 70B with a 30k+ context prompt takes minutes to process.)</p>
]]></description><pubDate>Mon, 27 Jan 2025 21:06:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=42845624</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=42845624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42845624</guid></item><item><title><![CDATA[New comment by redlock in "A layoff fundamentally changed how I perceive work"]]></title><description><![CDATA[
<p>What did he do with the dumb and energetic?</p>
]]></description><pubDate>Mon, 27 Jan 2025 16:56:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=42843116</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=42843116</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42843116</guid></item><item><title><![CDATA[New comment by redlock in "Meta announces 5% cuts in preparation for 'intense year'"]]></title><description><![CDATA[
<p>He was always athletic. I believe he was captain of the fencing team in high school.</p>
]]></description><pubDate>Wed, 15 Jan 2025 03:35:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=42707026</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=42707026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42707026</guid></item><item><title><![CDATA[New comment by redlock in "Common misconceptions about the complexity in robotics vs. AI (2024)"]]></title><description><![CDATA[
<p>Do we know how human understanding works? It could be just statistical mapping, as you have framed it. You can’t say LLMs don’t understand when you don’t have a measurable definition of understanding.<p>Also, humans hallucinate/confabulate all the time. LLMs even forget in the same way humans do (strong recall at the start and end of the text but weaker in the middle).</p>
]]></description><pubDate>Sat, 11 Jan 2025 12:38:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=42665476</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=42665476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42665476</guid></item><item><title><![CDATA[New comment by redlock in "I am rich and have no idea what to do"]]></title><description><![CDATA[
<p>Until it killed him (ignoring doctors' advice on cancer).</p>
]]></description><pubDate>Fri, 03 Jan 2025 09:59:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=42584255</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=42584255</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42584255</guid></item><item><title><![CDATA[New comment by redlock in "I am rich and have no idea what to do"]]></title><description><![CDATA[
<p>What does Loom have to do with AI? It is a nifty way to share video recordings easily and quickly, and at that they did a great job.</p>
]]></description><pubDate>Fri, 03 Jan 2025 09:47:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=42584196</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=42584196</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42584196</guid></item><item><title><![CDATA[New comment by redlock in "Performance of LLMs on Advent of Code 2024"]]></title><description><![CDATA[
<p>I don't care much for the concept of AGI since it's ill defined.<p>However, out-of-distribution generation and generalization are a better, more useful metric. On this point, Yann LeCun has argued that interpolation is meaningless in the high-dimensional spaces these "curves" are embedded in (<a href="https://arxiv.org/abs/2110.09485" rel="nofollow">https://arxiv.org/abs/2110.09485</a>).<p>ARC has proven they can generalize, and AlphaGo (not an LLM but a deep network) has proven it can generate novel/creative solutions. We don't need AI to have a sense of self, an "I", for it to beat us at every human skill and activity. In fact, it might be detrimental and not useful for us if AI developed a sense of self.</p>
]]></description><pubDate>Wed, 01 Jan 2025 01:36:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=42563271</link><dc:creator>redlock</dc:creator><comments>https://news.ycombinator.com/item?id=42563271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42563271</guid></item></channel></rss>