<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tempusalaria</title><link>https://news.ycombinator.com/user?id=tempusalaria</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 16:49:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tempusalaria" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tempusalaria in "Video game union workers rally against $55B private acquisition of EA"]]></title><description><![CDATA[
<p>Most of EA’s revenue comes from franchise games that fall well below the typical AAA standard. EA’s value comes from its IP, not its talent</p>
]]></description><pubDate>Thu, 16 Oct 2025 17:54:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45608555</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=45608555</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45608555</guid></item><item><title><![CDATA[New comment by tempusalaria in "Claude Haiku 4.5"]]></title><description><![CDATA[
<p>Lots of situations; here are two I’ve faced recently (I can’t give much detail for privacy reasons, but this should be clear enough):<p>1) Low latency is desired and the user prompt is long.
2) A function runs many parallel requests but is rarely fired with a common prefix. OpenAI was very inconsistent about applying the cached prefix across all the requests, whereas with Anthropic it’s very easy to pre-fire the cache</p>
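The pre-fire pattern is simple: send one cheap warm-up request that writes the shared prefix to the cache, then fan out the parallel requests, which all read it. A minimal sketch of the request payloads, assuming Anthropic’s documented `cache_control` prompt-caching fields; the prefix text and model name are placeholders, and the actual HTTP calls are omitted:

```python
# Sketch of the "pre-fire" pattern with Anthropic-style prompt caching.
# Assumes the documented cache_control API shape; SHARED_PREFIX and the
# model name are placeholders, and the actual API calls are left out.

SHARED_PREFIX = "...long shared instructions used by every request..."

def build_request(user_message: str) -> dict:
    """Build a messages payload whose system prefix is marked cacheable."""
    return {
        "model": "claude-haiku-4-5",  # placeholder model name
        "max_tokens": 512,
        "system": [
            {
                "type": "text",
                "text": SHARED_PREFIX,
                # Marks the end of the cacheable prefix; later requests
                # with the same prefix read the cache instead of
                # reprocessing it.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }

# 1) Pre-fire: one tiny warm-up request writes the prefix to the cache.
warmup = build_request("ping")
warmup["max_tokens"] = 1  # we only care about the cache write

# 2) Fan out: the parallel requests all hit the now-warm cache.
batch = [build_request(q) for q in ["query one", "query two", "query three"]]
```

The point of the warm-up request is ordering: firing the batch immediately would race, and several requests could each pay the full cache-write cost before any cache entry exists.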
]]></description><pubDate>Thu, 16 Oct 2025 17:33:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=45608253</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=45608253</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45608253</guid></item><item><title><![CDATA[New comment by tempusalaria in "Claude Skills"]]></title><description><![CDATA[
<p>All these things are designed to create lock-in for the companies. They don’t really fundamentally add to the functionality of LLMs. Devs should focus on working directly with the models’ generation APIs rather than all the decoration around them.</p>
]]></description><pubDate>Thu, 16 Oct 2025 17:27:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45608155</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=45608155</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45608155</guid></item><item><title><![CDATA[New comment by tempusalaria in "Claude Haiku 4.5"]]></title><description><![CDATA[
<p>I vastly prefer the manual caching. Several aspects of automatic caching are suboptimal, for only a moderate saving in developer burden. I don’t use Anthropic much, but I wish the others had manual cache options</p>
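For concreteness, the appeal of manual caching is that you choose exactly where the cache boundaries sit. A sketch assuming Anthropic’s documented `cache_control` breakpoints (the tool schema and prompt text are made-up placeholders): cache the stable tools block and the stable system text, and leave the frequently changing tail uncached so it doesn’t churn the cache.

```python
# Sketch: manual cache breakpoints let you pick the boundaries yourself.
# Assumes Anthropic's documented cache_control fields; the tool schema
# and prompt text below are placeholders for illustration.

def build_payload(volatile_context: str, user_message: str) -> dict:
    """Payload with cached stable blocks and an uncached volatile tail."""
    return {
        "model": "claude-haiku-4-5",  # placeholder model name
        "max_tokens": 512,
        "tools": [
            {
                "name": "lookup",
                "description": "Example tool (placeholder).",
                "input_schema": {"type": "object", "properties": {}},
                # Breakpoint 1: tools rarely change, so cache them.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "system": [
            {
                "type": "text",
                "text": "Stable instructions shared by every call.",
                # Breakpoint 2: stable system text is cached too.
                "cache_control": {"type": "ephemeral"},
            },
            {
                "type": "text",
                # No cache_control: this block changes per request, so
                # caching it would just evict useful entries.
                "text": volatile_context,
            },
        ],
        "messages": [{"role": "user", "content": user_message}],
    }
```

An automatic scheme has to guess where the stable/volatile boundary is; here it is stated explicitly, which is the kind of control the comment is arguing for.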
]]></description><pubDate>Wed, 15 Oct 2025 17:45:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45596090</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=45596090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45596090</guid></item><item><title><![CDATA[New comment by tempusalaria in "DeepMind and OpenAI win gold at ICPC"]]></title><description><![CDATA[
<p>A lot of the current code and science capabilities do not come from NTP (next-token prediction) training.<p>Indeed, it seems most language-model RL does not even use process supervision, so it is a long way from NTP</p>
]]></description><pubDate>Wed, 17 Sep 2025 19:14:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45280166</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=45280166</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45280166</guid></item><item><title><![CDATA[New comment by tempusalaria in "Mistral raises 1.7B€, partners with ASML"]]></title><description><![CDATA[
<p>Cerebras has very limited scale. Mistral has very few users, so they can use Cerebras for inference, whereas OpenAI and Anthropic cannot. If Mistral grows a lot, they will stop using Cerebras</p>
]]></description><pubDate>Tue, 09 Sep 2025 09:31:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45179668</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=45179668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45179668</guid></item><item><title><![CDATA[New comment by tempusalaria in "F1 in Hungary: Strategy and fast tire changes make all the difference"]]></title><description><![CDATA[
<p>Fast tire changes only matter a very limited amount of the time: pretty much only when the extra time drops you a place. For a 3s (slow) stop vs a 2s (fast) stop to matter, one of the other ~20 cars has to sit in a specific 1-second window on what is typically a 90-second lap. Maybe 20% of the time a slow stop happens, it costs the driver a position.<p>Strategy matters a lot, and good strategy is worth at least a few positions in a race.</p>
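The ~20% figure falls out of a back-of-the-envelope density argument: treat the rival cars as roughly evenly spread around the lap (a big simplification, since cars bunch up) and ask how many sit in the one extra second a slow stop costs.

```python
# Back-of-the-envelope: how often does a 1s-slower pit stop cost a place?
# Assumes ~19 rival cars spread roughly evenly over a ~90s lap, which is
# a big simplification (in reality only nearby cars on rival strategies
# matter, and the field is not uniform).

rival_cars = 19          # the other cars in a 20-car field
lap_time_s = 90.0        # typical lap time
extra_stop_time_s = 1.0  # 3s (slow) stop vs 2s (fast) stop

# Expected rivals inside the extra 1-second window, which is roughly
# the chance the slow stop drops you a position.
p_costs_a_place = rival_cars * extra_stop_time_s / lap_time_s
print(f"~{p_costs_a_place:.0%} of slow stops cost a position")
```

That lands at roughly one in five slow stops, matching the estimate above.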
]]></description><pubDate>Mon, 01 Sep 2025 23:06:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=45097394</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=45097394</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45097394</guid></item><item><title><![CDATA[New comment by tempusalaria in "Building A16Z's Personal AI Workstation"]]></title><description><![CDATA[
<p>I imagine it runs Civ 2 pretty well</p>
]]></description><pubDate>Sat, 23 Aug 2025 17:09:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44997392</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=44997392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44997392</guid></item><item><title><![CDATA[New comment by tempusalaria in "Mark Zuckerberg freezes AI hiring amid bubble fears"]]></title><description><![CDATA[
<p>WhatsApp is certainly worth less today than what they paid for it plus the extra funding it has required over time, let alone producing anything close to an ROI. It has lost them more money than the metaverse stuff.<p>Insta was a huge hit for sure, but since then Meta’s capital allocation has been a disaster, including a lot of badly timed buybacks</p>
]]></description><pubDate>Thu, 21 Aug 2025 12:15:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44971797</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=44971797</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44971797</guid></item><item><title><![CDATA[New comment by tempusalaria in "Dispelling misconceptions about RLHF"]]></title><description><![CDATA[
<p>SFT is part of the classic RLHF process though</p>
]]></description><pubDate>Sun, 17 Aug 2025 13:17:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44931367</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=44931367</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44931367</guid></item><item><title><![CDATA[New comment by tempusalaria in "Best Practices for Building Agentic AI Systems"]]></title><description><![CDATA[
<p>Yes, this write-up is not about agents.<p>In fact it’s a great illustration of why the hype around agents is misplaced!</p>
]]></description><pubDate>Sat, 16 Aug 2025 08:54:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44921540</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=44921540</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44921540</guid></item><item><title><![CDATA[New comment by tempusalaria in "Best Practices for Building Agentic AI Systems"]]></title><description><![CDATA[
<p>I understand that calling it ‘agentic’ is nice for marketing, but most of what is described in this blog post is not related to agents. The design patterns you describe are explicitly non-agentic, and many of the use cases described are better handled by a single LLM call than by an agent.<p>Finally, saying that agents can have predictable behavior is wrong (except on simple tasks where you shouldn’t be using an agent anyway). Agents loop and feed their own output back as input, which makes them highly non-deterministic even for the same prompt.</p>
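The compounding point can be shown with a toy simulation (everything here is made up for illustration): replace the LLM with a stand-in sampler, loop its output back into its own context, and count distinct transcripts. One sampled call has a handful of possible outputs; an agent loop multiplies the possibilities at every step.

```python
import random

def toy_llm(prompt: str, rng: random.Random) -> str:
    """Stand-in for one sampled LLM call: the output distribution
    depends on the prompt, plus temperature-style randomness."""
    bias = sum(map(ord, prompt)) % 3
    weights = [3 if i == bias else 1 for i in range(3)]
    return rng.choices(["A", "B", "C"], weights=weights)[0]

def run_agent(steps: int, seed: int) -> str:
    """Agent loop: each output is appended to the context and fed back."""
    rng = random.Random(seed)
    context = "task:"
    for _ in range(steps):
        context += toy_llm(context, rng)
    return context

trials = range(200)
single_call = {run_agent(1, s) for s in trials}  # one sampled call
agent_loop = {run_agent(6, s) for s in trials}   # six compounding steps
print(len(single_call), "distinct one-call outputs")
print(len(agent_loop), "distinct six-step transcripts")
```

The single call can only ever produce three outputs; the six-step loop fans out toward 3^6 possible transcripts, because each sampled output changes the distribution of the next step. That is the compounding the comment describes.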
]]></description><pubDate>Sat, 16 Aug 2025 08:51:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44921521</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=44921521</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44921521</guid></item><item><title><![CDATA[New comment by tempusalaria in "Terence Tao on the suspension of UCLA grants"]]></title><description><![CDATA[
<p>They may not be acting in good faith but there is extremely clear evidence that UCLA has engaged in illegal racial hiring and admissions practices and has supported antisemitism on campus. UCLA chose to give them that ammunition.</p>
]]></description><pubDate>Sat, 02 Aug 2025 12:22:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44767010</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=44767010</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44767010</guid></item><item><title><![CDATA[New comment by tempusalaria in "Researchers value null results, but struggle to publish them"]]></title><description><![CDATA[
<p>If you are p-testing, this isn’t the case: failing to reject the null may just reflect low power, whereas a positive result is a much stronger assertion</p>
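The asymmetry is easy to simulate: give the data a real but modest effect, run an underpowered test many times, and most runs come back ‘null’ anyway. A stdlib-only sketch (the effect size, sample size, and alpha are arbitrary choices for illustration):

```python
import math
import random

def two_sided_coin_test(heads: int, n: int, alpha: float = 0.05) -> bool:
    """Normal-approximation test of H0: p = 0.5. True if H0 is rejected."""
    se = math.sqrt(0.25 / n)                     # std error under H0
    z = (heads / n - 0.5) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail
    return p_value < alpha

rng = random.Random(0)
true_p, n, sims = 0.55, 50, 2000  # a genuine effect, but a small sample
nulls = 0
for _ in range(sims):
    heads = sum(rng.random() < true_p for _ in range(n))
    if not two_sided_coin_test(heads, n):
        nulls += 1  # failed to reject despite the real effect

print(f"{nulls / sims:.0%} null results despite a genuine effect")
```

With this effect size and sample size the test rejects only rarely, so the large majority of runs report a null result even though the null hypothesis is false. A rejection, by contrast, would be wrong only about 5% of the time under a true null.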
]]></description><pubDate>Sat, 26 Jul 2025 08:23:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44692396</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=44692396</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44692396</guid></item><item><title><![CDATA[New comment by tempusalaria in "AI Market Clarity"]]></title><description><![CDATA[
<p>The term agent is just way overloaded. This guy defines it completely differently than the big labs do, and I’ve seen half a dozen different definitions in the last few months.<p>In the long run the definition used by OpenAI, Anthropic et al. will win out, so can we all just switch to that?</p>
]]></description><pubDate>Tue, 22 Jul 2025 19:52:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44652210</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=44652210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44652210</guid></item><item><title><![CDATA[New comment by tempusalaria in "I don't think AGI is right around the corner"]]></title><description><![CDATA[
<p>Even as someone who is skeptical about LLMs, I’m not sure how anyone can look at what was achieved in AlphaGo and not at least consider the possibility that NNs could be superhuman in basically every domain at some point</p>
]]></description><pubDate>Mon, 07 Jul 2025 00:30:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44485493</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=44485493</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44485493</guid></item><item><title><![CDATA[New comment by tempusalaria in "Gemini-2.5-pro-preview-06-05"]]></title><description><![CDATA[
<p>I agree. I find Claude easily the best model, at least for programming, which is the only thing I use LLMs for</p>
]]></description><pubDate>Thu, 05 Jun 2025 17:31:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44193822</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=44193822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44193822</guid></item><item><title><![CDATA[New comment by tempusalaria in "DeepSeek Open Infra: Open-Sourcing 5 AI Repos in 5 Days"]]></title><description><![CDATA[
<p>SemiAnalysis has made up many things.<p>They claim that a small Chinese hedge fund acquired $1bln in GPUs with no state support, including many sanctioned chips; that it then trained a model optimized for a far smaller compute cluster; and that they have a source at this very small fund who is willing to admit to export violations. A 40bln-active-parameter model is exactly the size you would expect for a cluster of the size DeepSeek claims.<p>What’s more likely: that SemiAnalysis made it up, as they have a bunch of other things, or that all of the above is true?</p>
]]></description><pubDate>Fri, 21 Feb 2025 10:48:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=43126138</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=43126138</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43126138</guid></item><item><title><![CDATA[New comment by tempusalaria in "Grok3 Launch [video]"]]></title><description><![CDATA[
<p>SemiAnalysis is wrong. They just made their numbers up (among many other things they have invented; they are not to be trusted). I have observed many errors of understanding, analysis, and calculation in their writing.<p>DeepSeek R1 is literally an open-weight model. It has <40bln active parameters; we know that for a fact. A model of that size is roughly optimally trained over the time period and cluster size claimed. In fact, the 70bln-parameter Llama 3 model used almost exactly the same compute as the DeepSeek V3/R1 claims (which makes sense, as you would expect somewhat less efficiency on the H800 and with the complex DeepSeek MoE architecture).</p>
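The sanity check behind this is the standard dense-transformer rule of thumb, training FLOPs ≈ 6 · N · D (N = active parameters, D = training tokens). Plugging in DeepSeek V3’s reported figures, with an assumed H800 peak throughput and utilization (both of which are my assumptions, not sourced from the report):

```python
# Rule of thumb: training FLOPs ~= 6 * N * D (forward + backward pass).
# active_params and train_tokens are DeepSeek V3's reported figures;
# the H800 peak throughput and MFU are assumptions for illustration.

active_params = 37e9      # reported ~37B activated parameters
train_tokens = 14.8e12    # reported ~14.8T training tokens
total_flops = 6 * active_params * train_tokens

h800_peak_flops = 989e12  # assumed BF16 dense peak (H100-class silicon)
mfu = 0.35                # assumed model-FLOPs utilization

gpu_seconds = total_flops / (h800_peak_flops * mfu)
gpu_hours = gpu_seconds / 3600
print(f"~{gpu_hours / 1e6:.1f}M GPU-hours")
```

Under these assumptions the estimate lands in the same ballpark as the ~2.8M H800 GPU-hours DeepSeek reported, which is the sense in which the claimed training budget is internally consistent for a model of this size.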
]]></description><pubDate>Tue, 18 Feb 2025 17:53:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43092855</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=43092855</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43092855</guid></item><item><title><![CDATA[New comment by tempusalaria in "OpenAI says it has evidence DeepSeek used its model to train competitor"]]></title><description><![CDATA[
<p>DeepSeek v3 (where the training cost claims come from) was announced a month ago and it had no impact outside of a small circle</p>
]]></description><pubDate>Wed, 29 Jan 2025 16:52:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=42867537</link><dc:creator>tempusalaria</dc:creator><comments>https://news.ycombinator.com/item?id=42867537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42867537</guid></item></channel></rss>