<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: spprashant</title><link>https://news.ycombinator.com/user?id=spprashant</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 08:08:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=spprashant" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by spprashant in "The peril of laziness lost"]]></title><description><![CDATA[
<p>At this point, I almost feel bad that people are piling on Garry Tan. Almost.</p>
]]></description><pubDate>Sun, 12 Apr 2026 21:47:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47744906</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47744906</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47744906</guid></item><item><title><![CDATA[New comment by spprashant in "Bitcoin miners are losing on every coin produced as difficulty drops"]]></title><description><![CDATA[
<p>And how does BTC play into this scenario?</p>
]]></description><pubDate>Sun, 12 Apr 2026 15:22:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47740785</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47740785</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47740785</guid></item><item><title><![CDATA[New comment by spprashant in "How We Broke Top AI Agent Benchmarks: And What Comes Next"]]></title><description><![CDATA[
<p>I tend to prefer the ARC-AGI benchmarks for the most part. But it's always interesting: when a new version drops, all the frontier models score less than 20% or so, and then over the next few releases they get all the way up to 80%+. If you use the models, it doesn't feel like they are that much more generally intelligent.<p>Most frontier models are terrible at ARC-AGI-3 right now.<p>These models are already great, no question, but are they really going to be that much more intelligent when we hit 80% again?</p>
]]></description><pubDate>Sun, 12 Apr 2026 00:11:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47735071</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47735071</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47735071</guid></item><item><title><![CDATA[New comment by spprashant in "Amazon Is Pulling Support for Kindles from 2012 or Earlier"]]></title><description><![CDATA[
<p>I have a Kindle from 2014 still going strong. I guess it'll be bricked in a couple of years.</p>
]]></description><pubDate>Thu, 09 Apr 2026 19:33:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47708637</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47708637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47708637</guid></item><item><title><![CDATA[New comment by spprashant in "Claude mixes up who said what"]]></title><description><![CDATA[
<p>The problem is, once you accept that it is needed, you can no longer push AI as a general intelligence with a superior understanding of the language we speak.<p>A structured LLM query is a programming language, and then you have to accept that you need software engineers for sufficiently complex structured queries. This goes against everything the technocrats have been saying.</p>
]]></description><pubDate>Thu, 09 Apr 2026 12:05:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47702581</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47702581</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47702581</guid></item><item><title><![CDATA[New comment by spprashant in "Muse Spark: Scaling towards personal superintelligence"]]></title><description><![CDATA[
<p>Sounds like a good effort. They are choosing to focus on multi-modality - perhaps they are taking a different route here from Anthropic.<p>I don't like that I need to log in to my FB/Instagram account to access this.</p>
]]></description><pubDate>Wed, 08 Apr 2026 20:03:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695551</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47695551</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695551</guid></item><item><title><![CDATA[New comment by spprashant in "Muse Spark: Scaling towards personal superintelligence"]]></title><description><![CDATA[
<p>In multimodal, yes, but Opus is definitely edging ahead in text/reasoning and agentic benchmarks.<p>I think the general skepticism is because they are late to the race, and they are releasing an Opus-4.6-equivalent model now, when Anthropic is teasing Mythos.</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:59:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47695515</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47695515</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695515</guid></item><item><title><![CDATA[New comment by spprashant in "Muse Spark: Scaling towards personal superintelligence"]]></title><description><![CDATA[
<p>I wonder if this is why the tech cartel is buying up all the hardware?<p>If the average user gets convinced they could run LLMs for cheap at home, you cannot trap users in your walled garden anymore.</p>
]]></description><pubDate>Wed, 08 Apr 2026 18:45:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47694484</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47694484</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47694484</guid></item><item><title><![CDATA[New comment by spprashant in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>We finally have the answer to the question: when do these labs stop giving away intelligence to the general public for $20 a month?<p>Selling shovels is now worth less than taking all the gold for themselves.</p>
]]></description><pubDate>Wed, 08 Apr 2026 01:15:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47683528</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47683528</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47683528</guid></item><item><title><![CDATA[New comment by spprashant in "USD Purchasing Power in Real Time Since 2000"]]></title><description><![CDATA[
<p>There's a little explainer if you hit the (?) at the bottom. They take the monthly inflation value and calculate it per tick.</p>
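<p>The site doesn't document the exact formula, but a plausible sketch of "monthly inflation spread evenly across ticks" (my own assumption, not from the page) looks like this:</p>

```python
def per_tick_decay(monthly_inflation: float, ticks_per_month: int) -> float:
    """Purchasing-power multiplier applied each tick, assuming the
    monthly inflation rate is compounded evenly across all ticks."""
    # Per-tick rate r such that (1 + r) ** ticks_per_month == 1 + monthly_inflation
    per_tick_rate = (1 + monthly_inflation) ** (1 / ticks_per_month) - 1
    # Each tick, a dollar's purchasing power shrinks by this factor
    return 1 / (1 + per_tick_rate)
```

<p>Compounding the per-tick factor over a full month then reproduces exactly one month's worth of inflation.</p>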
]]></description><pubDate>Tue, 07 Apr 2026 23:00:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682427</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47682427</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682427</guid></item><item><title><![CDATA[New comment by spprashant in "USD Purchasing Power in Real Time Since 2000"]]></title><description><![CDATA[
<p>You know what, it's not as bad as I was thinking.</p>
]]></description><pubDate>Tue, 07 Apr 2026 22:38:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682261</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47682261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682261</guid></item><item><title><![CDATA[New comment by spprashant in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>Well, the important thing is they have a lot more data on people actually using their models. They have read billions more lines of private repos and implemented millions of patches, all of which feeds into the newer models.<p>More importantly, they understand what behaviour people tend to appreciate and what changes are more likely to get approved. This real-world usage data is invaluable.</p>
]]></description><pubDate>Tue, 07 Apr 2026 22:30:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682176</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47682176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682176</guid></item><item><title><![CDATA[New comment by spprashant in "A forecast of the fair market value of SpaceX's businesses"]]></title><description><![CDATA[
<p>I am not smart with stock legal-ese, but I am pasting something I found in a different article here.<p>> To balance index integrity and investability, Nasdaq proposes a new approach for including and weighting low-float securities (those below 20% free float). Each low-float security’s weight will be adjusted to five times its free float percentage, capped at 100%. Securities with more than 20% free float will continue to be weighted at full, eligible listed market capitalization, while those below 20% free float will be weighted proportionally to preserve investability.<p>> The rule reportedly includes a 5x float multiplier for low-float stocks, which would require passive vehicles to treat SpaceX as if it had significantly more tradable shares than actually exist, essentially forcing funds to chase the price.<p>It sounds to me like a way to increase demand for low-float stocks by treating the float as higher than it actually is. Glad to hear any explanations about this.</p>
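<p>A rough sketch of the arithmetic as quoted above (my own reading of the rule, not from the filing): below 20% free float, the index weight uses 5x the free-float percentage of full market cap, capped at 100%; at or above 20%, full market cap is used.</p>

```python
def index_weight_fraction(free_float_pct: float) -> float:
    """Fraction of full listed market cap counted for index weighting,
    per the proposed low-float rule as quoted (an interpretation, not
    the official methodology)."""
    if free_float_pct >= 0.20:
        return 1.0  # full eligible listed market capitalization
    # 5x multiplier on the free-float percentage, capped at 100%
    return min(5 * free_float_pct, 1.0)
```

<p>So a stock with a 10% float would be weighted at 50% of its full market cap, far more than the 10% its tradable shares would suggest, which is the "chase the price" effect the quoted article describes.</p>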
]]></description><pubDate>Thu, 02 Apr 2026 17:42:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47617623</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47617623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47617623</guid></item><item><title><![CDATA[New comment by spprashant in "IPv6 address, as a sentence you can remember"]]></title><description><![CDATA[
<p>You are not supposed to worry about the mapping. You trust the website to help decode it. You just remember the sentence. It's a little like what3words for coordinates.<p>The rationale is that you are more likely to remember a grammatically cogent sentence than a random string of alphanumeric characters. Although I will agree that the generated sentences don't seem easy to remember, so I doubt its utility.</p>
]]></description><pubDate>Thu, 02 Apr 2026 01:10:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47608821</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47608821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47608821</guid></item><item><title><![CDATA[Ask HN: Are consumer AI boxes a viable idea?]]></title><description><![CDATA[
<p>I suspect at some point LLMs in their current form will be deemed good enough for general research and coding tasks. I don't get why we need to continue with a de-facto cloud-based approach. Cloud in my opinion solves operational complexity, which is worth paying a premium for. But it seems it isn't quite all that complex to get an open source model running locally as long as you have the hardware. Over time I suspect the models get better and cheaper.<p>Is there a future where we can expect people to just buy "AI" from Best Buy, like a TV set? It'll probably come with some model preloaded - cheaper if open-source, premium pricing for frontier lab models. The hardware is basically a bunch of GPUs, enough for local inference.<p>Take it home, plug it into your home network, and you can open a chat instance by going to its IP on any local device. You can give it access to the internet if you want. Maybe it can also receive OTA updates.<p>Curious how others think about this - does local-first AI feel like a possibility? What are the economic and social challenges with this?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47546796">https://news.ycombinator.com/item?id=47546796</a></p>
<p>Points: 5</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 27 Mar 2026 18:57:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47546796</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47546796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47546796</guid></item><item><title><![CDATA[New comment by spprashant in "ARC-AGI-3"]]></title><description><![CDATA[
<p>It's simple, but it's not easy, is what I would say. Once you figure out the meta, you can work out most of it.</p>
]]></description><pubDate>Wed, 25 Mar 2026 22:12:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47523973</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47523973</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47523973</guid></item><item><title><![CDATA[New comment by spprashant in "ARC-AGI-3"]]></title><description><![CDATA[
<p>I played the demo, but it definitely took me a minute to grok the rules.<p>I don't know if this is how we want to measure AGI.<p>In general I believe we should probably stop this pursuit of human-equivalent intelligence that encourages people to think of these models as human replacements. LLMs are clearly good at a lot of things; let's focus on how we can augment and empower the existing workforce.</p>
]]></description><pubDate>Wed, 25 Mar 2026 20:36:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47522855</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47522855</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47522855</guid></item><item><title><![CDATA[New comment by spprashant in "LaGuardia pilots raised safety alarms months before deadly runway crash"]]></title><description><![CDATA[
<p>I feel bad for the ATC officer. I hope they can find it in them to forgive themselves.</p>
]]></description><pubDate>Tue, 24 Mar 2026 19:37:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47507918</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47507918</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47507918</guid></item><item><title><![CDATA[New comment by spprashant in "Ask HN: How is AI-assisted coding going for you professionally?"]]></title><description><![CDATA[
<p>I only just started using it at work in the last month.<p>I am a data engineer maintaining a big-data Spark cluster as well as a dozen Postgres instances - all self-hosted.<p>I must confess it has made me extremely productive if we measure in terms of writing code. I don't even do a lot of special AGENTS.md/CLAUDE.md shenanigans; I just prompt CC, work on a plan, and then manually review the changes as it implements them.<p>Needless to say, this process only works well because:
A) I understand my code base. 
B) I have a mental structure of how I want to implement it.<p>Hence it is easy to keep the model and me in sync about what's happening.<p>For other aspects of my job I occasionally run questions by GPT/Gemini as a brainstorming partner, but it seems a lot less reliable. I only use it as a sounding board. It does not seem to make me any more effective at my job than simply reading documents or browsing GitHub issues/Stack Overflow myself.</p>
]]></description><pubDate>Sun, 15 Mar 2026 20:32:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47391569</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47391569</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47391569</guid></item><item><title><![CDATA[New comment by spprashant in "Elon Musk pushes out more xAI founders as AI coding effort falters"]]></title><description><![CDATA[
<p>He is re-building a company that he himself built less than 3 years ago?</p>
]]></description><pubDate>Fri, 13 Mar 2026 19:55:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47368988</link><dc:creator>spprashant</dc:creator><comments>https://news.ycombinator.com/item?id=47368988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47368988</guid></item></channel></rss>