<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: nsingh2</title><link>https://news.ycombinator.com/user?id=nsingh2</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 05:23:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=nsingh2" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by nsingh2 in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>It's going to be expensive to serve (also not generally available), considering they said it's the largest model they've ever trained.<p>I suspect it's going to be used to train/distill lighter models. The exciting part for me is the improvement in those lighter models.</p>
]]></description><pubDate>Tue, 07 Apr 2026 18:41:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47679560</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=47679560</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47679560</guid></item><item><title><![CDATA[New comment by nsingh2 in "OpenAI closes funding round at an $852B valuation"]]></title><description><![CDATA[
<p>> They were years ahead.<p>Considering how fast competitors caught up to them, I'm not convinced that OpenAI was years ahead. LLMs and transformers were known technology; OpenAI just accidentally productized them before others did (ChatGPT). That is not an advantage measured in years. Google, for example, could have caught up pretty easily (they invented the transformer architecture); I think it mostly came down to mismanagement that they flopped so hard with Bard. The biggest cost was high-quality data, and Google certainly had that, plus a budget for huge training runs. I really don't think OpenAI had any special sauce that put them years ahead.<p>One confounder here is that LLM scaling has started to hit diminishing returns recently, no more GPT3 -> GPT4/o1 jumps in recent times, making it easier to catch up to the SOTA.<p>That schism within the OpenAI leadership was ugly. And Sam Altman does seem a bit snakey to me. But I have no illusions about any company in this space, including Anthropic. None of these companies are moral, given what data these models are trained on.<p>> their competitors caught up and then passed them by<p>The different models are more capable in different aspects, but they are close enough that they leapfrog each other every few months.<p>> OpenAI is floundering and can't sustain their own burn rate.  Their competitors are thriving.<p>Google is thriving, sure, but not because of Gemini; it's because of their existing ads business. I would not say that about Anthropic; they seem to be struggling to provide enough compute (with the recent usage limit changes). It's hard to know what's happening funding-wise in these companies, and saying that their competitors are thriving is a stretch. And again, if the AI bubble pops, Anthropic is going to hurt along with OpenAI. Just not clear to what extent.</p>
]]></description><pubDate>Thu, 02 Apr 2026 02:16:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47609249</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=47609249</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47609249</guid></item><item><title><![CDATA[New comment by nsingh2 in "The Anti-Intellectualism of Silicon Valley Elites"]]></title><description><![CDATA[
<p>Seems like HN is doing something to combat this, considering how many [dead] comments I see in every post (which you can enable by setting `showdead` in your user profile).<p>I've only recently enabled it so I don't know how frequent dead comments were before the LLM era.</p>
]]></description><pubDate>Thu, 02 Apr 2026 01:59:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47609166</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=47609166</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47609166</guid></item><item><title><![CDATA[New comment by nsingh2 in "My son pleasured himself on Gemini Live. Entire family's Google accounts banned"]]></title><description><![CDATA[
<p>Really uncharitable take. I did stupid things at 14, and had more unrestricted internet access too.<p>> absent parent more concerned with his business than his son<p>I don't know how you came to this conclusion from the post.</p>
]]></description><pubDate>Wed, 01 Apr 2026 02:57:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47596241</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=47596241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47596241</guid></item><item><title><![CDATA[New comment by nsingh2 in "OpenAI closes funding round at an $852B valuation"]]></title><description><![CDATA[
<p>This is a big exaggeration. Codex is probably one of the top two LLM programming tools, along with Claude Code. GPT-5.4 models are strong, unlike the initial GPT-5 ones, which were comparatively bad, and can hold up against Opus 4.6. In my experience, they are better at analytical work.<p>I cannot really see how they are "far behind," or how some plugin for Claude Code is a "last desperate bid." The tools are close enough to each other that I regularly use Codex one month and Claude Code the next without much disruption, just to try out any new models or features that might be available.<p>I do not have much visibility into the non-code applications, so maybe it is stickier there.<p>If/when the AI bubble pops and takes OpenAI down with it, I would not expect Anthropic to come out unscathed either.</p>
]]></description><pubDate>Tue, 31 Mar 2026 21:50:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47593957</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=47593957</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47593957</guid></item><item><title><![CDATA[New comment by nsingh2 in "Claude loses its >99% uptime in Q1 2026"]]></title><description><![CDATA[
<p>Gemini CLI has been broken for the past 2-3 days, with no response from Google. Really embarrassing for a multi-trillion dollar company. At this point Codex is the only reliable CLI app, out of the big three.<p><a href="https://www.reddit.com/r/GeminiCLI/comments/1s49pag/this_is_taking_a_bit_longer_were_still_on_it_esc/" rel="nofollow">https://www.reddit.com/r/GeminiCLI/comments/1s49pag/this_is_...</a></p>
]]></description><pubDate>Fri, 27 Mar 2026 15:26:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47543854</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=47543854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47543854</guid></item><item><title><![CDATA[New comment by nsingh2 in "Claude Code adjusting down 5hr limits"]]></title><description><![CDATA[
<p>This morning I hit 100% 5hr usage on a task that took ~10% in the past. Looks like they are still testing the limits, but it seems over-tuned to me.<p>Also not great that they communicate this now, since people have been complaining about sudden and strange usage spikes for a few days with no response from Anthropic.</p>
]]></description><pubDate>Thu, 26 Mar 2026 20:41:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47535431</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=47535431</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47535431</guid></item><item><title><![CDATA[New comment by nsingh2 in "The bridge to wealth is being pulled up with AI"]]></title><description><![CDATA[
<p>>> More free time?<p>> Yes! Time we can reclaim from the mundane chores of life to do with as we choose! How could you not want that?<p>We already had a huge productivity boom these past decades, but wages flat-lined and the vast majority of the profits and surplus went to the top. Housing, education, and healthcare became less affordable, not more. History points against your simple view.<p>I'm not convinced that AI breaks that pattern. If anything, the concentration is worse this time. The capital required is huge, the technology is controlled by a handful of companies, and most applications are about replacing labor. That last part further erodes already meager worker bargaining power.<p>We need serious systemic change to get to the world you're envisioning, one where that congealed wealth starts flowing again.</p>
]]></description><pubDate>Tue, 24 Mar 2026 17:46:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47506481</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=47506481</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47506481</guid></item><item><title><![CDATA[New comment by nsingh2 in "Cursor Composer 2 is just Kimi K2.5 with RL"]]></title><description><![CDATA[
<p>The majority of Ask/Debug mode can be reproduced using skills. For copying code references, if you're using VS Code, you can look at plugins like [1], or even make your own.<p>Cursor's auto mode is flaky because you don't know which model they're routing you to, and it could be a smaller, worse model.<p>It's hard to see why paying a middleman for access to models would be cheaper than going directly to the model providers. I was a heavy Cursor user, and I've completely switched to Codex CLI or Claude Code. I don't have to deal with an older, potentially buggier version of VS Code, and I also have the option of not using VS Code at all.<p>One nice thing about Cursor is its code and documentation embedding. I don't know how much code embedding really helps, but documentation embedding is useful.<p>[1] <a href="https://marketplace.visualstudio.com/items?itemName=ezforo.copy-relative-path-and-line-numbers" rel="nofollow">https://marketplace.visualstudio.com/items?itemName=ezforo.c...</a></p>
]]></description><pubDate>Fri, 20 Mar 2026 15:25:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47455950</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=47455950</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47455950</guid></item><item><title><![CDATA[New comment by nsingh2 in "Roblox is minting teen millionaires"]]></title><description><![CDATA[
<p>From [1] (2022 numbers), the median creator earned around 50 Robux per year, which is ~19 cents at the current DevEx rate, and the average was 13,500 Robux.<p>Out of ~7.5 million creators in 2022, only 11,000 qualified for cashing out.<p>The distribution is brutal; realistically you have to stick with it for years before getting a hit, if ever. Not to mention the stats probably look worse in the LLM era. You definitely have to like doing it as a hobby.<p>One caveat is that the creator total likely includes a lot of casual experimentation. If many users make one or two games and then stop (I can see most kids doing this), the 7.5 million figure may overstate how many people are seriously trying to make money from it.<p>[1] <a href="https://about.roblox.com/newsroom/2023/07/vision-roblox-economy" rel="nofollow">https://about.roblox.com/newsroom/2023/07/vision-roblox-econ...</a></p>
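For reference, the conversion works out like this (the DevEx rate used here is an assumption implied by the ~19-cent figure, not an official number):

```python
# Assumed DevEx payout rate, implied by 50 Robux ~= $0.19.
DEVEX_USD_PER_ROBUX = 0.0038

def robux_to_usd(robux: float) -> float:
    """Convert earned Robux to US dollars at the assumed DevEx rate."""
    return robux * DEVEX_USD_PER_ROBUX

print(robux_to_usd(50))      # median creator: ~$0.19/year
print(robux_to_usd(13_500))  # average creator: ~$51/year
```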
]]></description><pubDate>Tue, 10 Mar 2026 22:48:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47329727</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=47329727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47329727</guid></item><item><title><![CDATA[New comment by nsingh2 in "GPT-5.4"]]></title><description><![CDATA[
<p>From what I've read online, it's not necessarily an unquantized version; it seems to go through longer reasoning traces and run multiple reasoning traces at once. Probably overkill for most tasks.</p>
]]></description><pubDate>Thu, 05 Mar 2026 18:59:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47265743</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=47265743</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47265743</guid></item><item><title><![CDATA[New comment by nsingh2 in "UK infants ill after drinking contaminated baby formula of Nestle and Danone"]]></title><description><![CDATA[
<p>> greed is normal and expected in a free market economy<p>OK, technically true, just like saying "water flows downhill" when someone's house is flooding. It isn't productive; the fact is well known.<p>"The system incentivizes this" and "this is good/bad" are two entirely different statements. One doesn't address the other [1], until you make a moral judgement about the outcome.<p>> You say this as if it's some deviant behaviour that needs correcting.<p>Is it moral and correct for infants to be fed contaminated baby formula? The mismatch between what <i>is</i> and what <i>ought</i> to be is deviance.<p>[1] <a href="https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem" rel="nofollow">https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem</a></p>
]]></description><pubDate>Sat, 07 Feb 2026 18:46:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46926370</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=46926370</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46926370</guid></item><item><title><![CDATA[New comment by nsingh2 in "Claude Opus 4.6"]]></title><description><![CDATA[
<p>Most of the value of Claude Code comes from the model, and that's not running on your device.<p>The Claude Code TUI itself is a front end, and should not be taking 3-4 seconds to load. That kind of loading time is around what VSCode takes on my machine, and VSCode is a full blown editor.</p>
]]></description><pubDate>Fri, 06 Feb 2026 16:43:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46915073</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=46915073</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46915073</guid></item><item><title><![CDATA[New comment by nsingh2 in "A few random notes from Claude coding quite a bit last few weeks"]]></title><description><![CDATA[
<p>This stuff is a little messy and opaque, but the performance of the same model in different harnesses depends a lot on how context is managed. The last time I tried Copilot, it performed markedly worse for similar tasks compared to Claude Code. I suspect that Copilot was being very aggressive in compressing context to save on token cost, but I'm not 100% certain about this.<p>Also note that with Claude models, Copilot might allocate a different number of thinking tokens compared to Claude Code.<p>Things may have changed now compared to when I tried it out, these tools are in constant flux. In general I've found that harnesses created by the model providers (OpenAI/Codex CLI, Anthropic/Claude Code, Google/Gemini CLI) tend to be better than generalist harnesses (cheaper too, since you're not paying a middleman).</p>
]]></description><pubDate>Tue, 27 Jan 2026 20:27:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46786060</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=46786060</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46786060</guid></item><item><title><![CDATA[New comment by nsingh2 in "Is OpenAI Dead Yet?"]]></title><description><![CDATA[
<p>In my own testing these models still have a different flavor to them:<p>- Opus 4.5 for software development. Works faster, and tends to write cleaner code.<p>- GPT 5.2 xHigh for mathematical analysis, and analysis in general (e.g. code review, planning, double checks); it's very meticulous.<p>- Gemini 3.0 Pro for image understanding, though this one I haven't played around with much.</p>
]]></description><pubDate>Tue, 27 Jan 2026 16:30:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46782208</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=46782208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46782208</guid></item><item><title><![CDATA[New comment by nsingh2 in "Project Cybersyn"]]></title><description><![CDATA[
<p>There is an argument to be made that companies like Walmart and Amazon operate as planned economies. They use the same cybernetic principles, real-time data monitoring and feedback loops, to solve logistics and planning. These implementations do lend credibility to Beer's ideas.<p>There is even a section about this in the wiki article:<p><a href="https://en.wikipedia.org/wiki/Project_Cybersyn#Contemporary_relevance" rel="nofollow">https://en.wikipedia.org/wiki/Project_Cybersyn#Contemporary_...</a><p><a href="https://en.wikipedia.org/wiki/The_People%27s_Republic_of_Walmart" rel="nofollow">https://en.wikipedia.org/wiki/The_People%27s_Republic_of_Wal...</a></p>
]]></description><pubDate>Mon, 19 Jan 2026 18:15:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46682512</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=46682512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46682512</guid></item><item><title><![CDATA[New comment by nsingh2 in "Trump says Venezuela’s Maduro captured after strikes"]]></title><description><![CDATA[
<p>> Let freedom ring<p>What happens if the Venezuelan people decide they want their oil profits to stay in Venezuela rather than flowing into oil company coffers? Will they have the "freedom" to choose that?<p>Don't get me wrong, Maduro being toppled is a positive in isolation, but it's still wait-and-see regarding what he gets replaced with.<p>"We’re going to have our very large US oil companies, the biggest anywhere in the world, go in, spend billions of dollars, fix the badly broken infrastructure, the oil infrastructure, and start making money for the country and we are ready to stage a second and much larger attack if we need to do so" [1]<p>[1] <a href="https://www.theguardian.com/us-news/2026/jan/03/trump-venezuela-oil-industry" rel="nofollow">https://www.theguardian.com/us-news/2026/jan/03/trump-venezu...</a></p>
]]></description><pubDate>Sat, 03 Jan 2026 20:08:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46481008</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=46481008</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46481008</guid></item><item><title><![CDATA[New comment by nsingh2 in "Trump says Venezuela’s Maduro captured after strikes"]]></title><description><![CDATA[
<p>Reasoning like this is part of the reason why history keeps repeating itself. It completely ignores how previous US-led decapitations turned out, and just hopes this time will be different.<p>It should not be contentious at this point: the US only cares about the geopolitical value of Venezuela, and if supporting another dictator helps towards this end, then that's what will happen.</p>
]]></description><pubDate>Sat, 03 Jan 2026 17:38:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46479335</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=46479335</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46479335</guid></item><item><title><![CDATA[New comment by nsingh2 in "C100 Developer Terminal"]]></title><description><![CDATA[
<p>That's quite the long esc key. An ortholinear layout would be nice too.</p>
]]></description><pubDate>Thu, 27 Nov 2025 00:16:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46063876</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=46063876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46063876</guid></item><item><title><![CDATA[New comment by nsingh2 in "Fara-7B: An efficient agentic model for computer use"]]></title><description><![CDATA[
<p>One quick way to estimate a lower bound is to take the number of parameters and multiply it by the bits per parameter. So a model with 7 billion parameters running with float8 types would be ~7 GB to load at a <i>minimum</i>. The attention mechanism requires more on top of that, depending on the size of the context window.<p>You'll also need to load the inputs (images in this case) into GPU memory, and that depends on the image resolution and batch size.</p>
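That lower bound is simple arithmetic; a minimal sketch (function name is just for illustration):

```python
def min_weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Lower-bound memory for model weights alone: params * bits / 8 bits-per-byte.

    Ignores the KV cache, activations, and input batches, which all add on top.
    """
    return n_params * bits_per_param / 8 / 1e9

# 7B parameters at float8 (8 bits each) -> ~7 GB just for the weights
print(min_weight_memory_gb(7e9, 8))
# The same model at float16 would need ~14 GB
print(min_weight_memory_gb(7e9, 16))
```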
]]></description><pubDate>Wed, 26 Nov 2025 23:13:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46063402</link><dc:creator>nsingh2</dc:creator><comments>https://news.ycombinator.com/item?id=46063402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46063402</guid></item></channel></rss>