<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sarthaksaxena</title><link>https://news.ycombinator.com/user?id=sarthaksaxena</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 11:38:06 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sarthaksaxena" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sarthaksaxena in "Show HN: Llmpm – NPM for LLMs"]]></title><description><![CDATA[
<p>Hi HN! We have built llmpm, a CLI package manager for open-source LLMs. You can now download and run 10,000+ free models with a single command.<p>The idea came from the friction we kept seeing when trying to run models locally. Setting up models often involves downloading weights, configuring runtimes, and figuring out which models are actually good for a given task.<p>With llmpm, the goal is to make models installable like packages:<p>llmpm install llama3<p>llmpm run llama3<p>We’ve also been working on the ability to package models with applications, so projects can declare model dependencies and reproduce the same setup easily.<p>Alongside the CLI, we’ve been experimenting with a model ranking and benchmarking tool on the website to help developers compare models across benchmarks and choose the right one before installing.<p>Check out the rankings at: <a href="https://llmpm.co/rankings" rel="nofollow">https://llmpm.co/rankings</a></p>
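One way the "declare model dependencies" idea could look is a small manifest checked into the project. This is a purely hypothetical sketch: the file name `llmpm.json` and every field in it are invented for illustration, and llmpm's actual packaging format may differ.

```json
{
  "name": "my-app",
  "models": {
    "llama3": "latest"
  }
}
```

A teammate could then, in principle, restore the same models with a single install command.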
]]></description><pubDate>Mon, 09 Mar 2026 16:32:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47311319</link><dc:creator>sarthaksaxena</dc:creator><comments>https://news.ycombinator.com/item?id=47311319</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47311319</guid></item><item><title><![CDATA[Show HN: Llmpm – NPM for LLMs]]></title><description><![CDATA[
<p>npm for LLMs — install, run, and share AI models.<p>We’ve built llmpm, a CLI tool that makes open-source LLMs installable like packages.<p>llmpm install llama3
llmpm run llama3<p>You can also package models with your projects so others can reproduce the same setup easily.<p>Website: <a href="https://llmpm.co" rel="nofollow">https://llmpm.co</a><p>GitHub: <a href="https://github.com/llmpm/llmpm-dev" rel="nofollow">https://github.com/llmpm/llmpm-dev</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47310770">https://news.ycombinator.com/item?id=47310770</a></p>
<p>Points: 6</p>
<p># Comments: 2</p>
]]></description><pubDate>Mon, 09 Mar 2026 15:58:18 +0000</pubDate><link>https://www.llmpm.co/</link><dc:creator>sarthaksaxena</dc:creator><comments>https://news.ycombinator.com/item?id=47310770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47310770</guid></item><item><title><![CDATA[New comment by sarthaksaxena in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p>I am building npm for LLM models: you can now install, run, and ship AI models.<p>It’s a CLI tool that makes open-source LLMs installable like packages.<p>llmpm install llama3<p>llmpm run llama3<p>You can also package models with your projects so others can reproduce the same setup easily.<p>Website: <a href="https://llmpm.co" rel="nofollow">https://llmpm.co</a><p>GitHub: <a href="https://github.com/llmpm/llmpm-dev" rel="nofollow">https://github.com/llmpm/llmpm-dev</a></p>
]]></description><pubDate>Mon, 09 Mar 2026 15:51:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47310671</link><dc:creator>sarthaksaxena</dc:creator><comments>https://news.ycombinator.com/item?id=47310671</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47310671</guid></item><item><title><![CDATA[Ask HN: How are you adapting your career in this AI era?]]></title><description><![CDATA[

<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47308653">https://news.ycombinator.com/item?id=47308653</a></p>
<p>Points: 12</p>
<p># Comments: 6</p>
]]></description><pubDate>Mon, 09 Mar 2026 13:15:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47308653</link><dc:creator>sarthaksaxena</dc:creator><comments>https://news.ycombinator.com/item?id=47308653</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47308653</guid></item><item><title><![CDATA[New comment by sarthaksaxena in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p>I am building a command line package manager for AI models. It’ll make installing and running models locally incredibly easy.<p>Check out: <a href="https://llmpm.co" rel="nofollow">https://llmpm.co</a></p>
]]></description><pubDate>Mon, 09 Mar 2026 12:29:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47308195</link><dc:creator>sarthaksaxena</dc:creator><comments>https://news.ycombinator.com/item?id=47308195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47308195</guid></item><item><title><![CDATA[New comment by sarthaksaxena in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p>Yes, indeed there is: run `llmpm serve <model_name>`, which will expose an API endpoint at http://localhost:8080/v1/chat/completions and also host a chat UI at https://localhost:8080/chat where you can interact with the locally running model.<p>Follow the docs here: <a href="https://www.llmpm.co/docs" rel="nofollow">https://www.llmpm.co/docs</a><p>Pro tip for your use case: check out the `llmpm serve` section</p>
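As a rough illustration of that endpoint, here is a minimal Python sketch that builds a chat-completions request body. It assumes the server exposed by `llmpm serve` speaks the common OpenAI-style chat-completions schema; the helper name `build_chat_request` and the exact field layout are illustrative assumptions, not llmpm's documented API.

```python
import json

# Hypothetical sketch: request body for an OpenAI-style
# /v1/chat/completions endpoint, which `llmpm serve` is described
# as exposing on http://localhost:8080. The field names follow the
# common chat-completions convention and may differ from llmpm's
# actual schema.
def build_chat_request(model, prompt):
    """Build a chat-completions request body as a plain dict."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Serialize the body; this is what would be POSTed to the endpoint.
payload = json.dumps(build_chat_request("llama3", "Hello!"))
```

With `llmpm serve llama3` running, this payload could be POSTed to http://localhost:8080/v1/chat/completions with any HTTP client.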
]]></description><pubDate>Mon, 09 Mar 2026 08:27:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47306216</link><dc:creator>sarthaksaxena</dc:creator><comments>https://news.ycombinator.com/item?id=47306216</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47306216</guid></item><item><title><![CDATA[New comment by sarthaksaxena in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p>Yeah :)! Just run `llmpm init` to start packaging your models along with your code.</p>
]]></description><pubDate>Mon, 09 Mar 2026 08:13:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47306122</link><dc:creator>sarthaksaxena</dc:creator><comments>https://news.ycombinator.com/item?id=47306122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47306122</guid></item><item><title><![CDATA[New comment by sarthaksaxena in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p><a href="https://llmpm.co" rel="nofollow">https://llmpm.co</a><p>I have built npm for LLM models, which lets you install & run 10,000+ open-source large language models within seconds. The idea is to make models installable like packages in your code:<p>llmpm install llama3<p>llmpm run llama3<p>You can also package large language models together with your code so projects can reproduce the same setup easily.<p>GitHub: <a href="https://github.com/llmpm/llmpm-dev" rel="nofollow">https://github.com/llmpm/llmpm-dev</a></p>
]]></description><pubDate>Mon, 09 Mar 2026 04:24:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304863</link><dc:creator>sarthaksaxena</dc:creator><comments>https://news.ycombinator.com/item?id=47304863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304863</guid></item></channel></rss>