<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: adam_patarino</title><link>https://news.ycombinator.com/user?id=adam_patarino</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 02:49:28 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=adam_patarino" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by adam_patarino in "A Claude Code skill that makes Claude talk like a caveman, cutting token use"]]></title><description><![CDATA[
<p>It’s batteries-included. No config.<p>We also fine-tuned and did RL on our model, developed a custom context engine, trained an embedding model, and modified MLX to improve inference.<p>Everything is built to work together. So it’s more like an Apple product than Linux: less config, but better optimized for the task.</p>
]]></description><pubDate>Sun, 05 Apr 2026 15:55:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47650704</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=47650704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47650704</guid></item><item><title><![CDATA[New comment by adam_patarino in "Talk like caveman"]]></title><description><![CDATA[
<p>Or you could use a local model where you’re not constrained by tokens. Like rig.ai</p>
]]></description><pubDate>Sun, 05 Apr 2026 12:33:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47648721</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=47648721</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47648721</guid></item><item><title><![CDATA[New comment by adam_patarino in "Ollama is now powered by MLX on Apple Silicon in preview"]]></title><description><![CDATA[
<p>Tell me more!
Thanks for the waitlist</p>
]]></description><pubDate>Tue, 31 Mar 2026 15:06:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47588429</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=47588429</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47588429</guid></item><item><title><![CDATA[New comment by adam_patarino in "Ollama is now powered by MLX on Apple Silicon in preview"]]></title><description><![CDATA[
<p>We think so too! That’s why we’re building rig.ai.
Given how token-intensive coding tasks can be, running locally allows for unlimited inference. That’s a much better fit than sending everything back and forth to a third party, not to mention the privacy and security benefits.</p>
]]></description><pubDate>Tue, 31 Mar 2026 12:51:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47586623</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=47586623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47586623</guid></item><item><title><![CDATA[New comment by adam_patarino in "iPhone 17 Pro Demonstrated Running a 400B LLM"]]></title><description><![CDATA[
<p>I’ve seen this story making the rounds and I’m not sure why it’s gotten so much traction. Is it just a good write-up?</p>
]]></description><pubDate>Tue, 24 Mar 2026 11:59:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47501372</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=47501372</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47501372</guid></item><item><title><![CDATA[New comment by adam_patarino in "If AI writes code, should the session be part of the commit?"]]></title><description><![CDATA[
<p>You check the plan files into git? Don’t you end up with dozens of .md files?<p>I’ve been copying and pasting the plan into the Linear issue or PR to save it, but keeping my codebase clean.</p>
]]></description><pubDate>Mon, 02 Mar 2026 13:44:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47217880</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=47217880</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47217880</guid></item><item><title><![CDATA[New comment by adam_patarino in "Qwen3.5 122B and 35B models offer Sonnet 4.5 performance on local computers"]]></title><description><![CDATA[
<p>The biggest gaps are not in hardware or model size. There’s a common fallacy in the industry: most people believe bigger is better, whether for model size, compute, or tools.<p>The reality in ML is that small models can outperform large ones on a narrow problem set.<p>The key is the narrow problem set. Opus can write you a poem, create a shopping list, and analyze your massive code base.<p>We trained our model to focus only on coding, with our specific agent harness, tools, and context engine. And it’s small enough to fit on an M2 with 16GB. It’s as good as Sonnet 4.5 and way better than qwen3.5:35b-a3b.<p>Our beta will be out soon: rig.ai</p>
]]></description><pubDate>Sun, 01 Mar 2026 12:48:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47206236</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=47206236</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47206236</guid></item><item><title><![CDATA[New comment by adam_patarino in "Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory"]]></title><description><![CDATA[
<p>This is not local. This is a wrapper. Rig.ai is a local model with local execution.</p>
]]></description><pubDate>Sun, 08 Feb 2026 13:00:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46933845</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46933845</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46933845</guid></item><item><title><![CDATA[New comment by adam_patarino in "The Codex App"]]></title><description><![CDATA[
<p>AI has more training data on web apps.
I think the real answer is that these teams are told they need to ship in two months and have a huge team of web devs.</p>
]]></description><pubDate>Tue, 03 Feb 2026 14:39:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46871514</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46871514</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46871514</guid></item><item><title><![CDATA[New comment by adam_patarino in "Clawdbot - open source personal AI assistant"]]></title><description><![CDATA[
<p>I feel like this is the silent majority. All the twitter hype is not representative of the real world.</p>
]]></description><pubDate>Tue, 27 Jan 2026 16:08:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46781844</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46781844</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46781844</guid></item><item><title><![CDATA[New comment by adam_patarino in "Clawdbot - open source personal AI assistant"]]></title><description><![CDATA[
<p>Yeah, but you're still using Anthropic's subscription and tokens. That's not really an alternative. That's why we're shipping our own model with cortex.build.</p>
]]></description><pubDate>Tue, 27 Jan 2026 16:04:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46781787</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46781787</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46781787</guid></item><item><title><![CDATA[New comment by adam_patarino in "Clawdbot - open source personal AI assistant"]]></title><description><![CDATA[
<p>I am trying so hard to understand wtf people are excited about. I have failed. Claude Code can run overnight or while I'm out.
Clawdbot looks like a great way to set tokens on fire.</p>
]]></description><pubDate>Tue, 27 Jan 2026 16:02:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46781763</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46781763</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46781763</guid></item><item><title><![CDATA[New comment by adam_patarino in "Qwen3-Max-Thinking"]]></title><description><![CDATA[
<p>ahem ... cortex.build<p>The current test version runs in 8GB at 60 tok/s. Lmk if you want to join our early tester group!</p>
]]></description><pubDate>Tue, 27 Jan 2026 15:55:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46781669</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46781669</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46781669</guid></item><item><title><![CDATA[New comment by adam_patarino in "OSS ChatGPT WebUI – 530 Models, MCP, Tools, Gemini RAG, Image/Audio Gen"]]></title><description><![CDATA[
<p>Yeah, I’ve hit this too. Once you do real agentic work or TDD, you’re optimizing context instead of code. That frustration is why we built Cortex: flat cost, no turn limits, runs locally, and git-aware context so you can just keep going. cortex.build</p>
]]></description><pubDate>Tue, 27 Jan 2026 15:49:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46781574</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46781574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46781574</guid></item><item><title><![CDATA[New comment by adam_patarino in "Unrolling the Codex agent loop"]]></title><description><![CDATA[
<p>That's interesting. I use those moments to show it what not to do. Does it not just repeat the mistakes?</p>
]]></description><pubDate>Mon, 26 Jan 2026 15:04:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46766487</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46766487</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46766487</guid></item><item><title><![CDATA[New comment by adam_patarino in "Unrolling the Codex agent loop"]]></title><description><![CDATA[
<p>Can’t you use git for that? I do that often and just revert changes. It does require me to commit often, but that’s probably good anyway.</p>
]]></description><pubDate>Sat, 24 Jan 2026 16:39:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46745010</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46745010</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46745010</guid></item><item><title><![CDATA[New comment by adam_patarino in "Unrolling the Codex agent loop"]]></title><description><![CDATA[
<p>I’ve never understood checkpoints / forks. 
When do you use them?</p>
]]></description><pubDate>Sat, 24 Jan 2026 14:59:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46744122</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46744122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46744122</guid></item><item><title><![CDATA[New comment by adam_patarino in "AI coding assistants are getting worse?"]]></title><description><![CDATA[
<p>This is why we HAVE to have a local option, and why we're building cortex.build.
It's based on a small language model we trained exclusively for coding. We combine it with tools and a context graph designed to be more consistent than what's available today, especially on large codebases.</p>
]]></description><pubDate>Mon, 19 Jan 2026 15:23:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46680003</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46680003</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46680003</guid></item><item><title><![CDATA[New comment by adam_patarino in "ASCII characters are not pixels: a deep dive into ASCII rendering"]]></title><description><![CDATA[
<p>Tell me someone has turned this into a library we can use</p>
]]></description><pubDate>Sat, 17 Jan 2026 12:29:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46657536</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46657536</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46657536</guid></item><item><title><![CDATA[New comment by adam_patarino in "Claude Cowork exfiltrates files"]]></title><description><![CDATA[
<p>What frustrates me is that Anthropic brags they built Cowork in 10 days. That doesn’t show the seriousness or care required for a product with access to my data.</p>
]]></description><pubDate>Thu, 15 Jan 2026 13:22:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46632110</link><dc:creator>adam_patarino</dc:creator><comments>https://news.ycombinator.com/item?id=46632110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46632110</guid></item></channel></rss>