<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: DrAwdeOccarim</title><link>https://news.ycombinator.com/user?id=DrAwdeOccarim</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 08:06:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=DrAwdeOccarim" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by DrAwdeOccarim in "Can I run AI locally?"]]></title><description><![CDATA[
<p>Totally doing this today! Have you tried OpenJarvis or NemoClaw (is that one out yet)? I want to use my computer “through” the LLM.</p>
]]></description><pubDate>Sat, 14 Mar 2026 14:46:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47377225</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=47377225</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47377225</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Personal Computer by Perplexity"]]></title><description><![CDATA[
<p>I want this, but with Nemotron Super 3 running locally (128 GB M5 Max MacBook Pro) as the thing I use the computer “through”. Does Goose AI aspire to, or already do, this? I just started working on it yesterday.</p>
]]></description><pubDate>Thu, 12 Mar 2026 10:44:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47348876</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=47348876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47348876</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Ask HN: How many of you hold an amateur radio license in your country?"]]></title><description><![CDATA[
<p>QSL! Boston, MA</p>
]]></description><pubDate>Fri, 06 Mar 2026 11:24:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47273624</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=47273624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47273624</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Ask HN: Local models to support home network infrastructure?"]]></title><description><![CDATA[
<p>OK, I'll look around. Thanks!</p>
]]></description><pubDate>Tue, 20 Jan 2026 17:07:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46694466</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=46694466</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46694466</guid></item><item><title><![CDATA[Ask HN: Local models to support home network infrastructure?]]></title><description><![CDATA[
<p>I have had a blast getting Claude Code to manage my home infrastructure. I have been against the cloud forever, so I have built a home setup that does a lot of cloud stuff: I run Resilio Sync for all my family's iOS photo backups, a local NAS to host my legally downloaded and owned movies and TV shows, and a bunch of Raspberry Pis doing things like running local Home Assistant Z-Wave and Zigbee sensors. The router, switches, and APs are all UniFi, same with all the cameras, doorbells, and VoIP. Again, all local first (except Talk, obvs).<p>As you can imagine, keeping entropy at bay across all these disparate systems takes time, of which I have less now that I have young kids. So when Claude Code was released, I took to it like a fish to water. We mapped my entire network, and I created accounts on all the devices so it can SSH in and configure everything (including the Ubiquiti Dream Machine Pro!). I have been blown away by how well it troubleshoots and fixes things.<p>I have a DGX Spark AI workstation (128 GB of memory), and I now want to hand the work off to a local model, using the Opencode or Claude Code harness and simply pointing it at a vLLM-instantiated model accessible by API (just point Opencode or Claude Code at the local IP and API endpoint).<p>It works, except I tried Qwen3-Coder just now and it's refusing to help due to security concerns. Ugh. I then tried GLM-4.7-Flash, but vLLM doesn't support it yet, so before I rebuild (ask Claude Code to rebuild and deploy) to try GLM-4.7-Flash with some other inference provider: does anyone have a model they use for infrastructure maintenance that isn't a little bitch? I will probably eventually go to an abliterated model if none of the open-source ones will help.</p>
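<p>For anyone wanting to try the same wiring: vLLM exposes an OpenAI-compatible REST API (default port 8000), so any harness that accepts a custom base URL can talk to it. A minimal sketch of the request shape — the LAN IP and model name here are made-up placeholders, not my actual setup:</p>

```python
import json
from urllib import request

# Assumed: vLLM serving its OpenAI-compatible API on the workstation's LAN
# address (vLLM's default port is 8000). Both values below are hypothetical.
BASE_URL = "http://192.168.1.50:8000/v1"

def chat_request(model, prompt):
    """Build the JSON POST body for a /v1/chat/completions call."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()

def ask(model, prompt):
    """Send one chat turn to the local endpoint and return the reply text."""
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=chat_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

<p>A harness like Opencode just needs that base URL and model name in its provider config; nothing model-specific is required on the client side.</p>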
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46690846">https://news.ycombinator.com/item?id=46690846</a></p>
<p>Points: 6</p>
<p># Comments: 3</p>
]]></description><pubDate>Tue, 20 Jan 2026 11:49:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46690846</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=46690846</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46690846</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Floppy disks turn out to be the greatest TV remote for kids"]]></title><description><![CDATA[
<p><a href="https://thepihut.com/products/highpi-raspberry-pi-b-plus2-case" rel="nofollow">https://thepihut.com/products/highpi-raspberry-pi-b-plus2-ca...</a></p>
]]></description><pubDate>Mon, 12 Jan 2026 17:52:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46591844</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=46591844</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46591844</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Floppy disks turn out to be the greatest TV remote for kids"]]></title><description><![CDATA[
<p>I love this! I really wanted to go down this road when my kids were younger, but the paucity of floppies and their low storage made me go the route of Avery business-card printouts with RFID stickers on the back and a Raspberry Pi with an RFID reader inside. Of course, the author is using the floppies as hooks instead of as storage media...what a great idea. The tactile response and the art you can stick on them make them ideal for this purpose.</p>
]]></description><pubDate>Mon, 12 Jan 2026 14:29:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46588984</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=46588984</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46588984</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "A guide to local coding models"]]></title><description><![CDATA[
<p>I use Opus 4.5 and GPT 5.2-Codex through VS Code all day long, and the closest I've come locally is Devstral-Small-2-24B-Instruct-2512 inferring on a DGX Spark, hosted with vLLM as an "OpenAI-compatible" API endpoint that I use to power the Cline VS Code extension.<p>It works, but it's slow. Much more like set it up, come back in an hour, and it's done. I am incredibly impressed by it. There are quantized GGUFs and MLXs of the 123B, which can fit on my M3 36GB MacBook, that I haven't tried yet.<p>But overall it feels about 50% too slow, which blows my mind, because we are probably 9 months away from a local model that is fast and good enough for my script-kiddie work.</p>
]]></description><pubDate>Mon, 22 Dec 2025 12:06:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46353514</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=46353514</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46353514</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Qwen3-VL can scan two-hour videos and pinpoint nearly every detail"]]></title><description><![CDATA[
<p>Does anyone know how this was actually done? Did they export every frame as a PNG and run each one through the model, or did they somehow "load" the video into the model directly (which then internally steps through the frames)?</p>
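<p>Partially answering my own question: as I understand it, VL models don't ingest the video container directly — a preprocessing step samples frames at a fixed rate (often around 1 fps) and caps the total so it fits the context window. A toy sketch of that sampling math; the function name and the 768-frame cap are made up for illustration:</p>

```python
def sample_frame_indices(total_frames, video_fps, sample_fps=1.0, max_frames=768):
    """Pick frame indices at roughly `sample_fps` frames/sec, capped at `max_frames`."""
    # Step between sampled frames, e.g. every 30th frame of a 30 fps video at 1 fps.
    step = max(1, round(video_fps / sample_fps))
    idx = list(range(0, total_frames, step))
    if len(idx) > max_frames:
        # Too many frames for the context budget: thin uniformly across the video.
        stride = len(idx) / max_frames
        idx = [idx[int(i * stride)] for i in range(max_frames)]
    return idx
```

<p>So a two-hour 30 fps video (216,000 frames) would first sample 7,200 frames at 1 fps, then thin those uniformly down to the cap before encoding.</p>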
]]></description><pubDate>Wed, 03 Dec 2025 15:45:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46135811</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=46135811</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46135811</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Radios, how do they work? (2024)"]]></title><description><![CDATA[
<p>Do yourself a favor and study for both your technician and general at the same time (I’m assuming you live in the US). HF is exponentially more fun than just VHF/UHF.</p>
]]></description><pubDate>Thu, 23 Oct 2025 10:18:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45680252</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=45680252</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45680252</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Inflammation now predicts heart disease more strongly than cholesterol"]]></title><description><![CDATA[
<p>I’m surprised by your strong wording.
>”… severe side-effects and likely consequences of long-term statin use, even low doses.”<p>Could you provide a few reputable clinical outcomes studies to support your statements? I was unaware of these risks.</p>
]]></description><pubDate>Wed, 01 Oct 2025 10:23:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45436137</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=45436137</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45436137</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "RNA structure prediction is hard. How much does that matter?"]]></title><description><![CDATA[
<p>I don’t disagree with your point, but I would just like to point out that there are over 100 known post-transcriptionally modified RNA bases [1]. In fact, tRNAs taken as a whole contain more modified bases than canonical ones. AND! The ribosome can’t function without all of its modifications. If I were to put money toward “targeting an RNA to make a drug,” rRNA is where I’d aim…<p>Source: PhD in RNA modifications<p>[1] <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC9073955/" rel="nofollow">https://pmc.ncbi.nlm.nih.gov/articles/PMC9073955/</a></p>
]]></description><pubDate>Sat, 27 Sep 2025 00:33:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45392418</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=45392418</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45392418</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "iPhone Air"]]></title><description><![CDATA[
<p>LM Studio lets you run a model as a local API (OpenAI-compatible REST server).</p>
]]></description><pubDate>Thu, 11 Sep 2025 10:27:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45209896</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=45209896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45209896</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "iPhone Air"]]></title><description><![CDATA[
<p>Yes, I use LM Studio daily with Qwen 3 30b a3b. I can't believe how good it is locally.</p>
]]></description><pubDate>Wed, 10 Sep 2025 18:35:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45201864</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=45201864</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45201864</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Experimenting with Local LLMs on macOS"]]></title><description><![CDATA[
<p>Yes. LM Studio acts like an OpenAI endpoint when you turn the server on.</p>
]]></description><pubDate>Tue, 09 Sep 2025 22:13:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45190056</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=45190056</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45190056</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Experimenting with Local LLMs on macOS"]]></title><description><![CDATA[
<p>LM Studio. I just vibe code the nodeJS code.</p>
]]></description><pubDate>Tue, 09 Sep 2025 12:57:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45181329</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=45181329</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45181329</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Experimenting with Local LLMs on macOS"]]></title><description><![CDATA[
<p>I adore Qwen 3 30b a3b 2507. It's pretty easy to write an MCP server that lets it search the web with a Brave API key. I run it on my MacBook Pro M3 Pro, 36 GB.</p>
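<p>MCP plumbing aside, the Brave call itself is just one authenticated GET. A sketch of the request builder, assuming Brave's documented web-search endpoint and `X-Subscription-Token` header (check their API docs before relying on either):</p>

```python
from urllib.parse import urlencode

# Brave's web-search endpoint, per their API documentation.
BRAVE_ENDPOINT = "https://api.search.brave.com/res/v1/web/search"

def brave_search_request(query, api_key, count=5):
    """Build the URL and headers for a Brave web-search GET request."""
    url = f"{BRAVE_ENDPOINT}?{urlencode({'q': query, 'count': count})}"
    headers = {
        "Accept": "application/json",
        "X-Subscription-Token": api_key,  # your Brave API key
    }
    return url, headers
```

<p>The MCP tool then just fires that request and hands the JSON results back to the model as the tool response.</p>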
]]></description><pubDate>Mon, 08 Sep 2025 22:52:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45175163</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=45175163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45175163</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Experimenting with Local LLMs on macOS"]]></title><description><![CDATA[
<p>Mistral Small 3.2 Q4_K_M and Gemma 3 12B 4-bit are amazing. I run both in LM Studio on a MacBook Pro M3 Pro with 36 GB of RAM.</p>
]]></description><pubDate>Mon, 08 Sep 2025 22:50:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45175153</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=45175153</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45175153</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Cline and LM Studio: the local coding stack with Qwen3 Coder 30B"]]></title><description><![CDATA[
<p>The author says 36GB unified RAM in the article. I run the same-memory M3 Pro with LM Studio daily, with various models up to the 30B-parameter one listed, and it flies. I can’t differentiate my OpenAI chats from the local ones aside from modern context, though I have a Puppeteer MCP that works well for web search and site reading.</p>
]]></description><pubDate>Sun, 31 Aug 2025 23:46:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=45088127</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=45088127</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45088127</guid></item><item><title><![CDATA[New comment by DrAwdeOccarim in "Ask HN: The government of my country blocked VPN access. What should I use?"]]></title><description><![CDATA[
<p>I’m not sure that’s super feasible any longer with the advent of cheap SDRs. Over-the-horizon HF broadcast can be heard with a simple speaker wire antenna inside your house. If anyone is interested in trying to deploy such an idea, I’d love to participate as an avid ham.</p>
]]></description><pubDate>Thu, 28 Aug 2025 23:33:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45058217</link><dc:creator>DrAwdeOccarim</dc:creator><comments>https://news.ycombinator.com/item?id=45058217</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45058217</guid></item></channel></rss>