<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: abhikul0</title><link>https://news.ycombinator.com/user?id=abhikul0</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 15:31:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=abhikul0" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by abhikul0 in "A True Life Hack: What Physical 'Life Force' Turns Biology's Wheels?"]]></title><description><![CDATA[
<p>Relevant Smarter Every Day video: <a href="https://www.youtube.com/watch?v=VPSm9gJkPxU" rel="nofollow">https://www.youtube.com/watch?v=VPSm9gJkPxU</a></p>
]]></description><pubDate>Thu, 23 Apr 2026 06:32:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47872782</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47872782</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47872782</guid></item><item><title><![CDATA[New comment by abhikul0 in "Qwen3.6-35B-A3B: Agentic Coding Power, Now Open to All"]]></title><description><![CDATA[
<p>I'll try that, but llama-server has mmap on by default and the model still takes up its full size in RAM; not sure what's going on.</p>
]]></description><pubDate>Thu, 16 Apr 2026 15:09:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47794323</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47794323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47794323</guid></item><item><title><![CDATA[New comment by abhikul0 in "Qwen3.6-35B-A3B: Agentic coding power, now open to all"]]></title><description><![CDATA[
<p>Of course the swap is there as a fallback, but I hate using it lol since I don't want to degrade SSD longevity.</p>
]]></description><pubDate>Thu, 16 Apr 2026 14:39:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47793672</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47793672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47793672</guid></item><item><title><![CDATA[New comment by abhikul0 in "Qwen3.6-35B-A3B: Agentic coding power, now open to all"]]></title><description><![CDATA[
<p>For the 9B model, I can use the full context with Q8_0 KV. This uses around 16GB, while still leaving comfortable headroom.<p>Output after I exit the llama-server command:<p><pre><code>  llama_memory_breakdown_print: | memory breakdown [MiB]  | total    free     self   model   context   compute    unaccounted |
  llama_memory_breakdown_print: |   - MTL0 (Apple M3 Pro) | 28753 = 14607 + (14145 =  6262 +    4553 +    3329) +           0 |
  llama_memory_breakdown_print: |   - Host                |                   2779 =   666 +       0 +    2112                |</code></pre></p>
]]></description><pubDate>Thu, 16 Apr 2026 14:36:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47793607</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47793607</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47793607</guid></item><item><title><![CDATA[New comment by abhikul0 in "Qwen3.6-35B-A3B: Agentic coding power, now open to all"]]></title><description><![CDATA[
<p>Macs have unified memory, so 36GB is 36GB for everything: GPU and CPU.</p>
]]></description><pubDate>Thu, 16 Apr 2026 14:23:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47793413</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47793413</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47793413</guid></item><item><title><![CDATA[New comment by abhikul0 in "Qwen3.6-35B-A3B: Agentic coding power, now open to all"]]></title><description><![CDATA[
<p>A usable quant, Q5_K_M imo, takes up ~26GB[0], which leaves around 6-7GB for context and for running other programs, which is not much.<p>[0] <a href="https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF?show_file_info=Qwen3.5-35B-A3B-Q5_K_M.gguf" rel="nofollow">https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF?show_fil...</a></p>
]]></description><pubDate>Thu, 16 Apr 2026 14:20:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47793359</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47793359</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47793359</guid></item><item><title><![CDATA[New comment by abhikul0 in "Qwen3.6-35B-A3B: Agentic coding power, now open to all"]]></title><description><![CDATA[
<p>I hope the other sizes are coming too (9B for me). Can't fit much context with this on a 36GB Mac.</p>
]]></description><pubDate>Thu, 16 Apr 2026 14:06:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47793176</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47793176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47793176</guid></item><item><title><![CDATA[New comment by abhikul0 in "The local LLM ecosystem doesn’t need Ollama"]]></title><description><![CDATA[
<p>Have you ever gone to the model registry and seen that a model was recently updated? What was updated? What changed? Should I re-download this 20GB file?<p>I guess if you're not frustrated with things like this then sure, no need to stop using it.</p>
]]></description><pubDate>Thu, 16 Apr 2026 10:39:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47791168</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47791168</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47791168</guid></item><item><title><![CDATA[New comment by abhikul0 in "The local LLM ecosystem doesn’t need Ollama"]]></title><description><![CDATA[
<p><a href="https://en.wikipedia.org/wiki/List_of_generic_and_genericized_trademarks" rel="nofollow">https://en.wikipedia.org/wiki/List_of_generic_and_genericize...</a></p>
]]></description><pubDate>Thu, 16 Apr 2026 10:27:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47791090</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47791090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47791090</guid></item><item><title><![CDATA[New comment by abhikul0 in "Costasiella kuroshimae"]]></title><description><![CDATA[
<p>Real Science video on the slug: <a href="https://www.youtube.com/watch?v=IH_uv4h2xYM" rel="nofollow">https://www.youtube.com/watch?v=IH_uv4h2xYM</a></p>
]]></description><pubDate>Thu, 16 Apr 2026 08:38:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47790316</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47790316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47790316</guid></item><item><title><![CDATA[New comment by abhikul0 in "Ask HN: Can't view public pull requests without logging in (GitHub)"]]></title><description><![CDATA[
<p>I thought GitHub was being wonky, but yeah, getting a 401 Unauthorized error.<p>Edit: Can't view discussions either.</p>
]]></description><pubDate>Wed, 15 Apr 2026 14:53:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47779907</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47779907</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47779907</guid></item><item><title><![CDATA[New comment by abhikul0 in "God Sleeps in the Minerals"]]></title><description><![CDATA[
<p>And they say: <a href="https://www.youtube.com/watch?v=IfpMknlL-pg" rel="nofollow">https://www.youtube.com/watch?v=IfpMknlL-pg</a></p>
]]></description><pubDate>Wed, 15 Apr 2026 14:29:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47779513</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47779513</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47779513</guid></item><item><title><![CDATA[New comment by abhikul0 in "In the brain, objects seen and imagined follow the same neural path"]]></title><description><![CDATA[
<p>>Scientists believe people with aphantasia use words or concepts to recall what they've seen.<p>Like text-encoder-assisted diffusion models?</p>
]]></description><pubDate>Tue, 14 Apr 2026 15:41:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47767120</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47767120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47767120</guid></item><item><title><![CDATA[New comment by abhikul0 in "Tell HN: Reddit now demands to know why you won't use their app"]]></title><description><![CDATA[
<p>When the app only shows posts that are more than 10 hours old even when sorting by "hot", and shoves the algorithmic feed onto the app's home page, how are people still using it?<p>Lately I've only been visiting a few subs I'm interested in and keeping them open in Safari with uBlock; it's been a far better experience. This has drastically cut my Reddit time, and if I do want to mindlessly scroll, I just use redlib (hosted in Docker, or one of their public instances)[0]. It has the same "sort" that's used on the desktop site.<p>[0] <a href="https://github.com/redlib-org/redlib" rel="nofollow">https://github.com/redlib-org/redlib</a></p>
]]></description><pubDate>Mon, 13 Apr 2026 06:37:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47748438</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47748438</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47748438</guid></item><item><title><![CDATA[New comment by abhikul0 in "Floating point from scratch: Hard Mode"]]></title><description><![CDATA[
<p>I worked with retiming using DC Compiler on an ASIC implementation. I remember a lot of back and forth; sometimes the tool just doesn't add enough registers to meet the constraint, so I had to test variable register depths. This was a design that used Synopsys DesignWare for FP ops lol.</p>
]]></description><pubDate>Tue, 07 Apr 2026 15:09:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47676591</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47676591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47676591</guid></item><item><title><![CDATA[New comment by abhikul0 in "Every GPU That Mattered"]]></title><description><![CDATA[
<p>The 9400 GT mattered to me as it was my first GPU. I had bought NFS Carbon only to find that the home PC only had a CD drive, not a DVD drive lol, so along with that drive upgrade came the 9400 GT, and fun ensued.</p>
]]></description><pubDate>Tue, 07 Apr 2026 11:30:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47673552</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47673552</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47673552</guid></item><item><title><![CDATA[New comment by abhikul0 in "Are We Idiocracy Yet?"]]></title><description><![CDATA[
<p>More at <a href="https://www.reddit.com/r/tragedeigh/" rel="nofollow">https://www.reddit.com/r/tragedeigh/</a></p>
]]></description><pubDate>Tue, 07 Apr 2026 11:07:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47673362</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47673362</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47673362</guid></item><item><title><![CDATA[New comment by abhikul0 in "Google releases Gemma 4 open models"]]></title><description><![CDATA[
<p>Thanks for this release! Any reason why the 12B variant was skipped this time? I was looking forward to a competitor to Qwen3.5 9B, as it allows for a good agentic flow without taking up a whole lotta VRAM. I guess E4B is taking its place.</p>
]]></description><pubDate>Thu, 02 Apr 2026 17:16:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47617245</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47617245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47617245</guid></item><item><title><![CDATA[New comment by abhikul0 in "The Windows equivalents of the most used Linux commands"]]></title><description><![CDATA[
<p>Or you can just prepend `wsl` to the Linux command you want to run; of course, only if you have WSL set up.<p><a href="https://learn.microsoft.com/en-us/windows/wsl/filesystems#run-linux-tools-from-a-windows-command-line" rel="nofollow">https://learn.microsoft.com/en-us/windows/wsl/filesystems#ru...</a></p>
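<p>For example, from a Windows command prompt (assuming WSL is already installed; the file names here are just placeholders):<p><pre><code>  REM run a single Linux command on a Windows file
  C:\project> wsl grep -n "TODO" notes.txt

  REM pipe Windows command output into a Linux tool
  C:\project> dir /b | wsl sort</code></pre></p>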
]]></description><pubDate>Thu, 02 Apr 2026 09:39:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47612104</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47612104</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47612104</guid></item><item><title><![CDATA[New comment by abhikul0 in "Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised"]]></title><description><![CDATA[
<p>Well, I reinstalled and finally upgraded to Tahoe.</p>
]]></description><pubDate>Wed, 25 Mar 2026 05:34:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47513683</link><dc:creator>abhikul0</dc:creator><comments>https://news.ycombinator.com/item?id=47513683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47513683</guid></item></channel></rss>