<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: NyxVox</title><link>https://news.ycombinator.com/user?id=NyxVox</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 01:35:33 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=NyxVox" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by NyxVox in "Show HN: I built a tiny LLM to demystify how language models work"]]></title><description><![CDATA[
<p>Hm, I could actually try the training on my GPU. That's one of the things I want to try next. Maybe something a bit more complex than a fish :)</p>
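<p>Roughly what I have in mind is something like this minimal sketch (assuming PyTorch; the toy model and corpus here are stand-ins, not the code from the post):</p>
<pre><code># Minimal sketch: train a tiny character-level LM, on GPU if one is available.
# Assumes PyTorch; model and data are illustrative stand-ins.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

text = "hello world, this is a tiny corpus"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], device=device)

class TinyLM(nn.Module):
    def __init__(self, vocab, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

model = TinyLM(len(chars)).to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Next-character prediction: input is text[:-1], target is text[1:].
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(200):
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final loss on {device}: {loss.item():.3f}")
</code></pre>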
]]></description><pubDate>Mon, 06 Apr 2026 03:47:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656768</link><dc:creator>NyxVox</dc:creator><comments>https://news.ycombinator.com/item?id=47656768</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656768</guid></item><item><title><![CDATA[New comment by NyxVox in "Show HN: Apfel – The free AI already on your Mac"]]></title><description><![CDATA[
<p>I have a similar project running on Windows 11, since I am not an Apple user. A 3B model is actually OK for simple tasks and short contexts, and I strongly support completely local inference. I don't agree with the commenters who say that using cloud services is still fine.</p>
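<p>For reference, a minimal sketch of the kind of fully local setup I mean, using Hugging Face transformers (the model name is just an example; any small instruct model you already have on disk works the same way):</p>
<pre><code># Minimal sketch of fully local inference with a small model.
# The model id below is an example, not a specific recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-3B-Instruct"  # example: any ~3B model on disk
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Everything below runs on the local machine; no data leaves it.
prompt = "Summarize in one sentence why local inference matters."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
</code></pre>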
]]></description><pubDate>Sun, 05 Apr 2026 22:04:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47654387</link><dc:creator>NyxVox</dc:creator><comments>https://news.ycombinator.com/item?id=47654387</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47654387</guid></item></channel></rss>