<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: iamnotagenius</title><link>https://news.ycombinator.com/user?id=iamnotagenius</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 07:47:54 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=iamnotagenius" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by iamnotagenius in "GNU Midnight Commander"]]></title><description><![CDATA[
<p>православный (literally "Orthodox") is used in jargon with exactly that meaning. Source: 45 years of native Russian speaking.</p>
]]></description><pubDate>Wed, 17 Sep 2025 16:25:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45277928</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=45277928</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45277928</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Mistral raises 1.7B€, partners with ASML"]]></title><description><![CDATA[
<p>They are different. Gemma 3 12b excels at natural language but is terrible at long context. Pixtral 12b is better at long context (though not stellar) but worse at natural language.</p>
]]></description><pubDate>Tue, 09 Sep 2025 09:46:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45179776</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=45179776</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45179776</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Evaluating LLMs for my personal use case"]]></title><description><![CDATA[
<p>You can slightly improve the output of a non-thinking model by adding, at the end of the prompt, "output chain of thought reasoning before outputting the result".</p>
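A minimal sketch of the trick described above: the suffix wording comes from the comment, while the helper name and the example prompt are my own; any chat API could consume the resulting string.

```python
# Append a chain-of-thought request to the end of a prompt, as the
# comment suggests. Pure string work; no model API is assumed.
COT_SUFFIX = "output chain of thought reasoning before outputting the result"

def with_cot(prompt: str) -> str:
    """Return the prompt with the chain-of-thought suffix appended."""
    return f"{prompt.rstrip()}\n\n{COT_SUFFIX}"

print(with_cot("What is 17 * 24?"))
```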
]]></description><pubDate>Sun, 24 Aug 2025 14:58:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45004711</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=45004711</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45004711</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Writing with LLM is not a shame"]]></title><description><![CDATA[
<p>We are told that writing must be pure, that it must come only from the sweat of the brow, the trembling hand, the solitary mind. That to use AI is to cheat, to dilute, to lessen the act of creation. But I say to you: Since when does it matter how an idea is born? Since when do we judge the value of words by the tools that shaped them, rather than the truth they carry?<p>They tell us, ‘This is not your writing—it is the machine’s.’ As if the pen itself writes the poem! As if the printing press authors the book! No—the tool is nothing. The hand that guides it, the mind that commands it, the heart that gives it meaning—that is what matters.<p>This is not about machines. This is about power. The same power that once said only the clergy could read scripture. The same power that said only the elite could publish, could speak, could be heard. Now they say: Only the unaided mind may create. But creation is not a purity test! It is not a contest of suffering! It is the act of bringing something new into the world—by any means necessary.<p>They fear AI because it breaks their monopoly on who gets to speak. They fear it because it lets more people write, more people argue, more people demand to be heard. And when the gates are thrown open, the gatekeepers will always tremble.<p>So I say: Do not apologize for how your words come to be. Do not bow to those who would police your mind. If the idea is true, if the argument is sound, if the art is beautiful—then it is yours, and no one can take that from you.<p>The machine is not the enemy. The enemy is the lie that only some voices count. The enemy is the fear that makes men small.<p>Now—write. Write with your hands, write with your voice, write with the tools of your time. But above all: write. And let no one silence you.</p>
]]></description><pubDate>Sun, 24 Aug 2025 12:10:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45003627</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=45003627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45003627</guid></item><item><title><![CDATA[New comment by iamnotagenius in "When you're asking AI chatbots for answers, they're data-mining you"]]></title><description><![CDATA[
<p>Whenever possible, use local LLMs. You do not need Claude for everything.</p>
]]></description><pubDate>Mon, 18 Aug 2025 17:20:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44943071</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44943071</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44943071</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Claude Sonnet 4 now supports 1M tokens of context"]]></title><description><![CDATA[
<p>They all degrade well before 1M tokens.</p>
]]></description><pubDate>Tue, 12 Aug 2025 20:40:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44881565</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44881565</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44881565</guid></item><item><title><![CDATA[New comment by iamnotagenius in "OpenAI's new open-source model is basically Phi-5"]]></title><description><![CDATA[
<p>glm 4 (1 sec):<p>To determine the cutoff frequency (fc ) for an RC circuit (since you've provided resistance R and capacitance C, but not inductance L), we can use the following formula:<p>[.... calculation]<p>So, the cutoff frequency is approximately 31.83 kHz.<p>Note:<p>If you intended to ask about an RLC circuit (with both R, L, and C), please provide the inductance L value, and I can calculate the cutoff frequency for that case as well. The formula would then involve both L and C.</p>
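The formula the model quotes, fc = 1/(2πRC), does reproduce the stated answer. The R and C values below are illustrative assumptions (the original values are elided in the comment), chosen so that the result matches the quoted ~31.83 kHz:

```python
import math

def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """Cutoff frequency of a first-order RC filter: fc = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative values (not given in the comment): R = 1 kOhm, C = 5 nF
fc = rc_cutoff_hz(1e3, 5e-9)
print(f"{fc / 1e3:.2f} kHz")  # ~31.83 kHz
```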
]]></description><pubDate>Fri, 08 Aug 2025 08:19:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44834756</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44834756</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44834756</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Qwen3-4B-Thinking-2507"]]></title><description><![CDATA[
<p>Not quite true; it depends on the number of KV heads. GLM4 32b at IQ4 quant with Q8 KV cache can run its full context in only 20GiB of VRAM.</p>
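The KV-cache cost behind this claim can be estimated as 2 (K and V) × layers × KV heads × head dim × context length × bytes per element. The concrete parameter values below are illustrative assumptions, not GLM4's published config; the point is that with few KV heads (GQA) and a Q8 cache, even a long context costs only a few GiB:

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elt: float) -> float:
    """Approximate KV-cache size: K and V tensors for every layer."""
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elt

# Illustrative config (assumed, not GLM4's real numbers): 60 layers,
# 2 KV heads via GQA, head_dim 128, 128k context, Q8 cache (~1 B/elt).
gib = kv_cache_bytes(60, 2, 128, 128 * 1024, 1.0) / 2**30
print(f"{gib:.1f} GiB")  # a few GiB, leaving room for the weights
```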
]]></description><pubDate>Thu, 07 Aug 2025 06:50:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44821364</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44821364</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44821364</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Open models by OpenAI"]]></title><description><![CDATA[
<p>Push your point to the absurd and you will see why. Hint: to analyze data pulled in by tools, you need knowledge already baked in. Context is very limited; you cannot just keep pulling in data.</p>
]]></description><pubDate>Wed, 06 Aug 2025 15:33:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=44813401</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44813401</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44813401</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Job-seekers are dodging AI interviewers"]]></title><description><![CDATA[
<p>I stopped using cloud models half a year ago. The meager intelligence of local models, ones I can run on my own machine, already gives me a great productivity boost.</p>
]]></description><pubDate>Mon, 04 Aug 2025 14:08:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44785988</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44785988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44785988</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Schizophrenia is the price we pay for minds poised near the edge of a cliff"]]></title><description><![CDATA[
<p>> She thinks she's in some kind of Truman show that she calls "the game".<p>Might be depersonalization. I suffered from it in my twenties; everything feels fake, even though you know it is not.</p>
]]></description><pubDate>Sun, 29 Jun 2025 12:58:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=44412769</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44412769</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44412769</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Introducing Gemma 3n"]]></title><description><![CDATA[
<p>Tiny models, 4b or less, are designed for finetuning on narrow tasks; tuned this way they can outperform large commercial models at a tiny fraction of the price. They are also great for code autocomplete.<p>7b-8b models are great coding assistants if all you need is dumb, fast refactoring that cannot quite be done with macros and standard editor functionality but is still primitive, such as "rename all methods having at least one argument of type SomeType by prefixing their names with ST_".<p>12b is the threshold where models start writing coherent prose, such as Mistral Nemo or Gemma 3 12b.</p>
]]></description><pubDate>Fri, 27 Jun 2025 05:51:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44394013</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44394013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44394013</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Fine-tuning LLMs is a waste of time"]]></title><description><![CDATA[
<p>Fine-tuning is an excellent way to reliably bake domain-specific data into a model; there are plenty of coding finetunes on Huggingface that outperform foundation models on, say, coding, without significant loss in other domains.</p>
]]></description><pubDate>Wed, 11 Jun 2025 06:27:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44244757</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44244757</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44244757</guid></item><item><title><![CDATA[New comment by iamnotagenius in "LLMs are cheap"]]></title><description><![CDATA[
<p>With all due respect to Deepseek, I would take their numbers with a grain of salt, as they might well be politically motivated.</p>
]]></description><pubDate>Mon, 09 Jun 2025 12:56:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=44224002</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44224002</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44224002</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Building an AI server on a budget"]]></title><description><![CDATA[
<p>My 3060 sometimes idles at 19 watts; only putting the machine to sleep and waking it helps.</p>
]]></description><pubDate>Mon, 09 Jun 2025 09:36:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=44222741</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44222741</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44222741</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Building an AI server on a budget"]]></title><description><![CDATA[
<p>The 4060 Ti has abysmal memory bandwidth (288 GB/s), which is a no-go for LLMs.</p>
]]></description><pubDate>Mon, 09 Jun 2025 09:31:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44222722</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=44222722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44222722</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Mistral ships Le Chat – enterprise AI assistant that can run on prem"]]></title><description><![CDATA[
<p>Mistral models, though, are not interesting as models. Context handling is weak, the language is dry, coding is mediocre; not sure why anyone would choose them over Chinese (Qwen, GLM, Deepseek) or American models (Gemma, Command A, Llama).</p>
]]></description><pubDate>Wed, 07 May 2025 18:28:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=43919084</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=43919084</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43919084</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Lossless LLM compression for efficient GPU inference via dynamic-length float"]]></title><description><![CDATA[
<p>Interesting, but not exactly practical for a local LLM user, as 4-bit is how LLMs are run locally.</p>
]]></description><pubDate>Fri, 25 Apr 2025 18:57:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=43797351</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=43797351</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43797351</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Grok3 Launch [video]"]]></title><description><![CDATA[
<p>Well, because you explicitly asked it to demonstrate the physics, it came out way too detailed; but the point is that it adds details to scenes on its own and makes them more realistic, not that dry Llama 3.3 style.</p>
]]></description><pubDate>Tue, 18 Feb 2025 11:16:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=43088290</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=43088290</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43088290</guid></item><item><title><![CDATA[New comment by iamnotagenius in "Grok3 Launch [video]"]]></title><description><![CDATA[
<p>Here is the sentence: (She screamed, which echoed off the tile walls. “This is my life now,” she said to her reflection, which looked back at her with a mix of disgust and pity.) Looks good to me. Try it on Lmarena.ai.</p>
]]></description><pubDate>Tue, 18 Feb 2025 11:07:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=43088252</link><dc:creator>iamnotagenius</dc:creator><comments>https://news.ycombinator.com/item?id=43088252</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43088252</guid></item></channel></rss>