<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: egorfine</title><link>https://news.ycombinator.com/user?id=egorfine</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 17:36:13 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=egorfine" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by egorfine in "US Bill Mandates On-Device Age Verification"]]></title><description><![CDATA[
<p>> interesting to see how the Linux hacker community reacts<p>We have already seen that: some eagerly implemented this stuff, some rejected it.</p>
]]></description><pubDate>Fri, 17 Apr 2026 15:59:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47807359</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47807359</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47807359</guid></item><item><title><![CDATA[New comment by egorfine in "US Bill Mandates On-Device Age Verification"]]></title><description><![CDATA[
<p>ZK proofs are detrimental to the true cause of these bills: mass surveillance.</p>
]]></description><pubDate>Fri, 17 Apr 2026 15:58:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47807330</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47807330</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47807330</guid></item><item><title><![CDATA[New comment by egorfine in "Qwen3.6-35B-A3B: Agentic coding power, now open to all"]]></title><description><![CDATA[
<p>I was comparing various models on an M5 Pro with 48GB RAM, MLX vs GGUF, and found that MLX models have a higher time to first token (sometimes by an order of magnitude), while tokens/sec and memory usage are the same as GGUF.<p>Gemma 3 27B q4:<p>* MLX: 16.7 t/s, 1220ms ttft<p>* GGUF: 16.4 t/s, 760ms ttft<p>Gemma 4 31B q8:<p>* MLX: 8.3 t/s, 25000ms ttft<p>* GGUF: 8.4 t/s, 1140ms ttft<p>Gemma 4 A4B q8:<p>* MLX: 52 t/s, 1790ms ttft<p>* GGUF: 51 t/s, 380ms ttft<p>All comparisons were done in LM Studio, with the latest versions of everything.</p>
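<p>For anyone who wants to reproduce the ttft numbers outside LM Studio's UI, here is a minimal sketch of how I'd time it: the function works on any streaming iterator, and the commented pairing with LM Studio's OpenAI-compatible local server (endpoint and model name are assumptions on my part) shows how it would plug in.</p>

```python
import time

def measure_ttft(chunks):
    """Return (seconds until the first chunk, total chunk count) for any streaming iterator."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in chunks:
        if ttft is None:
            ttft = time.perf_counter() - start  # first token/chunk just arrived
        count += 1
    return ttft, count

# Hypothetical pairing with LM Studio's local OpenAI-compatible server
# (base_url and model name are assumptions, adjust to your setup):
#
#   from openai import OpenAI
#   client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
#   stream = client.chat.completions.create(
#       model="gemma-3-27b-q4",  # assumed identifier
#       messages=[{"role": "user", "content": "hi"}],
#       stream=True,
#   )
#   ttft, n = measure_ttft(stream)
```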
]]></description><pubDate>Fri, 17 Apr 2026 11:11:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47804628</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47804628</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47804628</guid></item><item><title><![CDATA[New comment by egorfine in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>Good catch, actually.<p>Okay, maybe not exactly caveman dialect, but text compression using an LLM is definitely a viable way to save on tokens in deep research.</p>
]]></description><pubDate>Thu, 16 Apr 2026 17:00:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47796294</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47796294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47796294</guid></item><item><title><![CDATA[New comment by egorfine in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>They are indeed impractical in agentic coding.<p>However, in deep-research-like products you can run an LLM pass to compress web page text into caveman speak, hugely reducing the token count.</p>
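<p>To make the idea concrete, a minimal sketch of what such a compression pass could look like; the prompt wording and the model name here are my own illustrative assumptions, not any product's actual pipeline.</p>

```python
# Hypothetical "caveman speak" compression pass: a cheap model rewrites page
# text telegraphically before the expensive research model ever sees it.
COMPRESS_PROMPT = (
    "Rewrite the following text in maximally terse, telegraphic English. "
    "Drop articles, filler, and pleasantries; keep every fact, name, and number.\n\n"
)

def compression_request(page_text, model="small-local-model"):
    """Build an OpenAI-style chat-completion payload for the compression pass."""
    return {
        "model": model,  # assumed: any cheap/fast model works here
        "messages": [{"role": "user", "content": COMPRESS_PROMPT + page_text}],
        "temperature": 0.0,  # deterministic compression, no creative drift
    }
```

<p>The compressed output then replaces the raw page text in the research model's context, which is where the token savings come from.</p>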
]]></description><pubDate>Thu, 16 Apr 2026 15:14:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47794404</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47794404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47794404</guid></item><item><title><![CDATA[New comment by egorfine in "Cal.com is going closed source"]]></title><description><![CDATA[
<p>No worries, someone else will do that for them. Just like they explained.<p>And given that they will not rewrite the whole codebase in the next few days, the security vulnerabilities are still there to be discovered by anyone willing to pay the AI tax.</p>
]]></description><pubDate>Thu, 16 Apr 2026 11:08:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47791361</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47791361</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47791361</guid></item><item><title><![CDATA[New comment by egorfine in "The Gemini app is now on Mac"]]></title><description><![CDATA[
<p>Got it, and I appreciate the feedback!<p>This comment was not meant to sound snarky (while the original one in fact was; guilty as charged).<p>Sometimes it feels like a polite comment will go nowhere, while being blunt may convey a more focused message. Not in this case though, I agree.</p>
]]></description><pubDate>Thu, 16 Apr 2026 11:06:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47791348</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47791348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47791348</guid></item><item><title><![CDATA[New comment by egorfine in "Darkbloom – Private inference on idle Macs"]]></title><description><![CDATA[
<p>I really want this to succeed.</p>
]]></description><pubDate>Thu, 16 Apr 2026 08:31:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47790273</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47790273</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47790273</guid></item><item><title><![CDATA[New comment by egorfine in "The Gemini app is now on Mac"]]></title><description><![CDATA[
<p>Understood.<p>1. Can we agree that this kind of corporate tradition does in fact exist at Google?<p>2. Can we agree that Google has a history of treating projects this way?</p>
]]></description><pubDate>Thu, 16 Apr 2026 08:23:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47790204</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47790204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47790204</guid></item><item><title><![CDATA[New comment by egorfine in "The Gemini app is now on Mac"]]></title><description><![CDATA[
<p>[flagged]</p>
]]></description><pubDate>Wed, 15 Apr 2026 22:33:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47786220</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47786220</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47786220</guid></item><item><title><![CDATA[New comment by egorfine in "Cal.com is going closed source"]]></title><description><![CDATA[
<p>What's preventing cal.com from running the AI researcher over their own codebase, finding their vulnerabilities before anyone else, and patching them all by tomorrow morning?<p>That's right. Nothing.</p>
]]></description><pubDate>Wed, 15 Apr 2026 22:31:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47786203</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47786203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47786203</guid></item><item><title><![CDATA[New comment by egorfine in "I ran Gemma 4 as a local model in Codex CLI"]]></title><description><![CDATA[
<p>Thanks!</p>
]]></description><pubDate>Tue, 14 Apr 2026 17:21:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47768452</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47768452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47768452</guid></item><item><title><![CDATA[New comment by egorfine in "DaVinci Resolve – Photo"]]></title><description><![CDATA[
<p>Dehancer dev here.<p>I have just verified that Dehancer Pro for DaVinci Resolve works perfectly with the Photo mode of the new beta. So if you're on a subscription, you can use both plugins and see which is best for you.<p>I <i>personally</i> didn't like the new Photo mode because it's clearly intended for video editors and not for photographers at all.</p>
]]></description><pubDate>Tue, 14 Apr 2026 10:01:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47763496</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47763496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47763496</guid></item><item><title><![CDATA[New comment by egorfine in "Nothing Ever Happens: Polymarket bot that always buys No on non-sports markets"]]></title><description><![CDATA[
<p>Thanks to crypto, you can deposit and withdraw in complete disregard of local laws.</p>
]]></description><pubDate>Tue, 14 Apr 2026 08:32:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762883</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47762883</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762883</guid></item><item><title><![CDATA[New comment by egorfine in "I ran Gemma 4 as a local model in Codex CLI"]]></title><description><![CDATA[
<p>> That too was broken in mlx-lm (it crashed), but has since been fixed on the main branch<p>Unfortunately I have had zero success running Gemma with the mlx-lm main branch. Can you point me to the right way to do it? I have zero experience with mlx-lm.</p>
]]></description><pubDate>Mon, 13 Apr 2026 18:35:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47756152</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47756152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47756152</guid></item><item><title><![CDATA[New comment by egorfine in "I ran Gemma 4 as a local model in Codex CLI"]]></title><description><![CDATA[
<p>Not really: it's the same model size and it fits entirely in 24GB.</p>
]]></description><pubDate>Mon, 13 Apr 2026 11:48:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750675</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47750675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750675</guid></item><item><title><![CDATA[New comment by egorfine in "I ran Gemma 4 as a local model in Codex CLI"]]></title><description><![CDATA[
<p>> As you have so much RAM I would suggest running Q8_0 directly<p>On the 48GB Mac, absolutely. The 24GB one cannot run Q8, hence the comparison.<p>> And just to be sure: you are running the MLX version, right?<p>Nah, not yet. I have only tested in LM Studio, and they don't have recommended MLX versions yet.<p>> but has since been fixed on the main branch<p>That's good to know; I will play around with it.</p>
]]></description><pubDate>Mon, 13 Apr 2026 09:17:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47749616</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47749616</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47749616</guid></item><item><title><![CDATA[New comment by egorfine in "Apple Silicon and Virtual Machines: Beating the 2 VM Limit (2023)"]]></title><description><![CDATA[
<p>> Your hardware<p>They see it a bit differently.</p>
]]></description><pubDate>Mon, 13 Apr 2026 08:42:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47749414</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47749414</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47749414</guid></item><item><title><![CDATA[New comment by egorfine in "Tell HN: OpenAI silently removed Study Mode from ChatGPT"]]></title><description><![CDATA[
<p>Yeah, I miss "Robot". It helps to add something along the lines of 
"Tell it like it is; don't sugar-coat responses. Get right to the point. Be concise." to custom instructions.</p>
]]></description><pubDate>Mon, 13 Apr 2026 08:38:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47749378</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47749378</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47749378</guid></item><item><title><![CDATA[New comment by egorfine in "I ran Gemma 4 as a local model in Codex CLI"]]></title><description><![CDATA[
<p>Related: I upgraded my M4 Pro 24GB to an M5 Pro 48GB yesterday. The same Gemma 4 MoE model (Q4) runs at about 8x the t/s on the M5 Pro and loads 2x faster from disk into memory.<p>Gonna run some more tests later today.</p>
]]></description><pubDate>Mon, 13 Apr 2026 08:33:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47749342</link><dc:creator>egorfine</dc:creator><comments>https://news.ycombinator.com/item?id=47749342</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47749342</guid></item></channel></rss>