<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: throwaway2027</title><link>https://news.ycombinator.com/user?id=throwaway2027</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 22 Apr 2026 18:17:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=throwaway2027" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by throwaway2027 in "ChatGPT Images 2.0"]]></title><description><![CDATA[
<p>I know people like to dunk on ChatGPT and Gemini and say Claude is (or used to be) better, but with those subscriptions you can still fall back to weaker models when you're out of usage AND use Nano Banana and ChatGPT image generation, which have separate limits. I think that makes it a more complete package as a whole for some people (non-programmers). I do like having the option and am curious about the improvements they've made to ChatGPT image generation: in the past it had that yellow piss filter, and 1.5 sort of fixed it but made things really generic, with Nano Banana beating it (although Gemini also had a too-aggressively-tuned racial bias, which they fixed). The images ChatGPT generates now do seem to have gotten better.</p>
]]></description><pubDate>Tue, 21 Apr 2026 19:11:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47853136</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47853136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47853136</guid></item><item><title><![CDATA[Opus 4.7 uses more thinking tokens, so we increased rate limits]]></title><description><![CDATA[
<p>Article URL: <a href="https://twitter.com/bcherny/status/2044839936235553167">https://twitter.com/bcherny/status/2044839936235553167</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47798409">https://news.ycombinator.com/item?id=47798409</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 16 Apr 2026 19:36:52 +0000</pubDate><link>https://twitter.com/bcherny/status/2044839936235553167</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47798409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47798409</guid></item><item><title><![CDATA[Ternary Bonsai: Top Intelligence at 1.58 Bits]]></title><description><![CDATA[
<p>Article URL: <a href="https://prismml.com/news/ternary-bonsai">https://prismml.com/news/ternary-bonsai</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47797455">https://news.ycombinator.com/item?id=47797455</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 16 Apr 2026 18:24:53 +0000</pubDate><link>https://prismml.com/news/ternary-bonsai</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47797455</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47797455</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>The same people who hyped up Claude will also hype up better alternatives or speak out against it; it seems more like you're being disingenuous here.</p>
]]></description><pubDate>Thu, 16 Apr 2026 16:24:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47795756</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47795756</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47795756</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>Gemini and Codex already scored higher on benchmarks than Opus 4.6, and they recently added a $100 tier with 2x limits. That's their answer, and it seems people have caught on.</p>
]]></description><pubDate>Thu, 16 Apr 2026 16:20:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47795686</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47795686</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47795686</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>They're even using Mythos, with their own benchmarks, as a comparison, even though it isn't available for most people to use. What a joke.</p>
]]></description><pubDate>Thu, 16 Apr 2026 16:17:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47795638</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47795638</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47795638</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Claude Opus 4.7"]]></title><description><![CDATA[
<p>You're better off subscribing to Codex for April and May of 2026.</p>
]]></description><pubDate>Thu, 16 Apr 2026 16:11:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47795520</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47795520</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47795520</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Too much discussion of the XOR swap trick"]]></title><description><![CDATA[
<p><a href="https://en.wikipedia.org/wiki/XOR_linked_list" rel="nofollow">https://en.wikipedia.org/wiki/XOR_linked_list</a></p>
]]></description><pubDate>Thu, 16 Apr 2026 10:30:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47791102</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47791102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47791102</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Daily Claude outage is upon us. Waiting for Claude Status to update"]]></title><description><![CDATA[
<p><a href="https://github.com/ggml-org/llama.cpp" rel="nofollow">https://github.com/ggml-org/llama.cpp</a></p>
]]></description><pubDate>Wed, 15 Apr 2026 15:05:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47780137</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47780137</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47780137</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Nothing Ever Happens: Polymarket bot that always buys No on non-sports markets"]]></title><description><![CDATA[
<p>Already priced in.</p>
]]></description><pubDate>Mon, 13 Apr 2026 16:07:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47754146</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47754146</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47754146</guid></item><item><title><![CDATA[New comment by throwaway2027 in "SFPD investigates apparent shooting near OpenAI CEO Sam Altman's home"]]></title><description><![CDATA[
<p>Thanks!</p>
]]></description><pubDate>Mon, 13 Apr 2026 03:27:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47747215</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47747215</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47747215</guid></item><item><title><![CDATA[SFPD investigates apparent shooting near OpenAI CEO Sam Altman's home]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.sfchronicle.com/bayarea/article/sam-altman-openai-gunfire-22202648.php">https://www.sfchronicle.com/bayarea/article/sam-altman-openai-gunfire-22202648.php</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47747186">https://news.ycombinator.com/item?id=47747186</a></p>
<p>Points: 2</p>
<p># Comments: 2</p>
]]></description><pubDate>Mon, 13 Apr 2026 03:22:06 +0000</pubDate><link>https://www.sfchronicle.com/bayarea/article/sam-altman-openai-gunfire-22202648.php</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47747186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47747186</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Pro Max 5x quota exhausted in 1.5 hours despite moderate usage"]]></title><description><![CDATA[
<p>I don't want a nudge. I want a clear RED WARNING with "You've gone away from your computer a bit too long and chatted too much at the coffee machine. You're better off starting a new context!"</p>
]]></description><pubDate>Sun, 12 Apr 2026 15:12:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47740672</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47740672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47740672</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Pro Max 5x quota exhausted in 1.5 hours despite moderate usage"]]></title><description><![CDATA[
<p>Some claim that some of the recent smaller local models are as good as last year's Sonnet 4.5, and that the bigger high-end models can be almost as good as Claude, Gemini and Codex today, but others say they're benchmaxed and not representative.<p>To try things out you can use llama.cpp with Vulkan or even CPU and a small model like Gemma 4 26B-A4B, Gemma 4 31B, Qwen 3.5 35B-A3B or Qwen 3.5 27B. Some of the smaller quants fit within 16GB of GPU memory. The default people usually go with now is Q4_K_XL, a 4-bit quant with a decent balance of quality and size.<p><a href="https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF" rel="nofollow">https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF</a><p><a href="https://huggingface.co/unsloth/gemma-4-31B-it-GGUF" rel="nofollow">https://huggingface.co/unsloth/gemma-4-31B-it-GGUF</a><p><a href="https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF" rel="nofollow">https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF</a><p><a href="https://huggingface.co/unsloth/Qwen3.5-27B-GGUF" rel="nofollow">https://huggingface.co/unsloth/Qwen3.5-27B-GGUF</a><p>Get a second-hand 3090/4090 or buy a new Intel Arc Pro B70. Use MoE models and offload to RAM for the best bang for your buck. For speed, try to find a model that fits entirely within VRAM. If you want to use multiple GPUs you might want to switch to vLLM or something else.<p>You can try any of the following models:<p>High-end: GLM 5.1, MiniMax 2.7<p>Medium: Gemma 4, Qwen 3.5<p><a href="https://unsloth.ai/docs/models/minimax-m27">https://unsloth.ai/docs/models/minimax-m27</a><p><a href="https://unsloth.ai/docs/models/glm-5.1">https://unsloth.ai/docs/models/glm-5.1</a><p><a href="https://unsloth.ai/docs/models/gemma-4">https://unsloth.ai/docs/models/gemma-4</a><p><a href="https://unsloth.ai/docs/models/qwen3.5">https://unsloth.ai/docs/models/qwen3.5</a><p><a href="https://github.com/ggml-org/llama.cpp" rel="nofollow">https://github.com/ggml-org/llama.cpp</a></p>
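<p>As a concrete starting point, a minimal llama.cpp session might look like the sketch below. The repo name comes from the links above, but the exact .gguf file name is a guess on my part (check the repo's file listing), and you'll want to tune the flags for your hardware:</p>

```shell
# Build llama.cpp with Vulkan support (swap in -DGGML_CUDA=ON for NVIDIA)
git clone https://github.com/ggml-org/llama.cpp
cmake -B build llama.cpp -DGGML_VULKAN=ON
cmake --build build --config Release -j

# Fetch a 4-bit quant from Hugging Face and start an OpenAI-compatible server.
# -ngl 99 offloads all layers to the GPU; lower it (or use CPU offload for the
# MoE experts) if the model doesn't fit in VRAM. -c sets the context size.
./build/bin/llama-server \
  --hf-repo unsloth/Qwen3.5-35B-A3B-GGUF \
  --hf-file Qwen3.5-35B-A3B-Q4_K_XL.gguf \
  -ngl 99 -c 8192 --port 8080
```

<p>Once the server is up you can point any OpenAI-compatible client (or the built-in web UI at localhost:8080) at it.</p>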
]]></description><pubDate>Sun, 12 Apr 2026 14:47:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47740368</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47740368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47740368</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Pro Max 5x quota exhausted in 1.5 hours despite moderate usage"]]></title><description><![CDATA[
<p>They rolled out 1M context and then they start doing this shit? I know Pro doesn't have access to the 1M context, but what a joke.</p>
]]></description><pubDate>Sun, 12 Apr 2026 14:24:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47740060</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47740060</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47740060</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Pro Max 5x quota exhausted in 1.5 hours despite moderate usage"]]></title><description><![CDATA[
<p>Mistral isn't that great. Deepseek was good when they first had thinking. But most people just try something once, and if it doesn't work on that model, the model is "bad". Claude, Codex and Gemini really are that much better now, but if they quantize or cut limits they destabilize, and then you're right: you might as well just use something worse but reliable.</p>
]]></description><pubDate>Sun, 12 Apr 2026 14:09:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47739863</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47739863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47739863</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Anthropic downgraded cache TTL on March 6th"]]></title><description><![CDATA[
<p>Claude is worse: they don't tell you when your experience has degraded, and they don't even let you fall back to worse models when you run out of usage.</p>
]]></description><pubDate>Sun, 12 Apr 2026 13:42:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47739543</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47739543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47739543</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Anthropic downgraded cache TTL on March 6th"]]></title><description><![CDATA[
<p>It's absolutely ridiculous how stupid Claude is now. I noticed it sometimes last year too, but now it feels like I'm just getting last year's pre-December model.</p>
]]></description><pubDate>Sun, 12 Apr 2026 10:09:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47737940</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47737940</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47737940</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Anthropic downgraded cache TTL on March 6th"]]></title><description><![CDATA[
<p>I also noticed this: just resuming something eats up your entire session. The past two weeks have also felt like a substantial downgrade and made me regret renewing my subscription. It sucks; I wish I'd kept my Codex subscription and renewed that instead.</p>
]]></description><pubDate>Sun, 12 Apr 2026 09:35:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47737673</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47737673</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47737673</guid></item><item><title><![CDATA[New comment by throwaway2027 in "Big-Endian Testing with QEMU"]]></title><description><![CDATA[
<p>Is there any benefit in edge cases to using big-endian these days?</p>
]]></description><pubDate>Fri, 03 Apr 2026 15:08:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47627531</link><dc:creator>throwaway2027</dc:creator><comments>https://news.ycombinator.com/item?id=47627531</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47627531</guid></item></channel></rss>