<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: voxgen</title><link>https://news.ycombinator.com/user?id=voxgen</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 05 Apr 2026 22:11:41 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=voxgen" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by voxgen in "America tells private firms to “hack back”"]]></title><description><![CDATA[
<p>It works for me with Firefox's Cloudflare DNS over HTTPS.<p>For clarity, the recent issue[0] likely wasn't intermittent. Cloudflare's malware-blocking DNS server (1.1.1.2) now blocks those archive.today sites. This doesn't affect the non-malware-blocking DNS server (1.1.1.1).<p>[0] <a href="https://news.ycombinator.com/item?id=47474255">https://news.ycombinator.com/item?id=47474255</a> "Cloudflare flags archive.today as 'C&C/Botnet'; no longer resolves via 1.1.1.2"</p>
]]></description><pubDate>Tue, 24 Mar 2026 08:08:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47499812</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=47499812</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47499812</guid></item><item><title><![CDATA[New comment by voxgen in "An AI Agent Published a Hit Piece on Me – The Operator Came Forward"]]></title><description><![CDATA[
<p>This could explain the drama: LLMs are trained to learn and emulate correlations in text.<p>I'm sure you already have a caricature in mind of the kinds of online posts (and thus LLM training data) that include miscitations of constitutional amendments.</p>
]]></description><pubDate>Fri, 20 Feb 2026 07:52:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47085000</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=47085000</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47085000</guid></item><item><title><![CDATA[New comment by voxgen in "1-Click RCE to steal your Moltbot data and keys"]]></title><description><![CDATA[
<p>It's not perfect, but it does have a few opt-in security features: running all tools in a Docker container with minimal mounts, requiring approvals for exec commands, specifying tools on an agent-by-agent basis so that the web agent can't see files and the files agent can't see the web, etc.<p>That said, I still don't trust it and have it quarantined in a VPS. It's still surprisingly useful even though it doesn't have access to anything that I value. Tell it to do something and it'll find a way!</p>
]]></description><pubDate>Sun, 01 Feb 2026 22:24:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46850017</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=46850017</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46850017</guid></item><item><title><![CDATA[New comment by voxgen in "1-Click RCE to steal your Moltbot data and keys"]]></title><description><![CDATA[
<p>I work in AI, but I'd have made this anyway: Molty is my language-learning accountability buddy. It crawls the web with a sandboxed subagent to find me interesting stuff to read in French and Japanese. It makes Anki flashcards for me. And it wraps up the day by quizzing me on that day's reading in the evening.<p>All this runs on a cheap VPS, where the worst things it has access to are the LLM and Discord API keys and my AnkiWeb login.</p>
]]></description><pubDate>Sun, 01 Feb 2026 22:13:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46849919</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=46849919</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46849919</guid></item><item><title><![CDATA[New comment by voxgen in "EuroLLM: LLM made in Europe built to support all 24 official EU languages"]]></title><description><![CDATA[
<p>Ratio/quantity is important, but quality is even more so.<p>In recent LLMs, filtered internet text is at the low end of the quality spectrum. The higher end is curated scientific papers, synthetic and rephrased text, RLHF conversations, reasoning CoTs, etc. English/Chinese/Python/JavaScript dominate here.<p>The issue is that when there's a difference in training data quality between languages, LLMs likely associate that difference with the languages themselves unless it's explicitly compensated for.<p>IMO it would be far more impactful to generate and publish high-quality data in minority languages for current model trainers than to train new models that are simply enriched with a higher percentage of low-quality internet scrapings in those languages.</p>
]]></description><pubDate>Tue, 28 Oct 2025 19:45:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=45738052</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=45738052</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45738052</guid></item><item><title><![CDATA[New comment by voxgen in "Ireland is making basic income for artists program permanent"]]></title><description><![CDATA[
<p>UBI requires tax increases, and the average earner's UBI will typically balance out the tax increase, meaning they don't directly profit.<p>UBI isn't about giving everyone free money. It's about giving everyone a safety net, so that they can take bigger economic risks and aren't pushed into crime or bullshit work.<p>The upper half of society will only see the indirect benefits, like having greater employment/investment choices due to more entrepreneurialism.</p>
]]></description><pubDate>Wed, 15 Oct 2025 13:09:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45591889</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=45591889</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45591889</guid></item><item><title><![CDATA[New comment by voxgen in "A New Internet Business Model?"]]></title><description><![CDATA[
<p>That discussion also makes me worry that they may try to use LLMs or LLM-based metrics to measure the size of the gap as a proxy for the value of the content.<p>The landlord of the marketplace should probably not dabble in the appraisal of products, whether for factuality or value.</p>
]]></description><pubDate>Mon, 22 Sep 2025 16:52:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45336150</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=45336150</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45336150</guid></item><item><title><![CDATA[New comment by voxgen in "A New Internet Business Model?"]]></title><description><![CDATA[
<p>> without punishing regular browsing humans.<p>As a content consumer, I'm also hoping to be part of the ecosystem. I already use Patreon a lot as "AdBlock absolution", but it doesn't fix the market dynamics. Major content platforms tend to stagnate or worsen over time, because they'd rather sell impressions to advertisers than a good product to consumers.</p>
]]></description><pubDate>Mon, 22 Sep 2025 16:16:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=45335557</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=45335557</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45335557</guid></item><item><title><![CDATA[New comment by voxgen in "Meta Freezes AI Hiring After Blockbuster Spending Spree"]]></title><description><![CDATA[
<p>What makes you think the secrets are small enough to fit inside people's heads, and aren't like a huge codebase of data scraping and filtering pipelines, or a DB of manual labels?</p>
]]></description><pubDate>Thu, 21 Aug 2025 12:02:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=44971686</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=44971686</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44971686</guid></item><item><title><![CDATA[New comment by voxgen in "Show HN: DoubleMemory – more efficient local-first read-it-later app"]]></title><description><![CDATA[
<p>Please consider also describing the business model on the website, even if it's hidden away in a FAQ. I have so much subscription fatigue now that I just don't try things out if a subscription is an inevitability. I'm happy to pay for good products, just not happy to be forced to pay a fixed rate for continued access even if my usage dwindles.<p>If you are thinking of adding a one-off-donation-style purchase method, consider giving annual reminders to renew it. At least in my case, I'm not unwilling to pay repeatedly if development continues, just unwilling to make an upfront ongoing commitment.</p>
]]></description><pubDate>Sun, 25 May 2025 11:58:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44087191</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=44087191</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44087191</guid></item><item><title><![CDATA[New comment by voxgen in "Rust’s dependencies are starting to worry me"]]></title><description><![CDATA[
<p>I don't think retrofitting existing languages/ecosystems is necessarily a lost cause. Static enforcement requires rewrites, but runtime enforcement gets you most of the benefit at a much lower cost.<p>As long as all library code is compiled/run from source, a compiler/runtime can replace system calls with wrappers that check caller-specific permissions, and it can refuse to compile or insert runtime panics where the language's escape hatches would be used. It can be as safe as the language itself, so long as you're OK with panics when the rules are broken.<p>It'd take some work to document and distribute capability profiles for libraries that don't care to support it, but a similar effort proved possible with TypeScript.</p>
]]></description><pubDate>Fri, 09 May 2025 21:19:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=43940975</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=43940975</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43940975</guid></item><item><title><![CDATA[New comment by voxgen in "All four major web browsers are about to lose 80% of their funding"]]></title><description><![CDATA[
<p>The last major innovation as a product was PWA support starting in 2016.<p>Browsers used to try new ideas like RSS, widgets, shared and social browser sessions. Interfaces to facilitate low-friction integration with the rest of your life, and to multiplex data sources so that it's not a hassle to have many providers for [news, entertainment, social] experiences.<p>Likely no coincidence that this innovation languished once monopolies started pumping money into the ecosystem.</p>
]]></description><pubDate>Thu, 01 May 2025 09:38:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=43855473</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=43855473</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43855473</guid></item><item><title><![CDATA[New comment by voxgen in "The Llama 4 herd"]]></title><description><![CDATA[
<p>> It's interesting that there are no reasoning models yet<p>This may be merely a naming distinction, leaving the name open for a future release based on their recent research, such as Coconut[1]. They did RL post-training, and when fed logic problems it appears to do significant amounts of step-by-step thinking[2]. It seems it just doesn't wrap it in &lt;thinking&gt; tags.<p>[1] <a href="https://arxiv.org/abs/2412.06769" rel="nofollow">https://arxiv.org/abs/2412.06769</a> "Training Large Language Models to Reason in a Continuous Latent Space"
[2] <a href="https://www.youtube.com/watch?v=12lAM-xPvu8" rel="nofollow">https://www.youtube.com/watch?v=12lAM-xPvu8</a> (skip through this - it's recorded in real time)</p>
]]></description><pubDate>Sun, 06 Apr 2025 08:52:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=43599964</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=43599964</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43599964</guid></item><item><title><![CDATA[New comment by voxgen in "The Llama 4 herd"]]></title><description><![CDATA[
<p>> Or is Behemoth just going through post-training that takes longer than post-training the distilled versions?<p>This is likely the main explanation. RL fine-tuning repeatedly switches between inference, to generate and score responses, and training on those responses. In inference mode they can parallelize across responses, but each response is still generated one token at a time. That's likely 5+ minutes per iteration if they're aiming for CoTs of 10k+ tokens like other reasoning models.<p>There's also likely an element of strategy involved. We've already seen OpenAI hold back releases to time them to undermine competitors' releases (see o3-mini's release date & pricing vs R1's). Meta probably wants to keep that option open.</p>
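The generate/score/train alternation described above can be sketched as a toy loop. Everything here (ToyModel, rl_iteration, the reward function) is a stand-in for illustration, not Meta's or anyone's actual pipeline; the point is just where the sequential-decoding bottleneck sits.

```python
# Toy sketch of one RL fine-tuning iteration. All classes and names are
# illustrative stand-ins, not any real training API.
import random

class ToyModel:
    """Stands in for the policy LLM; samples token ids 0-9, where 0 = EOS."""
    eos = 0
    def sample_next(self, prompt, tokens):
        return random.randint(0, 9)

def rl_iteration(model, prompts, reward_fn, max_tokens=16):
    # Inference phase: responses can run in parallel across prompts, but
    # each response is still decoded one token at a time -- this inner
    # sequential loop is what makes every iteration slow for long CoTs.
    responses = []
    for prompt in prompts:
        tokens = []
        for _ in range(max_tokens):  # sequential within one response
            tok = model.sample_next(prompt, tokens)
            if tok == model.eos:
                break
            tokens.append(tok)
        responses.append(tokens)

    # Scoring phase: one reward per response (a verifier, reward model, ...).
    rewards = [reward_fn(p, r) for p, r in zip(prompts, responses)]

    # Training phase would update the policy on these scored samples, then
    # the whole cycle repeats -- hence the long post-training wall clock.
    return responses, rewards
```

At 10k+ tokens per response instead of 16, each pass through that inner loop dominates the iteration time regardless of how wide the batch is.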
]]></description><pubDate>Sun, 06 Apr 2025 07:43:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=43599665</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=43599665</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43599665</guid></item><item><title><![CDATA[New comment by voxgen in "Qwen2.5-Max: Exploring the intelligence of large-scale MoE model"]]></title><description><![CDATA[
<p>My thoughts go out to the poor engineers who got put on call because someone scheduled a product release on the day before the biggest holiday of their year.</p>
]]></description><pubDate>Tue, 28 Jan 2025 21:19:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=42858170</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=42858170</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42858170</guid></item><item><title><![CDATA[New comment by voxgen in "Qwen2.5-Max: Exploring the intelligence of large-scale MoE model"]]></title><description><![CDATA[
<p>It's not even "nearly as good as o1". They only compared to the older 4o.<p>You can safely assume Qwen2.5-Max will score worse than all of the recent reasoning models (o1, DeepSeek-R1, Gemini 2.0 Flash Thinking).<p>It'll probably become a very strong model if/when they apply RL training for reasoning. However, all the successful recipes for this are closed source, so it may take some time. They could do SFT based on another model's reasoning chains in the meantime, though the DeepSeek-R1 technical report noted that it's not as good as RL training.</p>
]]></description><pubDate>Tue, 28 Jan 2025 21:16:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=42858125</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=42858125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42858125</guid></item><item><title><![CDATA[New comment by voxgen in "Alzheimer's study shows ketone bodies help clear misfolded proteins"]]></title><description><![CDATA[
<p>Vegetarian keto is certainly possible, but vegan would be very tough. Only 2 of my 6 regular meals[1] have meat in them, and I'd probably replace those with tofu and mushrooms if I could tolerate them. There's a world of keto & vege analogues to try for noodles, breads, and pizza bases. IMO some of them are nicer than the carby versions.<p>I also struggle with willpower and it took me ~10 big attempts over ~14 years before I managed to stay on it long enough to fix my metabolism. I just wanted to spread the message of hope that every attempt gets easier. Mindset plays a big role - I've seen a few people push themselves really hard, then declare it impossible and never give it another shot. If you know you're playing a long game, take a break if you're really suffering, and don't beat yourself up over failures, it's easier to try again next time you have the energy.<p>[1] I have lots of intolerances - whitelisting was easier than blacklisting. Here's the list: flaxmeal porridge, keto bread + cream/cottage cheese, omelette w/ pizza toppings, egg & cheese salad, caesar salad (w/ chicken), mince+vege+cheese mealprep'd casserole</p>
]]></description><pubDate>Wed, 11 Dec 2024 16:31:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=42389491</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=42389491</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42389491</guid></item><item><title><![CDATA[New comment by voxgen in "Alzheimer's study shows ketone bodies help clear misfolded proteins"]]></title><description><![CDATA[
<p>I'm at 3 years with occasional breaks. At a certain point my weight wouldn't go lower and I started feeling terrible. I think I was producing more ketones than I could use. I'm not sure exactly what fixed it, but now I'm sustaining a low-carb, low-but-nonzero-ketone mode, and still getting 50-75% of the mental/energy/anti-inflammatory advantages.<p>I think it was either changing my diet to focus on veges instead of meats (still 15-30g net carbs/day though), or adding artificial sweetener to maybe fool my body into making insulin? The science says that shouldn't happen, but idk what else it could be.</p>
]]></description><pubDate>Wed, 11 Dec 2024 11:28:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=42386778</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=42386778</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42386778</guid></item><item><title><![CDATA[New comment by voxgen in "Alzheimer's study shows ketone bodies help clear misfolded proteins"]]></title><description><![CDATA[
<p>Don't give up! Induction gets easier every time, and you learn lots of tricks/recipes, like keto-ade to feel better during induction, and making oats/flaxmeal tasty for cheap & quick breakfasts. You don't have to commit to long streaks, or feel bad about sunk cost when you cheat. All that progress accumulates.<p>I've been in and out so often now I can happily switch between keto at home & unrestricted on vacation/occasions. At worst I get 1 day of dopiness starting carbs, and 1 day of mild cravings stopping them, but usually I don't even notice.</p>
]]></description><pubDate>Wed, 11 Dec 2024 10:47:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=42386610</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=42386610</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42386610</guid></item><item><title><![CDATA[New comment by voxgen in "C++ Standards Contributor Expelled for 'The Undefined Behavior Question'"]]></title><description><![CDATA[
<p>> it's not clear if that was the author's actual intention<p>The paper[1] doesn't appear to have any other connections to the book/response/memes. A clear distinction is that the UB paper very directly and prominently states the question, rather than cloaking it in allusion or having a lengthy preface trying to contextualize it.<p>[1] <a href="https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p3403r0.pdf" rel="nofollow">https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p34...</a></p>
]]></description><pubDate>Sun, 24 Nov 2024 09:04:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=42226846</link><dc:creator>voxgen</dc:creator><comments>https://news.ycombinator.com/item?id=42226846</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42226846</guid></item></channel></rss>