<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ImprobableTruth</title><link>https://news.ycombinator.com/user?id=ImprobableTruth</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 01:15:09 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ImprobableTruth" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ImprobableTruth in "Qwen3-Coder-Next"]]></title><description><![CDATA[
<p>If that was the real reason, why wouldn't they just make it so that if you don't correctly use caching, you use up more of your limit?</p>
]]></description><pubDate>Tue, 03 Feb 2026 18:04:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46874620</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=46874620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46874620</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "If you put Apple icons in reverse it looks like someone getting good at design"]]></title><description><![CDATA[
<p>The quill and ink at least communicate that it's about writing. The new one is so abstract that when I first looked at it I had no idea what I was even looking at; it certainly doesn't communicate "this is like Word" to me. Without comparison to the previous icon, how many people do you think would understand that the bottom line is intended to be a stroke drawn by the pen?</p>
]]></description><pubDate>Sun, 18 Jan 2026 02:33:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46664310</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=46664310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46664310</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Show HN: I made a memory game to teach you to play piano by ear"]]></title><description><![CDATA[
<p>The key thing is that you teach multiplication tables in a structured, incremental manner. Yes, it's just rote memorization, but the structure makes it way easier. You don't just dump all tables on the student at once and start quizzing them until they get it.<p>Imo, not being able to select a subset of intervals to train on heavily limits how useful this is.</p>
]]></description><pubDate>Fri, 09 Jan 2026 20:53:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46559175</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=46559175</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46559175</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "GLM-4.7: Advancing the Coding Capability"]]></title><description><![CDATA[
<p>How is the raw Gemini 3 CoT accessed? Isn't it hidden?</p>
]]></description><pubDate>Mon, 22 Dec 2025 22:16:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46359825</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=46359825</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46359825</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Reflections on AI at the End of 2025"]]></title><description><![CDATA[
<p>They're not making money on inference alone because they blow ungodly amounts on R&D. Otherwise it'd be a very profitable business.</p>
]]></description><pubDate>Sat, 20 Dec 2025 14:21:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46336392</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=46336392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46336392</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Jonathan Blow has spent the past decade designing 1,400 puzzles"]]></title><description><![CDATA[
<p>> These games are the starting point, but the bulk of the game is new puzzles combining mechanics from different games together<p>Seems like the puzzles are novel, but the mechanics are not?</p>
]]></description><pubDate>Thu, 18 Dec 2025 10:55:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46311173</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=46311173</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46311173</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "GPT-5.2"]]></title><description><![CDATA[
<p>An almost 50% price increase. Benchmarks look nice, but 50% more nice...?</p>
]]></description><pubDate>Thu, 11 Dec 2025 18:38:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46235192</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=46235192</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46235192</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Open models by OpenAI"]]></title><description><![CDATA[
<p>Unfortunately not; this model is noticeably worse. I imagine Horizon is either GPT-5 nano or mini.</p>
]]></description><pubDate>Tue, 05 Aug 2025 17:43:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44801480</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=44801480</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44801480</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Typed languages are better suited for vibecoding"]]></title><description><![CDATA[
<p>Even near-perfect LLMs would benefit from the compiler optimizations that types allow.<p>However, perfect LLMs would just replace compilers and programming languages above assembly completely.</p>
]]></description><pubDate>Mon, 04 Aug 2025 01:55:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44781435</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=44781435</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44781435</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Verified dynamic programming with Σ-types in Lean"]]></title><description><![CDATA[
<p>This is the fault of sloppy language. In Lean, _proofs_ (equivalent to functions) and _proof objects/certificates_ (values) need to be distinguished. You can't compute proofs, only proof objects. In the above quote, replace "proof" with "certificate" and you'll see that it's a perfectly valid (if trivial - it essentially just applies a lemma) proof.</p>
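<p>The data/certificate split can be seen in a minimal Lean sketch (the name `evenTwo` is illustrative, not from the article):</p>

```lean
-- A subtype value packages data together with a certificate (a proof
-- object) that the predicate holds for that data.
def evenTwo : { n : Nat // n % 2 = 0 } := ⟨2, rfl⟩

-- The data component is computable; the certificate is carried along,
-- not computed. Here `rfl` discharges `2 % 2 = 0` by evaluation.
#eval evenTwo.val  -- 2
```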
]]></description><pubDate>Sat, 21 Jun 2025 03:36:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44334274</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=44334274</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44334274</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Verified dynamic programming with Σ-types in Lean"]]></title><description><![CDATA[
<p>Caveat: Coercions exist in Lean, so subtypes actually can be used like the supertype, similar to other languages. This is done by essentially adding an implicit casting operation when such a usage is encountered.</p>
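<p>For instance (a minimal sketch; the name `pos` is illustrative):</p>

```lean
def pos : { n : Nat // 0 < n } := ⟨3, by decide⟩

-- Lean inserts `.val` automatically where the carrier type is expected:
#eval (pos : Nat) + 1  -- 4
```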
]]></description><pubDate>Sat, 21 Jun 2025 03:28:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=44334238</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=44334238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44334238</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Luxe Game Engine"]]></title><description><![CDATA[
<p>I think the concept of a game DSL is cool, but it just feels so undercooked to me.<p>Like, I'm a huge fan of gradual typing, especially TypeScript's, but gdscript's is just so primitive. Not even to speak of something like intersection or union types, even something basic like an interface mechanism is missing. has_method is an awful substitute - in general way too much relies on strings, making even simple refactoring a headache and breaking autocompletion. Lots of things also just aren't typable, e.g. because generics are missing, pushing one to Variant. These aren't deal breakers, especially for the small-ish projects I've done, but it just feels bad.<p>A 'fully realized' version of gdscript would probably be great, but as is I'm just really not very fond of it, and progress currently isn't exactly happening at a rapid pace (which is of course understandable).<p>Also - and this is definitely a lot more subjective - I find its C++ FFI pretty ugly, even for basic stuff like working with structs. In theory using gdscript as glue and C++ for the more core things would be a great approach (like Unreal with its blueprints), but in practice I just want to avoid it as much as possible.</p>
]]></description><pubDate>Sat, 14 Jun 2025 02:32:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44273822</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=44273822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44273822</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Next.js 15.1 is unusable outside of Vercel"]]></title><description><![CDATA[
<p>What's the issue with the remix -> react-router transition? As far as I can tell it's just a branding thing.</p>
]]></description><pubDate>Thu, 12 Jun 2025 12:24:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=44256938</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=44256938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44256938</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Careless People"]]></title><description><![CDATA[
<p>I think the fact that all (good) LLM datasets are full of licensed/pirated material means we'll never really see a decent open source model under the strict definition. Open weights + open source code is really the best we're going to get, so I'm fine with it coopting the term open source even if it doesn't fully apply.</p>
]]></description><pubDate>Thu, 24 Apr 2025 12:21:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=43781848</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=43781848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43781848</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Nvidia adds native Python support to CUDA"]]></title><description><![CDATA[
<p>No, that's a different scenario. In the one I gave there's explicitly a dependency between requests. If you use gather, the network requests would be executed in parallel. If you have dependencies they're sequential by nature because later ones depend on values of former ones.<p>The 'trick' for CUDA is that you declare all this using buffers as inputs/outputs rather than values and that there's automatic ordering enforcement through CUDA's stream mechanism. Marrying that with the coroutine mechanism just doesn't really make sense.</p>
]]></description><pubDate>Sat, 05 Apr 2025 01:56:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43589823</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=43589823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43589823</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "Nvidia adds native Python support to CUDA"]]></title><description><![CDATA[
<p>The reason is that the usage is completely different from coroutine based async. With GPUs you want to queue _as many async operations as possible_ and only then synchronize. That is, you would have a program like this (pseudocode):<p><pre><code>  b = foo(a)
  c = bar(b)
  d = baz(c)
  synchronize()
</code></pre>
With coroutines/async await, something like this<p><pre><code>  b = await foo(a)
  c = await bar(b)
  d = await baz(c)
</code></pre>
would synchronize after every step, which is much less efficient.</p>
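<p>The queue-then-sync pattern can be sketched with a toy CPU analogy (a worker thread standing in for a CUDA stream; `Stream`, `launch`, and the `buf` dict are made-up illustrations, not real CUDA API):</p>

```python
import queue
import threading

class Stream:
    """Toy stand-in for a CUDA stream: ops are enqueued immediately and
    executed in order by a worker thread; the host only blocks at
    synchronize()."""
    def __init__(self):
        self._q = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            op = self._q.get()
            op()                  # run the queued op
            self._q.task_done()

    def launch(self, op):
        self._q.put(op)           # returns immediately, like a kernel launch

    def synchronize(self):
        self._q.join()            # block until every queued op has finished

buf = {}                          # "device buffers" holding intermediate values
s = Stream()
s.launch(lambda: buf.update(b=1 + 1))         # b = foo(a)
s.launch(lambda: buf.update(c=buf['b'] * 2))  # c = bar(b)
s.launch(lambda: buf.update(d=buf['c'] + 3))  # d = baz(c)
s.synchronize()                   # single host-side wait at the end
```

<p>Because the stream executes ops strictly in submission order, the dependencies between b, c, and d are respected without the host waiting in between - that in-order guarantee is what lets you defer synchronization to a single point.</p>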
]]></description><pubDate>Fri, 04 Apr 2025 18:24:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=43586052</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=43586052</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43586052</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "OpenAI Audio Models"]]></title><description><![CDATA[
<p>On their subscriptions, specifically the Pro subscription, because it's a flat rate for their most expensive model. The API prices are all much more expensive. It's unclear whether they're losing money on the normal subscriptions, but if so, probably not by much. Though it's definitely closer to what you described, subsidizing it to gain 'mindshare' or whatever.</p>
]]></description><pubDate>Fri, 21 Mar 2025 11:27:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43434363</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=43434363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43434363</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "OpenAI Audio Models"]]></title><description><![CDATA[
<p>If you compare with e.g. Deepseek and other hosting providers, you'll find that OpenAI is actually almost certainly charging very high margins (Deepseek has an 80% profit margin and is 10x cheaper than OpenAI).<p>The training/R&D might make OpenAI burn VC cash, but this isn't comparable to companies like WeWork, whose products actively burn cash.</p>
]]></description><pubDate>Fri, 21 Mar 2025 10:12:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=43433842</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=43433842</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43433842</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "The PhD Metagame: Don't try to reform science – not yet"]]></title><description><![CDATA[
<p>Do you take issue with the 'purely empirical' approach (just trying out variants and seeing which sticks) or only with its insufficient documentation?<p>I don't know how you'd improve on the former. For a lot of it there simply isn't any sound theoretical foundation, so you just end up with flimsy post-hoc rationalizations.<p>While I agree that it's unfortunate that people often just present magic numbers without explaining where they come from, in my experience documenting how one arrives at them often enough gets punished because it draws more attention to them. That is, reviewers will e.g. complain about preliminary experiments, asking for theoretical analysis or questioning why only certain variants were tried, whereas magic numbers are just kind of accepted.</p>
]]></description><pubDate>Tue, 18 Mar 2025 14:09:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=43399604</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=43399604</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43399604</guid></item><item><title><![CDATA[New comment by ImprobableTruth in "OpenAI says it has evidence DeepSeek used its model to train competitor"]]></title><description><![CDATA[
<p>There are even much cheaper services that host it for only slightly more than deepseek itself [1]. I'm now very certain that deepseek is not offering the API at a loss, so either OpenAI has absurd margins or their model is much more expensive.<p>[1] the cheapest I've found, which also happens to run in the EU, is <a href="https://studio.nebius.ai/" rel="nofollow">https://studio.nebius.ai/</a> at $0.8/million input.<p>Edit: I just saw that openrouter also now has nebius</p>
]]></description><pubDate>Thu, 30 Jan 2025 09:14:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=42876231</link><dc:creator>ImprobableTruth</dc:creator><comments>https://news.ycombinator.com/item?id=42876231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42876231</guid></item></channel></rss>