<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mynti</title><link>https://news.ycombinator.com/user?id=mynti</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 16:56:33 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mynti" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mynti in "Interactive map of all 40,500 wind turbines in Germany (real government data)"]]></title><description><![CDATA[
<p>Very cool! How hard would it be to add all the solar installations? They must be in the register as well.</p>
]]></description><pubDate>Sat, 14 Mar 2026 13:38:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47376560</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=47376560</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47376560</guid></item><item><title><![CDATA[New comment by mynti in "Welcome to the Wasteland: A Thousand Gas Towns"]]></title><description><![CDATA[
<p>What prevents a bad actor from posting "easy" problems to the board, solving them, and gaining reputation quickly? Then, as a validator, they could easily approve their own malicious changes to someone's software.</p>
]]></description><pubDate>Wed, 04 Mar 2026 10:19:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47245481</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=47245481</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47245481</guid></item><item><title><![CDATA[New comment by mynti in "Mercury 2: Fast reasoning LLM powered by diffusion"]]></title><description><![CDATA[
<p>I always wondered how these models would reason correctly. I suppose they diffuse fixed blocks of text at every step, and after the first block comes the next, and so on (that is how it looks in the chat interface, anyway). But what happens if, at the end of the first block, the model needs information from the reasoning at the beginning of that block? Autoregressive models can use those tokens to refine the reasoning, but I guess diffusion models can only adjust their path after every block? Is there maybe a way to have dynamic block lengths?</p>
]]></description><pubDate>Wed, 25 Feb 2026 09:18:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47149258</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=47149258</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47149258</guid></item><item><title><![CDATA[New comment by mynti in "Qwen3.5: Towards Native Multimodal Agents"]]></title><description><![CDATA[
<p>Does anyone know what kind of RL environments they are talking about? They mention they used 15k environments. I can think of maybe a couple hundred that make sense, but what fills out such a large number?</p>
]]></description><pubDate>Mon, 16 Feb 2026 11:27:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47033764</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=47033764</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47033764</guid></item><item><title><![CDATA[New comment by mynti in "GPT‑5.3‑Codex‑Spark"]]></title><description><![CDATA[
<p>Going by the rough numbers from the blog post, ~1k tokens a second on Cerebras should put it at about the same size as GLM 4.7, which is also available at 1k tokens a second. And they say it is a smaller model than the normal Codex model.</p>
]]></description><pubDate>Thu, 12 Feb 2026 19:52:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46994119</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46994119</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46994119</guid></item><item><title><![CDATA[New comment by mynti in "Claude is a space to think"]]></title><description><![CDATA[
<p>I think this says a lot about the business approach of Anthropic compared to OpenAI. The sheer number of free messages you get from OpenAI is so large that turning a profit on them seems impossible. Anthropic is growing more slowly, but it seems like they are not running a crazy deficit. They do not need to put ads or porn in their chatbot.</p>
]]></description><pubDate>Wed, 04 Feb 2026 12:20:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46884970</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46884970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46884970</guid></item><item><title><![CDATA[New comment by mynti in "Retiring GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini in ChatGPT"]]></title><description><![CDATA[
<p>I feel like this comes from the rigorous reinforcement learning these models go through now. The token distribution becomes so narrow, so that the models give better answers more often, that it stifles their creativity and ability to break out of the harness. To me, every creative prompt I give them turns into more or less the same mush as output. It is rarely interesting.</p>
]]></description><pubDate>Fri, 30 Jan 2026 07:21:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46821456</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46821456</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46821456</guid></item><item><title><![CDATA[New comment by mynti in "Trinity large: An open 400B sparse MoE model"]]></title><description><![CDATA[
<p>They trained it in 33 days for ~20M (which apparently includes not only the infrastructure but also the salaries over a six-month period). And the model comes close to Qwen and DeepSeek. Pretty impressive.</p>
]]></description><pubDate>Wed, 28 Jan 2026 08:10:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46792419</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46792419</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46792419</guid></item><item><title><![CDATA[New comment by mynti in "Kimi Code CLI"]]></title><description><![CDATA[
<p>They gave it a soul: <a href="https://github.com/MoonshotAI/kimi-cli/blob/main/src/kimi_cli/soul/kimisoul.py" rel="nofollow">https://github.com/MoonshotAI/kimi-cli/blob/main/src/kimi_cl...</a></p>
]]></description><pubDate>Tue, 27 Jan 2026 11:49:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46778758</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46778758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46778758</guid></item><item><title><![CDATA[New comment by mynti in "Trump to impose tariffs on European nations over Greenland"]]></title><description><![CDATA[
<p>You cannot negotiate with a bully. The EU should never have backed down so easily before. I hope someone will soon find some balls and not let the US walk all over everyone.</p>
]]></description><pubDate>Sat, 17 Jan 2026 17:57:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46660221</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46660221</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46660221</guid></item><item><title><![CDATA[Chinese Universities Surge in Global Rankings as U.S. Schools Slip]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.nytimes.com/2026/01/15/us/harvard-global-ranking-chinese-universities-trump-cuts.html">https://www.nytimes.com/2026/01/15/us/harvard-global-ranking-chinese-universities-trump-cuts.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46631140">https://news.ycombinator.com/item?id=46631140</a></p>
<p>Points: 13</p>
<p># Comments: 2</p>
]]></description><pubDate>Thu, 15 Jan 2026 11:44:54 +0000</pubDate><link>https://www.nytimes.com/2026/01/15/us/harvard-global-ranking-chinese-universities-trump-cuts.html</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46631140</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46631140</guid></item><item><title><![CDATA[New comment by mynti in "Tell HN: Viral Hit Made by AI, 10M listens on Spotify last few days"]]></title><description><![CDATA[
<p>This is actually the first song I would not have guessed to be AI. I think the video is "performed" by a real human?</p>
]]></description><pubDate>Tue, 13 Jan 2026 14:02:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46601017</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46601017</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46601017</guid></item><item><title><![CDATA[New comment by mynti in "Building Autonomous Vehicles That Reason with Nvidia Alpamayo"]]></title><description><![CDATA[
<p>For all releases of this kind I ask myself: if it worked well, they would not release it for free.</p>
]]></description><pubDate>Tue, 06 Jan 2026 21:06:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46518732</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46518732</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46518732</guid></item><item><title><![CDATA[New comment by mynti in "China's AI Chip Deficit: Why Huawei Can't Catch Nvidia"]]></title><description><![CDATA[
<p>Is it actually possible that Nvidia chips will have 50 TB/s of bandwidth by 2028? Right now they are at 8 TB/s. To me, the Nvidia forecast looks like a very, very optimistic exponential. Nonetheless, Huawei not matching Nvidia's scale of production seems to be the biggest hurdle.</p>
]]></description><pubDate>Tue, 16 Dec 2025 10:35:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46286956</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46286956</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46286956</guid></item><item><title><![CDATA[New comment by mynti in "The Walt Disney Company and OpenAI Partner on Sora"]]></title><description><![CDATA[
<p>>> Disney and OpenAI affirm a shared commitment to responsible use of AI that protects the safety of users and the rights of creators.<p>Wow so Sora Slop is coming to payed Disney+?</p>
]]></description><pubDate>Thu, 11 Dec 2025 14:22:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46231727</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46231727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46231727</guid></item><item><title><![CDATA[New comment by mynti in "Font of 'wasteful' diversity: State Department orders return to Times New Roman"]]></title><description><![CDATA[
<p>It is really hard to figure out what is satire and what is actual news these days with the orange man...</p>
]]></description><pubDate>Wed, 10 Dec 2025 13:16:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46217326</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46217326</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46217326</guid></item><item><title><![CDATA[New comment by mynti in "Show HN: I replaced Markov Chains with Biomechanics to predict word transitions"]]></title><description><![CDATA[
<p>To me, this only makes sense at the word level, not the sentence level. I can understand that words, especially older ones, have evolved under the energy and comfort constraints of our physiology. But extending this to the sentence level is a rather big step. I would suppose it works for simple, short sentences that had to be efficient in the past. But imagine sentences about computer science, where most words are rather new and were chosen by fairly arbitrary rules.
It would be interesting to see whether this hypothesis holds when applied to longer, more complex sentences and "modern" words.</p>
]]></description><pubDate>Tue, 09 Dec 2025 09:48:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46203134</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46203134</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46203134</guid></item><item><title><![CDATA[New comment by mynti in "Europe's Green Energy Rush Slashed Emissions – and Crippled the Economy"]]></title><description><![CDATA[
<p>That is partly true. High population density means a lot of roof area, and solar is perfect to put on roofs: you need no extra land. It is basically free (apart from the investment in the panels, which pays off quickly nowadays).</p>
]]></description><pubDate>Tue, 02 Dec 2025 08:20:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46118917</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46118917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46118917</guid></item><item><title><![CDATA[New comment by mynti in "Ilya Sutskever: We're moving from the age of scaling to the age of research"]]></title><description><![CDATA[
<p>If we think of every generation as a compression step of some form of information into our DNA, and early humans existed for ~1,000,000 years with a generation every ~20 years on average, then we have only ~50,000 compression steps to today. Of course, we inherit genes from both parents, so there is some mixing with others, but especially in the early days the pool of other humans was small. So that still does not look anywhere close to the order of magnitude of modern machine learning. Sure, early humans already had a lot of information in their DNA, but still.</p>
]]></description><pubDate>Wed, 26 Nov 2025 08:17:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46055286</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46055286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46055286</guid></item><item><title><![CDATA[New comment by mynti in "Estimating AI productivity gains from Claude conversations"]]></title><description><![CDATA[
<p>".. Claude estimates that AI reduces task completion time by 80%. We use Claude to evaluate anonymized Claude.ai transcripts to estimate the productivity impact of AI."<p>What is this? So they take Claude and ask how much do you think you saved on time here? How can you take this seriously. ChatBots are easy to exaggerate, especially about something positive like this.</p>
]]></description><pubDate>Tue, 25 Nov 2025 13:25:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46045570</link><dc:creator>mynti</dc:creator><comments>https://news.ycombinator.com/item?id=46045570</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46045570</guid></item></channel></rss>