<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: FieryTransition</title><link>https://news.ycombinator.com/user?id=FieryTransition</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 00:29:43 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=FieryTransition" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by FieryTransition in "Don't post generated/AI-edited comments. HN is for conversation between humans."]]></title><description><![CDATA[
<p>As AI advances and becomes better, the only real solution is to have closed-off communities where you get vetted to join. That is the sad reality.</p>
]]></description><pubDate>Wed, 11 Mar 2026 22:19:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47342990</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=47342990</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47342990</guid></item><item><title><![CDATA[New comment by FieryTransition in "The path to ubiquitous AI (17k tokens/sec)"]]></title><description><![CDATA[
<p>In my unscientific experience, yes, but "better at a certain rate" is hard to really quantify, unless you just pull some random benchmark numbers.</p>
]]></description><pubDate>Fri, 20 Feb 2026 17:05:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47090659</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=47090659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47090659</guid></item><item><title><![CDATA[New comment by FieryTransition in "The path to ubiquitous AI (17k tokens/sec)"]]></title><description><![CDATA[
<p>From my own experience, models are at the tipping point of being useful for software prototypes, and those are very large frontier models, not feasible to get down onto wafers unless someone does something clever.<p>I really don't like the hallucination rate of most models; it is improving, but a fix is still far in the future.<p>What I could see, though, is the whole unit being power-efficient enough to run on a robotics platform for human-computer interaction.<p>It makes sense that they would try to repurpose their tech as much as they can, since making changes is fraught with long lead times and risk.<p>But if we look long term and assume they get it to work, they just need to stay afloat until better, smaller models can be made with their technology, so it becomes a waiting game for investors and a risk assessment.</p>
]]></description><pubDate>Fri, 20 Feb 2026 17:03:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47090621</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=47090621</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47090621</guid></item><item><title><![CDATA[New comment by FieryTransition in "The path to ubiquitous AI (17k tokens/sec)"]]></title><description><![CDATA[
<p>If it's not reprogrammable, it's just expensive glass.<p>If you etch the bits into silicon, you have to account for the physical area the bits occupy, which is set by the transistor density of whatever modern process they use. This gives you a lower bound on the size of the wafers.<p>That can mean huge wafers for a fixed model that is already old by the time it is finalized.<p>Etching generic functions used in ML, and common fused kernels, would seem much more viable, since those could be reused as building blocks.</p>
]]></description><pubDate>Fri, 20 Feb 2026 11:53:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47086874</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=47086874</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47086874</guid></item><item><title><![CDATA[New comment by FieryTransition in "The path to ubiquitous AI (17k tokens/sec)"]]></title><description><![CDATA[
<p>If you etch the bits into silicon, you have to account for the physical area the bits occupy, which is set by the transistor density of whatever modern process they use. This gives you a lower bound on the size of the wafers.</p>
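The lower bound described above is simple arithmetic, and can be sketched back-of-envelope in Python. Every number here (parameter count, bits per parameter, transistors per bit, process density) is an illustrative assumption, not data from any vendor or paper.

```python
# Back-of-envelope lower bound on die area for a model "etched" into silicon.
# All numbers below are illustrative assumptions, not vendor data.

def min_area_mm2(n_params, bits_per_param, transistors_per_bit, density_mt_per_mm2):
    """Lower bound on area: every stored bit needs physical transistors.

    density_mt_per_mm2 is given in millions of transistors per mm^2.
    """
    total_bits = n_params * bits_per_param
    total_transistors = total_bits * transistors_per_bit
    return total_transistors / (density_mt_per_mm2 * 1e6)

# e.g. a hypothetical 70B-parameter model at 8 bits/param, ~1 transistor per
# ROM bit, on a process with ~150 million transistors per mm^2:
area = min_area_mm2(70e9, 8, 1, 150)
print(f"{area:,.0f} mm^2")  # thousands of mm^2, i.e. many reticle-sized dies
```

Even with these generous assumptions, the area comes out far beyond a single conventional die, which is the point of the wafer-scale argument.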
]]></description><pubDate>Fri, 20 Feb 2026 11:49:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47086832</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=47086832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47086832</guid></item><item><title><![CDATA[New comment by FieryTransition in "AI doesn’t reduce work, it intensifies it"]]></title><description><![CDATA[
<p>Thanks, I learned something, but the original point stands: five people is still not a lot, and well within the scale where the team can manage things itself, without dedicated management and with first-hand information flow.</p>
]]></description><pubDate>Wed, 18 Feb 2026 15:17:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47061897</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=47061897</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47061897</guid></item><item><title><![CDATA[New comment by FieryTransition in "AI doesn’t reduce work, it intensifies it"]]></title><description><![CDATA[
<p>And Unix was mainly made by two people. It's astounding, as I get older, how even tech managers don't know "The Mythical Man-Month" or how software production generally scales.</p>
]]></description><pubDate>Tue, 10 Feb 2026 13:06:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46959213</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=46959213</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46959213</guid></item><item><title><![CDATA[New comment by FieryTransition in "Sustainable memristors from shiitake mycelium for high-frequency bioelectronics"]]></title><description><![CDATA[
<p>Imagine having a swarm of mushrooms everywhere to run computation on, if mushrooms could be programmed to expand and self-arrange.<p>Ah, it would be like a knife's edge, but exciting. You could have a literal bug in the code.</p>
]]></description><pubDate>Fri, 31 Oct 2025 18:59:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45775460</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=45775460</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45775460</guid></item><item><title><![CDATA[New comment by FieryTransition in "Flix – A powerful effect-oriented programming language"]]></title><description><![CDATA[
<p>Rust is a good candidate, but it lacks some crucial aspects when it comes to what I would consider 'nice to haves' from a modern language in this territory.<p>While Rust has traits, borrowing, etc., it is missing a lot with regard to types and optimization. Things like:<p>- GADTs, or something stronger like dependent types, or a similar type system that would allow one to encode natural relationships, recursive structures, invariants, etc.<p>- Tail-call optimization guarantees. Game engines are just huge state machines, and guaranteed TCO would let you pass functions around that call each other via mutual recursion while still being optimized.<p>- Efficient structural sharing of immutable state that is memory-layout- and cache-friendly.<p>- Built-in profiling from the get-go, which the language developers themselves would use and refine, so you could get information about how the program behaves over time and space.</p>
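The tail-call point can be illustrated with a small sketch. Python, like Rust, gives no tail-call guarantee, so a state machine written as mutually recursive functions overflows the stack at depth; a trampoline makes the tail calls explicit instead. The state names here (`menu`, `playing`) are hypothetical.

```python
# Sketch: a game-loop state machine as mutually recursive "states".
# Without guaranteed TCO, deep mutual recursion overflows the stack,
# so each state returns its successor instead of calling it directly.

def menu(ticks):
    if ticks <= 0:
        return None, ticks          # terminal: no next state
    return playing, ticks - 1       # tail "call": hand back the next state

def playing(ticks):
    if ticks <= 0:
        return None, ticks
    return menu, ticks - 1

def run(state, arg):
    """Trampoline: iterate instead of recursing; stack depth stays constant."""
    while state is not None:
        state, arg = state(arg)
    return arg

print(run(menu, 1_000_000))  # prints 0; direct recursion this deep would overflow
```

With language-level TCO guarantees the trampoline would be unnecessary: the states could simply call each other and the compiler would turn the calls into jumps.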
]]></description><pubDate>Sat, 12 Jul 2025 09:07:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44540524</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=44540524</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44540524</guid></item><item><title><![CDATA[New comment by FieryTransition in "Flix – A powerful effect-oriented programming language"]]></title><description><![CDATA[
<p>I'm looking forward to the day when an ML/functional-inspired language can be used for real-time rendering and game engines. How far are we from that?<p>Realistically, one could argue it's not the right choice overall, but still, it's an application that would push the boundaries of what those languages have been perceived to have their greatest weakness in: an application that is mostly about handling mutable state with high performance.</p>
]]></description><pubDate>Fri, 11 Jul 2025 10:31:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44530549</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=44530549</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44530549</guid></item><item><title><![CDATA[New comment by FieryTransition in "AV1@Scale: Film Grain Synthesis, The Awakening"]]></title><description><![CDATA[
<p>I love this concept/principle. One similar example I often bring up when talking about machine learning is comparing how a human would analyse night footage from a camera with how an ML algorithm can pick up things no human would think about, even artifacts from the sensors, which can be used as features. Noise is rarely ever just noise.</p>
]]></description><pubDate>Fri, 04 Jul 2025 10:14:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44463130</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=44463130</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44463130</guid></item><item><title><![CDATA[New comment by FieryTransition in "Claude 4 System Card"]]></title><description><![CDATA[
<p>Turns out tuning LLMs on human preferences leads to sycophantic behavior; they even wrote about it themselves. I guess they wanted to push the model out too fast.</p>
]]></description><pubDate>Sun, 25 May 2025 09:48:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44086718</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=44086718</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44086718</guid></item><item><title><![CDATA[New comment by FieryTransition in "Gemini Diffusion"]]></title><description><![CDATA[
<p>There's a reason why less is called less, and not more.</p>
]]></description><pubDate>Thu, 22 May 2025 08:08:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44059839</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=44059839</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44059839</guid></item><item><title><![CDATA[New comment by FieryTransition in "Show HN: I built a knife steel comparison tool"]]></title><description><![CDATA[
<p>I'm fine with AI slop if it provides value. The value here is questionable, because now I don't know whether the values in the comparison are fact-checked or hallucinations.</p>
]]></description><pubDate>Sat, 17 May 2025 19:36:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44016441</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=44016441</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44016441</guid></item><item><title><![CDATA[New comment by FieryTransition in "Absolute Zero: Reinforced Self-Play Reasoning with Zero Data"]]></title><description><![CDATA[
<p>Agreed, it's a pretty obvious solution once you are immersed in the problem space. I think it's much harder to set up an efficient training pipeline that gets every single detail right while remaining efficient.</p>
]]></description><pubDate>Sun, 11 May 2025 11:38:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=43953087</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=43953087</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43953087</guid></item><item><title><![CDATA[New comment by FieryTransition in "AI assisted search-based research works now"]]></title><description><![CDATA[
<p>Plenty of studies show that these models are better at catching and diagnosing conditions than even a board of doctors. Doctors are good at other things, and I hope the future will allow doctors to use these models alongside their practice.<p>The problem is when the AI makes a catastrophic prediction and the layman can't see it.</p>
]]></description><pubDate>Tue, 22 Apr 2025 12:04:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=43761161</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=43761161</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43761161</guid></item><item><title><![CDATA[New comment by FieryTransition in "Trust in Firefox and Mozilla Is Gone – Let's Talk Alternatives"]]></title><description><![CDATA[
<p>See my answer to the sibling comment; it's not meant as an ill-willed comment. Otherwise I would add: if people completely abandoned Firefox due to a lack of safeguards and trust, then yes, that would be even worse than establishing said safeguards.</p>
]]></description><pubDate>Sun, 02 Mar 2025 22:03:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=43235675</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=43235675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43235675</guid></item><item><title><![CDATA[New comment by FieryTransition in "Trust in Firefox and Mozilla Is Gone – Let's Talk Alternatives"]]></title><description><![CDATA[
<p>No, the idea isn't to bleed them dry, but to disincentivize decisions made in direct opposition to what they promised donors, and to make such decisions legally difficult or attach actual consequences to them.<p>It would be a guard rail for people at the top to align themselves with people at the bottom, and with the promises they use when fundraising from donors (of both time and money).<p>I'm torn on the "just don't give them money then" that a sibling commenter suggested. It might work short term, but what about everything people have poured into this over the decades? I think all that work deserves to be safeguarded. It would show that those resources, be it money or time, cannot just be turned against themselves by a passing leadership, and that there is a safeguard against "flushing everything down" as the only choice.<p>Furthermore, I just don't see a promise or company statement as being enough after everything that has happened. There needs to be legal accountability, and safeguards against sinking a multi-generational ship.</p>
]]></description><pubDate>Sun, 02 Mar 2025 21:58:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=43235640</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=43235640</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43235640</guid></item><item><title><![CDATA[New comment by FieryTransition in "Trust in Firefox and Mozilla Is Gone – Let's Talk Alternatives"]]></title><description><![CDATA[
<p>Is there a way to litigate against Mozilla, to make them pay back the money, based on the false premise they gave? And on the grounds that the damages extend well beyond a few individuals?<p>Say, would the threat of actual litigation help hold them accountable in the future?</p>
]]></description><pubDate>Sun, 02 Mar 2025 18:01:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=43233104</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=43233104</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43233104</guid></item><item><title><![CDATA[New comment by FieryTransition in "Understanding Reasoning LLMs"]]></title><description><![CDATA[
<p>Thanks a lot for the detailed reply, it was better than I had hoped for :)<p>So knowledge transfer is something incredibly specific, and much narrower than what I thought. They don't transfer concepts by generalization, but compress knowledge instead. I assume the difference is that generalization is much more fluid, while compression is much more static, like a dictionary where each key has a probability of being chosen and all the relationships are frozen. The only generalization that happens is the generalization expressed by the training method used, since the training method freezes its "model of the world" into the weights, so to speak? So if the training method itself cannot generalize, but only compress, why would the resulting model? Is that understood correctly?<p>Does there exist a computational model which can be used to analyse a training method and put a bound on the expressiveness of the resulting model?<p>It's fascinating that the emergent abilities of models disappear if you measure them differently. I guess the difference is that "emergent abilities" are kind of nonsensical, since they have no explanation of causality (i.e. it "just" happens), and seeing the model get linearly better with training fits into a much saner framework. That is, like you said, when your success metric measures discretely, you also see the model itself as discrete, and it hides the continuous hill climbing you would otherwise see the model exhibit with a non-discrete metric.<p>But the model still gets better over time, so would you expect the model to get progressively worse on a more generalized metric, or does that only relate to the spikes in the graph that they talk about?
I.e., they answer the question of why jumps in performance are not emergent, but they don't answer why the performance keeps increasing, even if linearly, or whether it is detrimental to other, less related tasks?<p>And if you wanted to test "emergence", wouldn't it be more interesting to test the model on tasks much more unrelated to the task at hand? That would be to test generalization, more as we humans see it? So it wouldn't really be emergence, but generalization of concepts?<p>It makes sense that it is more straightforward to refute a claim by contradiction. Would it be good practice for papers to try and refute their own claims by contradiction first? I guess that would save a lot of time.<p>It's interesting about the knowledge leakage, because I was thinking about the concept of world simulations, and using models to learn about scenarios through simulation and consequence. But the act of creating a model to perceive the world taints the model itself with bias, so the difficulty lies in creating a model which can rearrange itself to get rid of incorrect assumptions while disconnecting from its initial inherent bias. I thought about models which can create other models, etc., but then how does the model itself measure success? If everything is changing, then so is the metric, so the model could decide to change what it measures as well. I thought about hard-coding a metric into the model, but what if the metric I choose is bad? Then we are stuck with the same problem of bias. So it seems like there are only two options: it either converges towards total uncontrollability, or it is inherently biased; there doesn't seem to be any in-between?<p>I admit I'm trying to learn things about ML. I just find general intelligence research fascinating (neuroscience as well), but the more I learn, the more I realize I should really go back to the fundamentals and build up.
Because even things which seem to make sense on a surface level really have a lot of meaning behind them, and need a well-built intuition, not at a practical level but at a theoretical level.<p>From the papers I've read which I find interesting, it's like there's always the right combination of creativity in thinking. Sometimes my intuition/curiosity about things proved right, but I lack the deeper understanding, which can lead to false confidence in results.</p>
]]></description><pubDate>Sun, 09 Feb 2025 15:12:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=42991131</link><dc:creator>FieryTransition</dc:creator><comments>https://news.ycombinator.com/item?id=42991131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42991131</guid></item></channel></rss>