<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: quonn</title><link>https://news.ycombinator.com/user?id=quonn</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 09:08:45 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=quonn" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by quonn in "An LLM is a lossy encyclopedia"]]></title><description><![CDATA[
<p>"Prediction" is hardly more than another term for inference. It's the very essence of machine learning. There is nothing new or useful in this concept.</p>
]]></description><pubDate>Tue, 02 Sep 2025 12:27:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=45102212</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=45102212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45102212</guid></item><item><title><![CDATA[New comment by quonn in "SaaS Is Dead"]]></title><description><![CDATA[
<p>There was this BusinessWeek cover about OOP, almost 35 years ago:<p><a href="https://substackcdn.com/image/fetch/$s_!diR3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F58a90c5a-ce41-4055-a3ec-b01aaaefb1fe_2740x3666.jpeg" rel="nofollow">https://substackcdn.com/image/fetch/$s_!diR3!,f_auto,q_auto:...</a></p>
]]></description><pubDate>Mon, 25 Aug 2025 11:01:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45012520</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=45012520</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45012520</guid></item><item><title><![CDATA[New comment by quonn in "A.I. researchers are negotiating $250M pay packages"]]></title><description><![CDATA[
<p>Ronaldo competes in a sport with 250 million players worldwide (mostly playing for leisure), many of whom have practiced daily since childhood, and still comes out on top.<p>Are there 250 million AI specialists, and do the ones hired by Meta still come out on top?</p>
]]></description><pubDate>Sat, 02 Aug 2025 09:34:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44766053</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44766053</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44766053</guid></item><item><title><![CDATA[New comment by quonn in "Treating beef like coal would make a big dent in greenhouse-gas emissions"]]></title><description><![CDATA[
<p>I’m trying to find something resembling a reasoned argument in your comment, but there’s nothing except profanity.<p>I did not point out exceptions, and the chicken example is merely an illustration of one of my points.<p>And who says we are talking about the west? Plenty of comments in this thread are talking about pandemics, something that is not known to originate from western agriculture.<p>You know what’s a caveman take? Thinking that there is any chance to convince a meaningful number of people to reduce meat consumption globally in the required time window (20-50 years) in a way that has any bearing on climate change (as opposed to the many steps being taken that actually work). That’s a caveman take.<p>But now some facts:<p><a href="https://interactive.carbonbrief.org/what-is-the-climate-impact-of-eating-meat-and-dairy/img/1.png" rel="nofollow">https://interactive.carbonbrief.org/what-is-the-climate-impa...</a><p>As you can see, the type of meat matters a lot. Cheese is doing worse than pork in this example (not sure I even believe this without more evidence yet). Non-meat sources of protein don’t do very well: Tofu is just 2x better than poultry. Compare this to the giant bar for beef.<p>Better chart, apparently same source:<p><a href="https://ichef.bbci.co.uk/news/1536/cpsprodpb/0477/production/_121534110_0d88cbbb-c0ea-4ce7-99bd-f0b154892b33.jpg.webp" rel="nofollow">https://ichef.bbci.co.uk/news/1536/cpsprodpb/0477/production...</a><p>In short, yes, it would be theoretically possible to eliminate about 10% of global emissions if everyone everywhere stopped eating meat and replaced it with a balanced non-meat diet.<p>But such an outcome is not realistic.<p>This is my last comment on HN. It is sad what this corner of the internet has become.</p>
]]></description><pubDate>Thu, 17 Jul 2025 23:09:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44599320</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44599320</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44599320</guid></item><item><title><![CDATA[New comment by quonn in "Treating beef like coal would make a big dent in greenhouse-gas emissions"]]></title><description><![CDATA[
<p>Of course. The whole study is about cities; even the first sentences make this very clear. It has nothing to do with normal gardens, nothing _at all_.</p>
]]></description><pubDate>Thu, 17 Jul 2025 23:04:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44599283</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44599283</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44599283</guid></item><item><title><![CDATA[New comment by quonn in "Treating beef like coal would make a big dent in greenhouse-gas emissions"]]></title><description><![CDATA[
<p>I said backyard, not some urban garden.</p>
]]></description><pubDate>Thu, 17 Jul 2025 16:40:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44595209</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44595209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44595209</guid></item><item><title><![CDATA[New comment by quonn in "Treating beef like coal would make a big dent in greenhouse-gas emissions"]]></title><description><![CDATA[
<p>Wrong.<p>- mass unethical treatment (assuming you do not mean the fact that animals are killed) is related to the conditions, which are related to price<p>- health risks can be minimal depending on the amount and type of meat you eat<p>- the CO2 impact again depends on the meat and the conditions. Surely chickens in your backyard can, with some effort, be kept without a CO2 impact.<p>- your very existence has a CO2 impact. By your own logic you have two choices …</p>
]]></description><pubDate>Thu, 17 Jul 2025 08:32:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44591049</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44591049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44591049</guid></item><item><title><![CDATA[New comment by quonn in "Mira Murati’s AI startup Thinking Machines valued at $12B in early-stage funding"]]></title><description><![CDATA[
<p>> Humans are tearing themselves apart and the only thing worth betting on is AI that can [...] end scarcity<p>Scarcity, wow...<p>- There is no scarcity in the rich world by historical standards.<p>- There is extreme poverty in large parts of the world; no amount of human intelligence has fixed this, and therefore no amount of AI will. It is primarily not a question of intelligence.<p>- On top of that, "ending scarcity" is impossible due to the hedonic treadmill and the way the human mind works, as well as the fact that, with or without AI, there will still be disease, aging and death.</p>
]]></description><pubDate>Wed, 16 Jul 2025 09:28:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44580348</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44580348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44580348</guid></item><item><title><![CDATA[New comment by quonn in "A non-anthropomorphized view of LLMs"]]></title><description><![CDATA[
<p>I think I already answered those above. I draw the line between 3 and 4, possibly between 4 and 5. I don't know for sure. But there are good reasons to hold this belief.<p>> the Buddhists call this nirvana, and it's quite difficult to actually achieve.<p>Not really. Zen Buddhists call what I described kensho, and it's not very hard to achieve. I specifically said a few seconds. Probably anyone who has wholeheartedly meditated for some time has experienced this.<p>Nirvana, on the other hand, is just the other side of practice-and-enlightenment as a drawn-out process. You may call it hard to achieve; others may call it the dharma gate of ease and joy.</p>
]]></description><pubDate>Thu, 10 Jul 2025 22:02:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44526140</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44526140</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44526140</guid></item><item><title><![CDATA[New comment by quonn in "What is Realtalk’s relationship to AI? (2024)"]]></title><description><![CDATA[
<p>> you could fix your car 40 years ago, but you can’t now, because of scaled corporate processes.<p>You can fix your car just fine - just not the electronics. And those were to a large degree added for safety reasons. It is their complexity that makes them difficult or impossible to fix.</p>
]]></description><pubDate>Thu, 10 Jul 2025 21:55:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44526060</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44526060</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44526060</guid></item><item><title><![CDATA[New comment by quonn in "A non-anthropomorphized view of LLMs"]]></title><description><![CDATA[
<p>But nobody would dispute my basic definition (it is the subjective feeling or perception of being in the world).<p>There are unsettled questions but that definition will hold regardless.</p>
]]></description><pubDate>Mon, 07 Jul 2025 19:04:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=44493614</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44493614</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44493614</guid></item><item><title><![CDATA[New comment by quonn in "LLMs should not replace therapists"]]></title><description><![CDATA[
<p>Start here:<p><a href="https://udlbook.github.io/udlbook/" rel="nofollow">https://udlbook.github.io/udlbook/</a></p>
]]></description><pubDate>Mon, 07 Jul 2025 09:45:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44488488</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44488488</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44488488</guid></item><item><title><![CDATA[New comment by quonn in "A non-anthropomorphized view of LLMs"]]></title><description><![CDATA[
<p>> LLMs are not conscious because unlike human brains they don't learn or adapt (yet).<p>That's neither a necessary nor a sufficient condition.<p>In order to be conscious, learning may not be needed, but a perception of the passing of time may be, which may require some short-term memory. People with severe dementia often can't even remember the start of a sentence they are reading; they can't learn, but they are certainly conscious because they have just enough short-term memory.<p>And learning is not sufficient either. Consciousness is about being a subject, about having a subjective experience of "being there", and learning by itself does not create this experience. There is plenty of software that can do some form of real-time learning, but it doesn't have a subjective experience.</p>
]]></description><pubDate>Mon, 07 Jul 2025 09:40:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44488461</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44488461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44488461</guid></item><item><title><![CDATA[New comment by quonn in "A non-anthropomorphized view of LLMs"]]></title><description><![CDATA[
<p>> To claim that LLMs do not experience consciousness requires a model of how consciousness works.<p>Nope. What can be asserted without evidence can also be dismissed without evidence. Hitchens's razor.<p>You know you have consciousness (by the very definition that you can observe it in yourself) and that's evidence. Because other humans are genetically and in every other way identical, you can infer it for them as well. Because mammals are very similar, many people (but not everyone) infer it for them as well. There is zero evidence for LLMs, and their _very_ construction suggests that they are like a calculator or like Excel or like any other piece of software, no matter how smart they may be or how many tasks they can do in the future.<p>Additionally, I am really surprised by how many people here confuse consciousness with intelligence. Have you never paused for a second in your life to "just be"? Done any meditation? Or even just existed for at least a few seconds without a train of thought? It is very obvious that language and consciousness are completely unrelated; there is no need for language, and I doubt there is even a need for intelligence, to be conscious.<p>Consider this:<p>In the end an LLM could be executed (slowly) on a CPU that accepts very basic _discrete_ instructions, such as ADD and MOV. We know this for a fact. Those instructions can be executed arbitrarily slowly. There is no reason whatsoever to suppose that it should feel like anything to be the CPU, to say nothing of how it would subjectively feel to be a MOV instruction. It's ridiculous. It's unscientific. It's like believing that there's a spirit in the tree you see outside, just because - why not? - why wouldn't there be a spirit in the tree?</p>
]]></description><pubDate>Mon, 07 Jul 2025 09:01:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44488208</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44488208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44488208</guid></item><item><title><![CDATA[New comment by quonn in "A non-anthropomorphized view of LLMs"]]></title><description><![CDATA[
<p>> A brain is also "just proteins and currents,"<p>This is actually not comparable, because the brain has a much more complex structure that is _not_ learned, even at that level. The proteins and their structure are not a result of training. The fixed part of LLMs is rather trivial and is, in fact, not much more than MatMul, which is very easy to understand - and we do. The fixed part of the brain, including the structure of all the proteins, is enormously complex and very difficult to understand - and we don't.</p>
]]></description><pubDate>Mon, 07 Jul 2025 08:57:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44488183</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44488183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44488183</guid></item><item><title><![CDATA[New comment by quonn in "I extracted the safety filters from Apple Intelligence models"]]></title><description><![CDATA[
<p>I don’t think so. My impression with LLMs is that they correct typos well. I would imagine this happens in the early layers without much impact on the remaining computation.</p>
]]></description><pubDate>Sun, 06 Jul 2025 22:03:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44484499</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44484499</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44484499</guid></item><item><title><![CDATA[New comment by quonn in "The Rise of Whatever"]]></title><description><![CDATA[
<p>Instant transfers have been available for many years. They were not free, but most banks supported them.</p>
]]></description><pubDate>Fri, 04 Jul 2025 08:18:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=44462369</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44462369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44462369</guid></item><item><title><![CDATA[New comment by quonn in "The Rise of Whatever"]]></title><description><![CDATA[
<p>> pretty damn hard sending money in the EU too<p>You literally enter an IBAN and the transfer will appear in the other account the next day. And if you need the money in the target account immediately (within 10 seconds), you can do that too by checking a checkbox for a small fee, and that fee will drop to ZERO across the EU in October 2025.</p>
]]></description><pubDate>Fri, 04 Jul 2025 08:16:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44462355</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44462355</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44462355</guid></item><item><title><![CDATA[New comment by quonn in "Spending Too Much Money on a Coding Agent"]]></title><description><![CDATA[
<p>Well, that could be an additional problem.<p>My point was not that AI will necessarily be cheaper to run than $200, but that there is not much profit to be made. Of course the cost of inference will form a lower bound on the price as well.</p>
]]></description><pubDate>Thu, 03 Jul 2025 18:48:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=44458062</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44458062</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44458062</guid></item><item><title><![CDATA[New comment by quonn in "Spending Too Much Money on a Coding Agent"]]></title><description><![CDATA[
<p>Charging $200/month is economically only possible if there is no true market for LLMs or some sort of monopoly power. Currently there is no evidence that this will be the case. There are already multiple competitors; the barrier to entry is relatively low (compared to, e.g., the car industry or other manufacturing industries); there are no network effects (as for social networks) and no need to get the product 100% right (like compatibility with Photoshop or Office); and the prices for training will drop further. Furthermore, $200 is not free (the way Google is).<p>Can anyone name one single widely-used digital product that does _not_ have to be precisely correct/compatible/identical to The Original and that everyone _does_ pay $200/month for?<p>Therefore, should the prices that users pay get anywhere close to that number, there will naturally be opportunities for competitors to bring prices down to a reasonable level.</p>
]]></description><pubDate>Thu, 03 Jul 2025 16:55:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44457026</link><dc:creator>quonn</dc:creator><comments>https://news.ycombinator.com/item?id=44457026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44457026</guid></item></channel></rss>