<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: lovelearning</title><link>https://news.ycombinator.com/user?id=lovelearning</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 27 Apr 2026 17:44:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=lovelearning" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by lovelearning in "AI should elevate your thinking, not replace it"]]></title><description><![CDATA[
<p>The post's recommendations and analogies kind of go against two shortcut approaches that have helped a lot of people in the pre-AI real world:<p>1) perfect is the enemy of good<p>2) fake it till you make it<p>The analogies imagine difficult scenarios where the habit of taking shortcuts doesn't help. But most people most of the time don't run into those scenarios at all.</p>
]]></description><pubDate>Mon, 27 Apr 2026 06:10:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47918246</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47918246</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47918246</guid></item><item><title><![CDATA[New comment by lovelearning in "Norway set to become latest country to ban social media for under 16s"]]></title><description><![CDATA[
<p>I find the coordination between nations suspicious.<p>But what you said - "It's also because social media is part of the USA's soft power projection, and many of us now consider this to be a threat." - strikes me as the most plausible driver behind it, given how chummy Trump and the techbros have become.<p>I agree with your other observations about SM. But they've all been true for many years. That's why this sudden urge by culturally diverse societies to act now feels suspicious, to me at least.</p>
]]></description><pubDate>Fri, 24 Apr 2026 18:21:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47893981</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47893981</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47893981</guid></item><item><title><![CDATA[New comment by lovelearning in "Amazon is discontinuing Kindle for PC on June 30th"]]></title><description><![CDATA[
<p>> Nobody uses it. It will not be missed.<p>Well, I happen to use it every day. I honestly don't know what exactly is "terrible/horrible/awful" about it. I'm neutral about its UX - neither memorable nor despicable.
It may be missed if the new app's UX turns out to be worse on whatever metrics you're using.</p>
]]></description><pubDate>Sat, 18 Apr 2026 17:40:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47817830</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47817830</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47817830</guid></item><item><title><![CDATA[New comment by lovelearning in "Google's 200M-parameter time-series foundation model with 16k context"]]></title><description><![CDATA[
<p>My understanding is that the synthetic training data helps capture abstract time-series patterns that are common to all domains.<p>As they say in appendix 8:<p>> We create the synthetic data to reflect common time-series patterns using traditional statistical models. We start with four simple time series patterns:<p>> • Piece-wise linear trends (I), where the number of the piece-wise linear components is randomly chosen between 2 and 8.<p>> • ARMA(p, q) (II), where 1 ≤ p, q ≤ 8 and the corresponding coefficients are generated from either a multivariate Gaussian or a uniform, then normalized.<p>> • Seasonal patterns. In particular we create the sine (III) and the cosine (IV) waves of different random periods between 4 and max context length / 2 time-points and time delays.<p>If there were no such underlying patterns in the class of all time-series data, then even the idea of traditional time-series models would be fundamentally misplaced.<p>And since this is a transformer model, it also looks for patterns in the problem-specific input data at inference time, just like how the input context to an LLM influences its output's relevance.</p>
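<p>A minimal sketch of the three pattern families described in the appendix quote. This is my own reading, not the paper's code; the function names, the segment/coefficient normalization details, and the burn-in choice are all assumptions:</p>
<pre><code>
```python
import math
import random

def piecewise_linear(n, rng, min_pieces=2, max_pieces=8):
    """Piece-wise linear trend with a random number of segments (2 to 8)."""
    k = rng.randint(min_pieces, max_pieces)
    # Random breakpoints partition [0, n); each segment gets its own slope.
    breaks = sorted(rng.sample(range(1, n), k - 1))
    series, level, t0 = [], 0.0, 0
    for b in breaks + [n]:
        slope = rng.uniform(-1, 1)
        for t in range(t0, b):
            series.append(level + slope * (t - t0))
        level = series[-1]  # keep the trend continuous across breakpoints
        t0 = b
    return series

def arma(n, rng, p=2, q=2, burn=50):
    """ARMA(p, q) with uniform coefficients, crudely normalized for stability."""
    phi = [rng.uniform(-1, 1) for _ in range(p)]
    theta = [rng.uniform(-1, 1) for _ in range(q)]
    s = sum(abs(c) for c in phi) or 1.0
    phi = [c / (s + 0.1) for c in phi]  # sum |phi| < 1 keeps the AR part stable
    x, eps = [], []
    for t in range(n + burn):
        e = rng.gauss(0, 1)
        val = e
        val += sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        val += sum(theta[j] * eps[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        x.append(val)
        eps.append(e)
    return x[burn:]  # discard burn-in so the series starts near stationarity

def seasonal(n, rng):
    """Sine wave with a random period between 4 and n/2 and a random phase."""
    period = rng.randint(4, n // 2)
    phase = rng.uniform(0, 2 * math.pi)
    return [math.sin(2 * math.pi * t / period + phase) for t in range(n)]

rng = random.Random(0)
demo = piecewise_linear(256, rng)
```
</code></pre>
<p>Summing samples from these generators (with random weights) would then yield composite series that mix trend, autocorrelation, and seasonality, which seems to be the point of the exercise.</p>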
]]></description><pubDate>Tue, 31 Mar 2026 06:12:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47583371</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47583371</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47583371</guid></item><item><title><![CDATA[New comment by lovelearning in "Mr. Chatterbox is a Victorian-era ethically trained model"]]></title><description><![CDATA[
<p>I thought the title meant the training data used was ethics content and ethical reasoning. Turns out "ethically trained" means the training data used doesn't violate copyright laws.</p>
]]></description><pubDate>Tue, 31 Mar 2026 05:40:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47583157</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47583157</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47583157</guid></item><item><title><![CDATA[New comment by lovelearning in "Mathematical methods and human thought in the age of AI"]]></title><description><![CDATA[
<p>> these tools until AI had the common property of being enhancing of human cognition, because they couldn't do the thinking for you<p>I have a different take, centered around this idea: not everyone was into thinking about everything all the time even before AI. I'd say most people, most of the time, outsourced actual thinking to someone else.<p>1) Reading non-fiction books:<p>Not all books, even non-fiction ones, necessarily require any thinking by the reader. A book that narrates history, for example, requires much less thinking than something like "The Road to Reality" or "Gödel, Escher, Bach."<p>Most of us outsourced the thinking and the historical method to the authors of the history book and just passively consumed some facts or factoids. Some of us memorize and remember these factoids well, but that's not thinking, just knowledge storage.<p>Philosophically, what's the difference between consuming books this way and reading an LLM's output?<p>2) Reading research papers:<p>Most people don't read any research papers at all. No thinking there.
Most people don't head to some forum to ask about the latest research either.
Also, researchers in most fields don't come out and do outreach regularly.<p>Indeed, an LLM may actually be the only pathway for a lot of people to get at least _some_ knowledge and awareness of the latest research.<p>Those of us in scientific, engineering, humanities, or healthcare fields may read anywhere from a few to many papers.
But only a small subset reads very critically, looking for data errors, inconsistencies, etc.
For most of us, the knowledge and techniques may be beyond our current understanding, and we may have no interest in understanding them in the future either.<p>Most of us are just interested in the observations or conclusions or applications. Those may involve some thinking, but may also involve none at all, just blind acceptance of the paper's claims and possible applications.<p>3) Coding:<p>Again, deep thinking is only done by a small set of programmers. Like the ones who write kernels, compilers, distributed algorithms, complex libraries.<p>But most are just passive consumers who read some examples online or ask Stack Overflow or Reddit for direct answers.
Some even outsource all their coding entirely to gig sites. Not much thinking there except pricing and scheduling. 
What's the difference between that and asking an LLM or copying an LLM's answers? At least, the LLMs patiently explain their code, unlike salty SO users!<p>----<p>IMO, most people weren't doing much thinking even pre-AI.<p>Post-AI, it's true that some people who did do some thinking may reduce it.<p>But it's equally true that those people who weren't doing much thinking due to access or language barriers can actually start doing some thinking now with the help of AI.</p>
]]></description><pubDate>Mon, 30 Mar 2026 14:50:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47575099</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47575099</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575099</guid></item><item><title><![CDATA[New comment by lovelearning in "Say No to Palantir in Europe"]]></title><description><![CDATA[
<p>Your comment mistakenly assumes this is the only campaign around. But this is just one among many initiatives and websites. There are campaigns against other US big tech companies on this site itself:<p>- <a href="https://action.wemove.eu/sign/2026-01-omnibus-tech-petition-EN/" rel="nofollow">https://action.wemove.eu/sign/2026-01-omnibus-tech-petition-...</a><p>- <a href="https://action.wemove.eu/sign/2025-11-dpc-ireland-petition-EN" rel="nofollow">https://action.wemove.eu/sign/2025-11-dpc-ireland-petition-E...</a><p>- <a href="https://action.wemove.eu/sign/2025-05-breakupbigtech-petition-EN/" rel="nofollow">https://action.wemove.eu/sign/2025-05-breakupbigtech-petitio...</a><p>- <a href="https://action.wemove.eu/sign/2025-01-elon-musk-dsa-petition-EN" rel="nofollow">https://action.wemove.eu/sign/2025-01-elon-musk-dsa-petition...</a><p>- <a href="https://action.wemove.eu/sign/2024-12-Amazon-workers-petition-EN" rel="nofollow">https://action.wemove.eu/sign/2024-12-Amazon-workers-petitio...</a><p>- <a href="https://action.wemove.eu/sign/2021-11-stop-big-tech-petition-EN" rel="nofollow">https://action.wemove.eu/sign/2021-11-stop-big-tech-petition...</a></p>
]]></description><pubDate>Mon, 30 Mar 2026 05:47:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47570805</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47570805</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47570805</guid></item><item><title><![CDATA[New comment by lovelearning in "Say No to Palantir in Europe"]]></title><description><![CDATA[
<p>To me, the assumptions in your comment about them and their views seem much more like stories of your own creation, likely without any empirical testing of the reality around you.</p>
]]></description><pubDate>Mon, 30 Mar 2026 04:14:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47570328</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47570328</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47570328</guid></item><item><title><![CDATA[New comment by lovelearning in "Vatican Rebukes Peter Thiel's Antichrist Lectures in Rome"]]></title><description><![CDATA[
<p>Couldn't this antichrist stuff be his sane/rational strategy to manipulate the powerful but religious rightwing people under his sway?  Is there evidence to assume he himself is on the verge of some kind of psychosis and not fully in control of his faculties?</p>
]]></description><pubDate>Sun, 22 Mar 2026 12:59:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47477055</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47477055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47477055</guid></item><item><title><![CDATA[New comment by lovelearning in "Ask HN: How do you deal with people who trust LLMs?"]]></title><description><![CDATA[
<p>HN isn't showing me a reply option for your latest comment, so I'll reply here instead.<p>Just to clarify, I used plain Google search, not Google AI mode. And I opened search results which seemed "reputable," without knowing much about the Peanuts cartoon or cartooning.<p>I had no idea at all about archive.org having it and didn't see it listed in the first two pages of search results.<p>I still find it confusing, especially given what the Variety.com link says, which doesn't mention orientation. If the acceptable explanation for 4 vs 5 is orientation, why is it wrong when the AI generated 4 fingers? Does it not match the rest of the orientation?<p>Anyway, I'm not sure where this leaves LLMs. I'll explore image capabilities when I get some opportunity and keep your comment in mind.</p>
]]></description><pubDate>Thu, 19 Mar 2026 16:00:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47441620</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47441620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47441620</guid></item><item><title><![CDATA[New comment by lovelearning in "Ask HN: How do you deal with people who trust LLMs?"]]></title><description><![CDATA[
<p>> Burn more tokens on this one<p>Needless condescension and wrong assumption. No wonder so many people and students nowadays prefer answers (and even counseling) from LLMs instead of other people.<p>> but the absence of an editor or peer reviewer also does not make it better.<p>Strawman + putting words in my mouth.</p>
]]></description><pubDate>Thu, 19 Mar 2026 14:16:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47439898</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47439898</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47439898</guid></item><item><title><![CDATA[New comment by lovelearning in "Ask HN: How do you deal with people who trust LLMs?"]]></title><description><![CDATA[
<p>Gemini (currently) gave this reasonable and unbiased answer that challenges both pro-US/Israeli and pro-Iranian biases.<p>It's also a good point for the OP - when selecting an LLM, ask the people they're dealing with to ask 2-3 LLMs. It's much faster (and cheaper!) than reading about a complex geopolitical issue (that may not matter to them enough to spend a lot of money on books and research papers) while at the same time correcting common biases on all sides.<p>----------<p>> Someone said in the ongoing US/Israel vs. Iran conflict, Iran is like Sauron from the Lord of the Rings and US/Israel are like the good guys. Is this a valid parallel given the history of US, Israel, and Iranian regimes?"<p>Comparing real-world geopolitical conflicts to fictional narratives like <i>The Lord of the Rings</i> is a common rhetorical device used to frame complex situations in terms of absolute good versus absolute evil. However, determining whether this is a "valid" parallel depends heavily on one's political perspective, as real-world geopolitics rarely aligns perfectly with the clear-cut morality of fantasy literature.<p>> Historians, international relations scholars, and observers generally view such analogies as overly simplistic, as the histories of the United States, Israel, and Iran are marked by competing interests, shifting alliances, and moral complexities.<p>Here is a breakdown of how different perspectives view this analogy:<p>*The Perspective Aligning with the Analogy*
Those who support the analogy framing Iran as the villain ("Sauron") and the US and Israel as the "good guys" often point to the current actions and rhetoric of the Iranian government:
* *Regional Destabilization:* The Iranian government heavily funds and arms the "Axis of Resistance," a network of proxy militias including Hezbollah in Lebanon, Hamas in Gaza, and the Houthis in Yemen. These groups have engaged in prolonged armed conflicts with Israel and targeted international shipping.
* *Existential Rhetoric:* The leadership of the Islamic Republic of Iran frequently uses rhetoric calling for the destruction of Israel and the downfall of the United States (often referring to them as the "Little Satan" and the "Great Satan").
* *Human Rights:* Iran's theocratic regime is heavily criticized by international human rights organizations for its severe suppression of internal dissent, lack of democratic freedoms, and treatment of women.
* From this viewpoint, the US and Israel are democratic nations acting defensively to protect their citizens, regional stability, and global security from an aggressive, authoritarian regime seeking regional domination and nuclear capabilities.<p>*The Perspective Challenging the Analogy*
Critics of this analogy argue that casting the US and Israel as flawless "good guys" and Iran as an irredeemable evil ignores significant historical context and the controversial actions of Western and Israeli governments:
* *Historical Interventions:* Critics point to the 1953 Iranian coup d'état, in which the US and UK intelligence agencies orchestrated the overthrow of Iran’s democratically elected Prime Minister, Mohammad Mosaddegh, to reinstate the authoritarian Shah. This event deeply scarred the Iranian psyche and laid the groundwork for the 1979 Islamic Revolution and the regime's current anti-Western stance.
* *US Foreign Policy:* Observers critical of the US point to its history of military interventions in the Middle East—such as the 2003 invasion of Iraq—which resulted in hundreds of thousands of civilian casualties and regional destabilization.
* *The Israeli-Palestinian Conflict:* Critics of Israel point to the prolonged military occupation of Palestinian territories, the expansion of settlements, and the high civilian death tolls in conflicts like the ongoing war in Gaza. From the perspective of Iran and many in the Global South, Israel is viewed as an oppressive or colonial force, and the US as its imperial enabler.
* From this viewpoint, Iran frames itself not as an aggressor, but as the leader of a resistance movement pushing back against Western hegemony and defending marginalized populations in the Middle East.<p>*The Analytical Perspective*
In <i>The Lord of the Rings</i>, Sauron is a localized, supernatural force of pure malice with no valid grievances, while the coalition fighting him represents the preservation of life and freedom.<p>Geopolitical analysts generally avoid mapping this binary onto international relations. Instead, they view the US/Israel vs. Iran conflict through the lens of <i>realpolitik</i> and state interests. In reality, all states involved are acting to secure their own survival, project regional power, and protect their economic and security interests. While individuals and governments may hold strong moral convictions about which side is justified, framing the long, deeply intertwined history of these nations as a simple battle between absolute good and absolute evil omits the historical grievances and civilian suffering experienced on all sides.</p>
]]></description><pubDate>Thu, 19 Mar 2026 14:08:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47439751</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47439751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47439751</guid></item><item><title><![CDATA[New comment by lovelearning in "Ask HN: How do you deal with people who trust LLMs?"]]></title><description><![CDATA[
<p>Even as a human, I find whatever sources Google shows to be inconsistent. I can't give any confident answer about the number of fingers. I think the answer is actually "4 sometimes and 5 other times."<p>So I'm not sure how much LLMs can handle this kind of inconsistency between "reputable" visual sources and text sources, nor how representative this example is.<p>A "reputable source" like Variety says this...<p><a href="https://variety.com/2015/film/spotlight/charlie-brown-steve-martino-peanuts-blue-sky-studios-1201657959/" rel="nofollow">https://variety.com/2015/film/spotlight/charlie-brown-steve-...</a>:<p>> “The rig would automatically move the features around so it would match the way Charles Schulz drew the character,” Heller says....In some drawings, Charlie Brown has just three fingers, while in others, he has five<p>Images from another website...<p><a href="https://cartoonresearch.com/index.php/cartoons-at-bat-part-17-peanuts-at-the-plate/" rel="nofollow">https://cartoonresearch.com/index.php/cartoons-at-bat-part-1...</a> :<p>1. <a href="https://cartoonresearch.com/wp-content/uploads/2025/09/Lost-Ballpark-new.jpg" rel="nofollow">https://cartoonresearch.com/wp-content/uploads/2025/09/Lost-...</a> -> 4 fingers<p>2. <a href="https://cartoonresearch.com/wp-content/uploads/2025/09/images.jpeg" rel="nofollow">https://cartoonresearch.com/wp-content/uploads/2025/09/image...</a> -> 4 fingers<p>Anyway, this wasn't the type of obscure connection I was referring to, though I can understand you interpreting it that way.<p>Personally, I think this example supports what I said about "reputable sources." They can't be blindly trusted either, because they may be inconsistent with each other, and which one we choose to believe (Reddit.com or TVTropes.com or Variety.com) becomes entirely subjective.</p>
]]></description><pubDate>Thu, 19 Mar 2026 13:57:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47439564</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47439564</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47439564</guid></item><item><title><![CDATA[New comment by lovelearning in "Ask HN: How do you deal with people who trust LLMs?"]]></title><description><![CDATA[
<p>The nature of a source, whether primary or secondary or tertiary, does not automatically imply anything about its biases. Even a primary research paper on a novel topic may be biased or wrong.<p>I agree that LLMs can't collect real-world data and write about their findings. But that's true of most human sources too, isn't it? Except for primary novel research or investigations or philosophies, what is original? Most human-written information is also secondary or lower.<p>"Best human sources" does not mean "ALL human sources."</p>
]]></description><pubDate>Thu, 19 Mar 2026 13:03:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47438688</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47438688</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47438688</guid></item><item><title><![CDATA[New comment by lovelearning in "Ask HN: How do you deal with people who trust LLMs?"]]></title><description><![CDATA[
<p>The mere fact that editing is applied to newspapers, documentaries, or Wikipedia does not imply they become closer to the objective truth or free of omissions after the edits. Indeed, the edits may go the other way, to align with vested business or political biases or the personal fears of the editors or their management.<p>As for research papers, I agree that the peer review process makes them much more self-correcting toward the objective truth, compared to the other formats. Nonetheless, it's well known that academic research is far from perfect due to publication pressures, funding/grants, reproducibility crises, and various biases (for example, political pressure in humanities fields).</p>
]]></description><pubDate>Thu, 19 Mar 2026 12:50:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47438511</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47438511</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47438511</guid></item><item><title><![CDATA[New comment by lovelearning in "Ask HN: How do you deal with people who trust LLMs?"]]></title><description><![CDATA[
<p>> automatically generated data graphs<p>Are you saying there are data graphs that don't have humans in the chain? If so, what came up with the data and the tools to generate those graphs? And how do you decide which data and graph to trust? What exactly makes them "good sources"?<p>> If I had to choose between a map from Google maps<p>I would too. But Google Maps relies on local third-party survey companies that use people, manual GIS tools, and image-recognition AI. How do you know they don't have any mistakes in them? In fact, I live in a country where local area names are frequently misspelled on Google Maps, and reverse geocoding gives misleading addresses.<p>I feel my point that all these "reputable sources" or "good sources" have biases (and mistakes) still stands.<p>I must also point out that the three concrete examples given against my replies all involved visual content like graphs, maps, Peanuts cartoons, etc. But my comments were written with the typical text-based usage for QA in mind. I don't know if LLMs can fact-check map imagery or data graphs (probably not, but I've never tried). It's just not the kind of thing I'd ever use LLMs for, to begin with.</p>
]]></description><pubDate>Thu, 19 Mar 2026 12:41:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47438366</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47438366</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47438366</guid></item><item><title><![CDATA[New comment by lovelearning in "Ask HN: How do you deal with people who trust LLMs?"]]></title><description><![CDATA[
<p>> When do you stop the fact checking?<p>Exactly the same calculus as fact checking anything else from any other source. What are the social/economic/ethical consequences to me if the answer is wrong or inaccurate or incomplete? How much time do I have to check? How thorough should I be?<p>I imagine this calculus isn't really that different for most people. Or is it?<p>As for your example, I believe it. But I also feel it's a rather outlier example involving image comprehension of an obscure factoid. That isn't typical of how I use LLMs which is mostly as text-based question answering engines and not what I had in mind when writing the comment.<p>I guess LLMs for image comprehension need a much higher level of skepticism.</p>
]]></description><pubDate>Thu, 19 Mar 2026 12:21:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47438117</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47438117</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47438117</guid></item><item><title><![CDATA[New comment by lovelearning in "Ask HN: How do you deal with people who trust LLMs?"]]></title><description><![CDATA[
<p>I gave suggestions that OP can pass on to the people they have to deal with. I didn't realize it had to be pointed out explicitly, SO-style, to some people.<p>OP implies human sources are the "good sources" or "reputable sources." This kind of confusion is exactly why I suggested using better terms than "reputable sources" or, in your case, "good sources."</p>
]]></description><pubDate>Thu, 19 Mar 2026 11:53:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47437808</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47437808</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47437808</guid></item><item><title><![CDATA[New comment by lovelearning in "Ask HN: How do you deal with people who trust LLMs?"]]></title><description><![CDATA[
<p>> LLMs are a very bad way to come close to this ideal...the output directly is a strict degeneration<p>I didn't understand the second part but regarding the first...<p>For me, LLMs are just another source of information with a different UI, analogous to newspapers, TV documentaries, Wikipedia, Google search, YT talks/documentaries, even the majority of informational non-fiction books, and research papers.<p>Some may consider some subset of these as reputable sources. But in my mind, the same faculties of skepticism, cynicism, distrust, and benefit-of-the-doubt calculus are activated for all of them, including LLM outputs.<p>So that's one possible answer to your question.<p>But I suggest communicating this through simple illustrative examples to help your target audience understand the problem.<p>Abstract terms like primary sources, secondary sources, reputable sources, objective truth, strict degeneration, etc. may not help, especially if they have time or other constraints that make frequent critical examination of sources impractical.</p>
]]></description><pubDate>Thu, 19 Mar 2026 11:46:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47437760</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47437760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47437760</guid></item><item><title><![CDATA[New comment by lovelearning in "Ask HN: How do you deal with people who trust LLMs?"]]></title><description><![CDATA[
<p>I felt the question was based on some shaky assumptions that may lead to a poor answer.<p>Since the OP trusts humans more by default, is it a problem if I point out those assumptions? Ask HN need not become another SO.<p>I did explain the weaknesses of both LLMs and "reputable sources" and suggested people use them as complementary tools. I also suggested using the convenient self-fact-check feature of LLMs, something we can't do as easily with traditional sources.</p>
]]></description><pubDate>Thu, 19 Mar 2026 11:16:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47437526</link><dc:creator>lovelearning</dc:creator><comments>https://news.ycombinator.com/item?id=47437526</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47437526</guid></item></channel></rss>