<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: lechatonnoir</title><link>https://news.ycombinator.com/user?id=lechatonnoir</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 23:56:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=lechatonnoir" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by lechatonnoir in "Amazon cuts 16k jobs"]]></title><description><![CDATA[
<p>I mean... you can't think of any way that AI could actually generate new value? Or, more abstractly, of a way in which the Jevons paradox couldn't apply in the case of AI?</p>
]]></description><pubDate>Thu, 29 Jan 2026 05:02:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46805998</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=46805998</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46805998</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Anthropic's original take home assignment open sourced"]]></title><description><![CDATA[
<p>i think by your logic, the only thing they do that is condescending is to say that an interview is not guaranteed.<p>people are mentioning that they do this for a reason, which explains away that behavior, so yeah, it kinda does change whether they are being condescending.</p>
]]></description><pubDate>Thu, 29 Jan 2026 04:59:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46805981</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=46805981</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46805981</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Stop crawling my HTML – use the API"]]></title><description><![CDATA[
<p>it's kind of hard to tell what your position is here. should people not ask chatbots how to scrape html? should people not purchase RAM to run chatbots locally?</p>
]]></description><pubDate>Sun, 14 Dec 2025 20:14:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46266403</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=46266403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46266403</guid></item><item><title><![CDATA[New comment by lechatonnoir in "All it takes is for one to work out"]]></title><description><![CDATA[
<p>i think the goalposts in this thread were slowly moved. people were initially talking about success being predicted by having the excess necessary to comfortably take many shots on goal. it seems like we've granted that this $250k shot was a one-time thing.<p>it is true but irrelevant to the original topic that this is more money than the global poor ever see, and more money than most people ever get to have. i don't think anyone was arguing that this represents <i>zero</i> privilege.</p>
]]></description><pubDate>Tue, 02 Dec 2025 23:34:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46128405</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=46128405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46128405</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Open models by OpenAI"]]></title><description><![CDATA[
<p>I'm pretty sure there's no reason that Anthropic <i>has</i> to do research on open models; it's just that they produced their result on open models so that you can reproduce it without having access to theirs.</p>
]]></description><pubDate>Tue, 05 Aug 2025 18:12:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=44801941</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44801941</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44801941</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Tesla seeks to guard crash data from public disclosure"]]></title><description><![CDATA[
<p>Hmm, that might be possible, but that's essentially not what I assumed. At the very least, they operate on the same hardware, so Autopilot is in some sense "intentionally bad" as a whole.</p>
]]></description><pubDate>Fri, 20 Jun 2025 21:44:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44332404</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44332404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44332404</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Guess I'm a rationalist now"]]></title><description><![CDATA[
<p>I am sure that there are some people who exhibit the behaviors you're describing, but I really don't think the group as a whole is uninterested in prior work or discussion of philosophy in general:<p><a href="https://www.lesswrong.com/w/epistemology" rel="nofollow">https://www.lesswrong.com/w/epistemology</a><p><a href="https://www.lesswrong.com/w/priors" rel="nofollow">https://www.lesswrong.com/w/priors</a><p><a href="https://www.lesswrong.com/posts/2x67s6u8oAitNKF73/" rel="nofollow">https://www.lesswrong.com/posts/2x67s6u8oAitNKF73/</a> (a post noting that the foundational problems in mech interp are grounded in philosophical questions about representation ~150 years old)<p><a href="https://www.lesswrong.com/w/consciousness" rel="nofollow">https://www.lesswrong.com/w/consciousness</a> (the page on consciousness first cites the MIT and Stanford encyclopedias, then provides a timeline from Democritus, through Descartes, Hobbes,... all the way to Nagel, Chalmers, Tegmark).<p>There is also sort of a meme of interest in Thomas Kuhn: <a href="https://www.lesswrong.com/posts/HcjL8ydHxPezj6wrt/book-review-the-structure-of-scientific-revolutions" rel="nofollow">https://www.lesswrong.com/posts/HcjL8ydHxPezj6wrt/book-revie...</a><p>See also these attempts to reference and collate prior literature:
<a href="https://www.lesswrong.com/posts/qc7P2NwfxQMC3hdgm/rationalism-before-the-sequences" rel="nofollow">https://www.lesswrong.com/posts/qc7P2NwfxQMC3hdgm/rationalis...</a><p><a href="https://www.lesswrong.com/posts/xg3hXCYQPJkwHyik2/the-best-textbooks-on-every-subject" rel="nofollow">https://www.lesswrong.com/posts/xg3hXCYQPJkwHyik2/the-best-t...</a><p><a href="https://www.lesswrong.com/posts/SXJGSPeQWbACveJhs/the-best-tacit-knowledge-videos-on-every-subject" rel="nofollow">https://www.lesswrong.com/posts/SXJGSPeQWbACveJhs/the-best-t...</a><p><a href="https://www.lesswrong.com/posts/HLJMyd4ncE3kvjwhe/the-best-reference-works-for-every-subject" rel="nofollow">https://www.lesswrong.com/posts/HLJMyd4ncE3kvjwhe/the-best-r...</a><p><a href="https://www.lesswrong.com/posts/bMmD5qNFKRqKBJnKw/rigorous-political-science" rel="nofollow">https://www.lesswrong.com/posts/bMmD5qNFKRqKBJnKw/rigorous-p...</a><p>Now, one may disagree with the particular choices or philosophical positions taken, but it's pretty hard to say these people are ignorant or not trying to be informed about what prior thinkers have done, especially compared to any particular reference culture, except maybe academics.<p>As for the thing about Aella, I feel she's not as much of a thought leader as you've surmised, and I think doesn't claim to be. My personal view is that she does some interesting semi-rigorous surveying that is unlikely to be done elsewhere. She's not a scientist/statistician or a total revolutionary but her stuff is not devoid of informational value either. Some of her claims are hedged adequately, some of them are hedged a bit inadequately. You might have encountered some particularly (irrationally?) ardent fans.</p>
]]></description><pubDate>Fri, 20 Jun 2025 21:40:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44332374</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44332374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44332374</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Guess I'm a rationalist now"]]></title><description><![CDATA[
<p>Here's a collection of debates about that topic:<p><a href="https://www.lesswrong.com/posts/85mfawamKdxzzaPeK/any-good-criticism-of-karl-popper-s-epistemology?commentId=kaJGMtZYLCQtti4kE" rel="nofollow">https://www.lesswrong.com/posts/85mfawamKdxzzaPeK/any-good-c...</a><p>I personally don't have that much of an interest in this topic, so I can't critique them for quality myself, but they may at least be of relevance to you.</p>
]]></description><pubDate>Fri, 20 Jun 2025 21:31:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44332298</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44332298</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44332298</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Guess I'm a rationalist now"]]></title><description><![CDATA[
<p>I am really not sure where you get any of these ideas. For each of your critiques, there are not only discussions, but taxonomies of compendiums of discussions about the topics at hand on LessWrong, which can easily be found by Googling any keyword or phrase in your comment.<p>On "considering what should be the baseline assumption":<p><a href="https://www.lesswrong.com/w/epistemology" rel="nofollow">https://www.lesswrong.com/w/epistemology</a><p><a href="https://www.lesswrong.com/w/priors" rel="nofollow">https://www.lesswrong.com/w/priors</a>, particularly <a href="https://www.lesswrong.com/posts/hNqte2p48nqKux3wS/trapped-priors-as-a-basic-problem-of-rationality" rel="nofollow">https://www.lesswrong.com/posts/hNqte2p48nqKux3wS/trapped-pr...</a><p>On the idea that "rationalists think that they can just apply rationality infinitely to everything":<p><a href="https://www.lesswrong.com/w/bounded-rationality" rel="nofollow">https://www.lesswrong.com/w/bounded-rationality</a><p>On the critique that rationalists are blind to the fact that "reason isn't the only thing that's important", generously reworded as "reason has to be grounded in a set of human values", some of the most philosophically coherent stuff I see on the internet is from LW:<p><a href="https://www.lesswrong.com/w/metaethics-sequence" rel="nofollow">https://www.lesswrong.com/w/metaethics-sequence</a><p><a href="https://www.lesswrong.com/w/human-values" rel="nofollow">https://www.lesswrong.com/w/human-values</a><p>On "systematically plan to validate":<p><a href="https://www.lesswrong.com/w/rationality-verification" rel="nofollow">https://www.lesswrong.com/w/rationality-verification</a><p><a href="https://www.lesswrong.com/w/making-beliefs-pay-rent" rel="nofollow">https://www.lesswrong.com/w/making-beliefs-pay-rent</a><p>On "what could hold true for one moment could easily shift":<p><a href="https://www.lesswrong.com/w/black-swans" rel="nofollow">https://www.lesswrong.com/w/black-swans</a><p><a href="https://www.lesswrong.com/w/distributional-shifts" rel="nofollow">https://www.lesswrong.com/w/distributional-shifts</a><p><a href="https://www.lesswrong.com/w/forecasting-and-prediction" rel="nofollow">https://www.lesswrong.com/w/forecasting-and-prediction</a></p>
]]></description><pubDate>Fri, 20 Jun 2025 21:28:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44332279</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44332279</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44332279</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Guess I'm a rationalist now"]]></title><description><![CDATA[
<p>Well, yeah, I think it's a pretty socially unaware thing to say about yourself out loud, so that's a pretty strong filter there.<p>It's rather different for a community to say that's a standard they aspire to, which is a much less grandstanding position, IMO.</p>
]]></description><pubDate>Fri, 20 Jun 2025 21:22:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44332231</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44332231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44332231</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Guess I'm a rationalist now"]]></title><description><![CDATA[
<p>I suppose you're right about that, so I can't make the argument go through by saying "mathematics" vs "philosophy". Maybe what I should say instead is that as some dialectics advance and technologies develop, subfields of both sprout up with a lot of low-hanging fruit to pick, and in these cases the new work will be descended from, but not too essentially informed by, the prior work.<p>Just as mathematical logic (at the intersection of math and philosophy) had few true predecessors and was developed very far by maybe only 5-10 individuals cumulatively, just as information theory was basically established by Claude Shannon and maybe two other guys, and just as various aspects of convex optimization or Fourier analysis were only developed in the '80s or so, it stands to reason that the AI-related applications of various aspects of philosophy are ripe to be developed now. (By contrast, we don't see, as much, people on LW trying to redo linear algebra from the ground up, nor more "mature" aspects of philosophy.)<p>(If anything, I think it's more feasible than ever before for a bunch of relative amateurs to non-professionally make real intellectual contributions, noticeably more so than 100 or even 20 years ago. That's what increasing the baseline levels of education/wealth/exposure to information was intended to achieve, on some level, isn't it?)</p>
]]></description><pubDate>Fri, 20 Jun 2025 21:19:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=44332211</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44332211</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44332211</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Guess I'm a rationalist now"]]></title><description><![CDATA[
<p>I think you're conflating different groups of people pretty severely.<p>"shunned" in particular is a really strong word; e.g., global health and biosecurity are two of the named categories at the most central EA events:<p><a href="https://www.effectivealtruism.org/ea-global/events/ea-global-new-york-city-2025" rel="nofollow">https://www.effectivealtruism.org/ea-global/events/ea-global...</a></p>
]]></description><pubDate>Fri, 20 Jun 2025 20:57:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=44332018</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44332018</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44332018</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Guess I'm a rationalist now"]]></title><description><![CDATA[
<p>Well, on a meta level, I think their community has decided that in general it's better to post (and subsequently be able to discuss) ideas that one is not yet very confident about, and ideally that's what the "epistemic status" markers are supposed to indicate to the reader.<p>They can't really be blamed for the fact that others go on to take the ideas more seriously than they intended.<p>(If anything, I think that at least in person, most rationalists are far less confident and far less persuasive than the typical person in proportion to the amount of knowledge/expertise/effort they have on a given topic, particularly in a professional setting, and they would all be well-served to do at least a normal human amount of "write and explain persuasively rather than as a mechanical report of the facts as you see them".)<p>(Also, with all communities there will be a more serious and dedicated core of people, and then those who sort of cargo-cult or who defer much, or at least some, of their thinking to members with more status. This is sort of unavoidable on multiple levels-- for one, it's quite a reasonable thing to do given the amount of information out there, and for another, communities are always composed of people with varying levels of seriousness, sincere people and grifters, careful thinkers and less careful thinkers, etc. (see mobs-geeks-sociopaths))<p>(Obviously even with these caveats there are exceptions to this statement, because society is complex and something about propaganda and consequentialism.)<p>Alternately, I wonder if you think there might be a better way of "writing unconfidently", like, other than not writing at all.</p>
]]></description><pubDate>Fri, 20 Jun 2025 20:49:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44331962</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44331962</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44331962</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Guess I'm a Rationalist Now"]]></title><description><![CDATA[
<p>Aside from the remark given in the other reply to your comment, I wonder what the standard is: how quickly should a community appear to correct its incorrect beliefs for them to not count as sheep?</p>
]]></description><pubDate>Fri, 20 Jun 2025 02:06:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44324122</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44324122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44324122</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Guess I'm a rationalist now"]]></title><description><![CDATA[
<p>> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"<p>edit: my apologies, that was someone else in the thread. I do feel like between the two comments though there is a "damned if you do, damned if you don't". (The original quote above I found absurd upon reading it.)</p>
]]></description><pubDate>Fri, 20 Jun 2025 02:00:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44324100</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44324100</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44324100</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Guess I'm a rationalist now"]]></title><description><![CDATA[
<p>Well, sure, but mathematics is maybe the domain for which this holds most true of any. It's less true for fields that are not as old.<p>I'm not sure this counterpoint generalizes entirely to the original critique, since LessWrongers certainly aren't usually posting about or discussing math as if they've discovered it-- usually substantially more niche topics.</p>
]]></description><pubDate>Fri, 20 Jun 2025 01:51:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44324070</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44324070</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44324070</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Generative AI coding tools and agents do not work for me"]]></title><description><![CDATA[
<p>This is such a pointless, tired take.<p>You want to say this guy's experience isn't reproducible? That's one thing, but that's probably not the case unless you're assuming they're pretty stupid themselves.<p>You want to say that it <i>is</i> reproducible, but that "that doesn't mean AI can think"? Okay, but that's not what the thread was about.</p>
]]></description><pubDate>Tue, 17 Jun 2025 08:20:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44296831</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44296831</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44296831</guid></item><item><title><![CDATA[New comment by lechatonnoir in "Tesla seeks to guard crash data from public disclosure"]]></title><description><![CDATA[
<p>I think the idea is, why would the visualization be so intentionally bad in the Autopilot version as to not detect the kids entirely? What benefit does that confer, or, from another perspective, what software constraint forces this to be the case?</p>
]]></description><pubDate>Fri, 06 Jun 2025 21:46:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44205308</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44205308</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44205308</guid></item><item><title><![CDATA[New comment by lechatonnoir in "My AI skeptic friends are all nuts"]]></title><description><![CDATA[
<p>I would like to understand how you ideally imagine a person solving issues of this type. I'm for understanding things instead of hacking at them in general, and this tendency increases the more central the things to understand are to the things you like to do. However, it's a point of common agreement that just in the domain of computer-related tech, there is far more to learn than a person can possibly know in a lifetime, and so we all have to make choices about which things we want to dive into.<p>I do not expect to go through the process I just described for more than a few hours a year, so I don't think the net loss to my time is huge. I think that the most relevant counterfactual scenario is that I don't learn anything about how these things work at all, and I cope with my problem being unfixed. I don't think this is unusual behavior, to the degree that it's, I think, a common point of humor among Linux users: <a href="https://xkcd.com/963/" rel="nofollow">https://xkcd.com/963/</a> <a href="https://xkcd.com/456/" rel="nofollow">https://xkcd.com/456/</a><p>This is not to mention issues that are structurally similar (in the sense that search is expensive but verification is cheap, and the issue is generally esoteric so there are reduced returns to learning) but don't necessarily have anything to do with the Linux kernel: <a href="https://github.com/electron/electron/issues/42611">https://github.com/electron/electron/issues/42611</a><p>I wonder if you're arguing against a strawman who thinks it's not necessary to learn anything about the basic design/concepts of operating systems at all. I think knowledge of them is fractally deep and you could run into esoterica you don't care about at any level, and as others in the thread have noted, at the very least when you are in the weeds with a problem the LLM can often (not always) be better documentation than the documentation. (Also, I actually think that some engineers do, on a practical level, need to know extremely little about these things, and more power to them; the abstraction is working for them.)<p>Holding what you learn constant, it's nice to have control over the order in which things force you to learn them. Yak-shaving is a phenomenon common enough that we have a term for it, and I don't know that it's virtuous to know how to shave a yak in-depth (or to the extent that it is, some days you are just trying to do something else).</p>
]]></description><pubDate>Tue, 03 Jun 2025 18:05:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44172829</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44172829</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44172829</guid></item><item><title><![CDATA[New comment by lechatonnoir in "My AI skeptic friends are all nuts"]]></title><description><![CDATA[
<p>I relate to this a bit, and on a meta level I think the only way out is through. I'm trying to embrace optimizing the big picture process for my enjoyment and for positive and long-term effective mental states, which does include thinking about when not to use the thing and being thoughtful about exactly when to lean on it.</p>
]]></description><pubDate>Tue, 03 Jun 2025 05:35:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=44166686</link><dc:creator>lechatonnoir</dc:creator><comments>https://news.ycombinator.com/item?id=44166686</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44166686</guid></item></channel></rss>