<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: heyjamesknight</title><link>https://news.ycombinator.com/user?id=heyjamesknight</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 00:07:58 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=heyjamesknight" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by heyjamesknight in "Ask HN: How can we solve the loneliness epidemic?"]]></title><description><![CDATA[
<p>No, I have the feeling that “feeling safe” is “unhealthy” because these online communities children get access to are full of predators who wish them harm.<p>The online communities in question do more damage than good. They encourage isolation and spread social contagion.<p>We should do more as a society, absolutely! But these places are not “stopgaps” because they’re NOT helping.</p>
]]></description><pubDate>Sat, 17 Jan 2026 23:24:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46663160</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=46663160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46663160</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Ask HN: How can we solve the loneliness epidemic?"]]></title><description><![CDATA[
<p>But that's the problem: they're NOT safe in those communities.<p>We've created these unhealthy gardens where young people feel safe, removing any reason for them to engage in the real world. They don't thrive in these places, they slowly withdraw.</p>
]]></description><pubDate>Fri, 16 Jan 2026 16:50:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46648561</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=46648561</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46648561</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Sergey Brin's Unretirement"]]></title><description><![CDATA[
<p>You've cherry-picked a situation where there is an obvious social norm being broken. A better example would be going to the park and sitting on the bench you used to sit on with your ex. I agree with GP that this is healthier than lying despondent in bed.</p>
]]></description><pubDate>Wed, 07 Jan 2026 22:14:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46533866</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=46533866</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46533866</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Sergey Brin's Unretirement"]]></title><description><![CDATA[
<p>Coping mechanisms are complex and diverse. The individual in question lost a major source of meaning-making in their life and was struggling to cope with that loss. I don't believe this is any less healthy than other common responses, which range from societal withdrawal to substance abuse.</p>
]]></description><pubDate>Wed, 07 Jan 2026 18:42:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46530630</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=46530630</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46530630</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Sergey Brin's Unretirement"]]></title><description><![CDATA[
<p>A mentally healthy person wants to be helpful. They want to be seen as helpful and they expect others around them to be helpful as well. This is the foundation of "pro-social" behavior: I benefit the group as much or more than the group benefits me.<p>Tying your identity to the place where you're helpful and where that help is appreciated and acknowledged isn't mental illness.</p>
]]></description><pubDate>Wed, 07 Jan 2026 13:44:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46526285</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=46526285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46526285</guid></item><item><title><![CDATA[New comment by heyjamesknight in "You’re not burnt out, you’re existentially starving"]]></title><description><![CDATA[
<p>In Positive Psychology, the science of meaning in life (not of life) breaks meaning down into three dimensions: coherence, significance, and purpose. If your job isn’t affording you significance (because your actions don’t “matter” in your organization) then your ability to find meaning in that work is threatened.<p>Your work may have coherence and purpose, but if it doesn’t have significance then it isn’t the source of meaning you thought it was.</p>
]]></description><pubDate>Tue, 23 Dec 2025 13:54:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46365330</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=46365330</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46365330</guid></item><item><title><![CDATA[New comment by heyjamesknight in "You’re not burnt out, you’re existentially starving"]]></title><description><![CDATA[
<p>I love your “clean source of sustainable energy” metaphor.  This is a great example of “eudaimonic” well-being, or the idea of “doing well.”</p>
]]></description><pubDate>Tue, 23 Dec 2025 13:47:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46365283</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=46365283</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46365283</guid></item><item><title><![CDATA[New comment by heyjamesknight in "You’re not burnt out, you’re existentially starving"]]></title><description><![CDATA[
<p>Hedonic treadmill only applies to hedonia, not the eudaimonia that meaningful work typically brings. “Doing well” doesn’t have the same elastic snap back that “being well” does, and there’s some evidence it can provide a buffer on the hedonic treadmill effect.</p>
]]></description><pubDate>Mon, 22 Dec 2025 00:18:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46350013</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=46350013</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46350013</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Autism's confusing cousins"]]></title><description><![CDATA[
<p>The center of the normal distribution is “normal” or “normative.” That’s where the term comes from.<p>It’s like saying we shouldn’t call immigrants “aliens” because that conjures images of space. Where do you think the term comes from?</p>
]]></description><pubDate>Sat, 06 Dec 2025 15:31:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46174081</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=46174081</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46174081</guid></item><item><title><![CDATA[New comment by heyjamesknight in "The Case That A.I. Is Thinking"]]></title><description><![CDATA[
<p>Multimodal models aren't really multimodal. The images are mapped to words and then the words are expanded upon by a single-mode LLM.<p>If you didn't know the word "duck", you could still see the duck, hunt the duck, use the duck's feathers for your bedding and eat the duck's meat. You would know it could fly and swim without having to know what either of those actions were called.<p>The LLM "sees" a thing, identifies it as a "duck", and then depends on a single-mode LLM to tell it anything about ducks.</p>
]]></description><pubDate>Tue, 04 Nov 2025 22:41:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=45816714</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=45816714</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45816714</guid></item><item><title><![CDATA[New comment by heyjamesknight in "The Case That A.I. Is Thinking"]]></title><description><![CDATA[
<p>But language is the input and the vector space within which their knowledge is encoded and stored. They don't have a concept of a duck beyond what others have described the duck as.<p>Humans got by for millions of years with our current biological hardware before we developed language. Your brain stores a model of your <i>experience</i>, not just the words other experiencers have shared with you.</p>
]]></description><pubDate>Mon, 03 Nov 2025 19:39:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45803444</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=45803444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45803444</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Andrej Karpathy – It will take a decade to work through the issues with agents"]]></title><description><![CDATA[
<p>You’ve said the same thing fifteen times now.<p>I still don’t want to play with you, sorry.</p>
]]></description><pubDate>Wed, 22 Oct 2025 12:58:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45668384</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=45668384</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45668384</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Andrej Karpathy – It will take a decade to work through the issues with agents"]]></title><description><![CDATA[
<p>Okie dokie mate, whatever you say.<p>Best of luck!</p>
]]></description><pubDate>Wed, 22 Oct 2025 00:19:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45663507</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=45663507</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45663507</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Andrej Karpathy – It will take a decade to work through the issues with agents"]]></title><description><![CDATA[
<p>No moral posturing and no insults. Your behavior is just objectively noxious. Not just to me, not just in this thread: the vast majority of your conversations here go roughly the way this one did. A quick glance at your profile shows roughly half of the comments you make here end up light grey.<p>You have an enormous chip on your shoulder. You consistently make truth claims about entire fields that are still in debate and then you arrogantly shout over the other person when they disagree with you.<p>I strongly suggest you work on this. It will limit you in life. It probably already has. You probably already know how it has, even!<p>I'm not saying this to be mean, or because I "have nothing left to stand on." You're clearly intelligent and you clearly care about this topic. But until you mature and learn to behave, others will continue to withdraw from conversation with you.<p>Best of luck.</p>
]]></description><pubDate>Tue, 21 Oct 2025 15:56:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45657354</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=45657354</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45657354</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Andrej Karpathy – It will take a decade to work through the issues with agents"]]></title><description><![CDATA[
<p>Nah, mate, the conversation never went "beyond my depth." You're just not an enjoyable conversation partner.<p>It doesn't matter how smart (you think) you are. If nobody wants to talk to you, you'll be spinning all that brain matter in the corner by yourself. Based on your comment history here, it looks like this happens to you more often than not.<p>I'm sure you have good points. I could probably learn a thing or two from you—maybe you could learn something from me too! But why on earth would anyone want to engage with someone who behaves like you do?<p>Again, best of luck.</p>
]]></description><pubDate>Tue, 21 Oct 2025 13:05:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45655270</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=45655270</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45655270</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Andrej Karpathy – It will take a decade to work through the issues with agents"]]></title><description><![CDATA[
<p>I’m pretty sure everyone reading can see which of us is the arrogant one.<p>Good day, sir.</p>
]]></description><pubDate>Tue, 21 Oct 2025 11:09:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45654483</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=45654483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45654483</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Andrej Karpathy – It will take a decade to work through the issues with agents"]]></title><description><![CDATA[
<p>Look, mate, you can keep jumping up and down about this all you want. But you're arguing science fiction at this point. Not really worth continuing the conversation, but thanks.<p>Best of luck.</p>
]]></description><pubDate>Mon, 20 Oct 2025 23:08:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=45650569</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=45650569</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45650569</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Andrej Karpathy – It will take a decade to work through the issues with agents"]]></title><description><![CDATA[
<p>> We don’t yet understand the full physics of the brain, and we don’t fully understand LLMs either. That’s the point. The same kind of ignorance applies to both. Yet both produce coherent language, emotion like responses, creativity, reasoning, and abstraction. When two black boxes show convergent behavior under different substrates, the rational conclusion isn’t “one is impossible.” It’s “we’re closer than we realize.”<p>No. The LLM does not produce emotion-like responses. I'd argue no on creativity either. And its reasoning is very limited, confined to domains in its training set.<p>You have fundamental misunderstandings about neuroscience and cognitive science. It's hard to argue with you here because you simply don't know what you don't know.<p>Yes, the human brain is the machine we're describing. And we don't describe it very well. Definitely not at the level of understanding how to reproduce it with bitstrings.<p>I'm glad you're so passionate about this topic. But you're arguing the equivalent of FTL transit and living on Dyson Spheres. It's fun as a thought experiment and may theoretically be possible one day, but the line between what we're capable of today and that imagined future is neither straight nor visible—certainly not to the degree you're asserting here.<p>Will we one day have actual machine intelligence? Maybe. Is it going to come anytime soon, or look anything like the transformer-based LLM?<p>No.</p>
]]></description><pubDate>Mon, 20 Oct 2025 16:17:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45645627</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=45645627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45645627</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Andrej Karpathy – It will take a decade to work through the issues with agents"]]></title><description><![CDATA[
<p>At this point, you’re describing a machine which depends on a level of physics that simply isn’t possible. Even if it were theoretically possible to reconstruct the state of a human mind from physical components, we are so far from understanding how that could be done that it is closer to the realm of impossible than possible.
Your theoretical math box that constructs affective qualia from bit strings isn’t a better description than saying the angels did it. And it bears zero resemblance to the models running today, except, again, in a theoretical, mathematical way.<p>Back-of-the-envelope math puts an estimate of 10^42 bits to capture the information present in your current physical brain state. That's just a single brain, a single state. Now you need to build your mythical decoder device, which can translate qualia from this physical state. Where does it live? What does its output look like? Another 10^40 bitstring?<p>Again, these arguments are fun on paper. But they’re completely removed from reality.</p>
]]></description><pubDate>Mon, 20 Oct 2025 12:16:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45643030</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=45643030</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45643030</guid></item><item><title><![CDATA[New comment by heyjamesknight in "Andrej Karpathy – It will take a decade to work through the issues with agents"]]></title><description><![CDATA[
<p>You are mistaking the map for the territory. The TERRITORY of human experience is higher dimensional. The LLM utilizes a lower resolution mapping of that territory, a projection from experience to textual (or pixel, or waveform, etc.) representations.<p>This is not just a lossy mapping; it excludes entire categories of experience that cannot be captured or encoded except as a pointer to the real experience, one that is often shared by the embodied, embedded, enacted, and extended cognitive beings that have had that experience.<p>I can point to beauty and you can understand me because you've experienced beauty. I cannot encode beauty itself. The LLM cannot experience beauty. It may be able to analyze patterns of things determined beautiful by beauty experiencers, but this is, again, a lower resolution map of the actual experience of beauty. Nobody had to train you to experience beauty—you possess that capability innately.<p>You cannot encode the affective response one experiences when holding their newborn. You cannot encode the cognitive appraisal of a religious experience. You can't even encode the qualia of red except, again, as a pointer to the color.<p>You're also missing that 4E cognitive beings have a fundamental experience of consciousness—particularly the aspect of "here" and "now". The LLM cannot experience either of those phenomena. I cannot encode here and now. But you can, and do, experience both of those constantly.</p>
]]></description><pubDate>Sun, 19 Oct 2025 17:27:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45636006</link><dc:creator>heyjamesknight</dc:creator><comments>https://news.ycombinator.com/item?id=45636006</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45636006</guid></item></channel></rss>