<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sailpvp998</title><link>https://news.ycombinator.com/user?id=sailpvp998</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 20:16:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sailpvp998" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sailpvp998 in ""Self-aware" robots learn by watching humans. Is that a good thing?"]]></title><description><![CDATA[
<p>Calling this “self-aware” is misleading. The robot has body-state awareness, not introspective awareness. And calling it “learning like humans” exaggerates what is mostly sensorimotor adaptation rather than psychologically rich learning shaped by motives, memory, and lived experience.
<p>My main problem here is the definition of the word "self-aware". To me, self-aware means something that can contemplate itself: its existence, its purpose, and the consequences tied to both. It doesn't just mean correcting position based on what a sensor is reporting, or navigating without destroying itself; it means thinking in a nuanced way about oneself. So "self-aware"? I call bullshit. These are just bots that are starting to find ways not to break themselves while doing the one action they're programmed to do. Think of humans: we aren't programmed to do anything. We learn progressively, over years, through lived experience. These bots that "imitate" humans are just copying actions; if they're programmed to, they will copy anything they see, good actions and bad ones alike. Human psychology doesn't work that way. When we see someone do something, we don't mindlessly copy it. We go through the stages of social learning theory: attention, retention, reproduction, and motivation. If we see no motivation to imitate, we simply don't. We factor in our own past experiences, our current surroundings, everything, and then we learn. That is what learning is for us. For the robot, "learning" is little more than motor replication, with no nuance.</p>
]]></description><pubDate>Sun, 26 Apr 2026 09:46:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47908896</link><dc:creator>sailpvp998</dc:creator><comments>https://news.ycombinator.com/item?id=47908896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47908896</guid></item><item><title><![CDATA[New comment by sailpvp998 in "Can LLMs Scale to AGI?"]]></title><description><![CDATA[
<p>I mean, I do have an argument. For all of AI's scale and infrastructure, the one thing it couldn't do was replicate human thought: the unpredictability, the mess. Maybe it never will. The spontaneity of the human mind, the way you can be looking at cracks in a wall and come up with an idea for a painting; AI just never could have done that. That's not to say AI is useless, but these "thinking" models and idea generators are all wrongly used: they produce generic fluff that looks like all other generic fluff, and it gets nowhere. What AI does well right now, even with so much infrastructure and even at the best AI companies (Claude, etc.), is task automation and problem solving. But when it comes to holding a consistent stance or a consistent personality, AI just cannot do it. AI will simply try its best to make you happy; it doesn't have a perspective, it gives you a general average of every available perspective. So no, whatever AGI is, I don't think AI is a concept that can scale to the level of what we biologically are. It isn't fundamentally possible, and we're already starting to see this.</p>
]]></description><pubDate>Sun, 26 Apr 2026 09:32:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47908832</link><dc:creator>sailpvp998</dc:creator><comments>https://news.ycombinator.com/item?id=47908832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47908832</guid></item></channel></rss>