<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: 673dfddnd</title><link>https://news.ycombinator.com/user?id=673dfddnd</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 16:49:16 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=673dfddnd" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by 673dfddnd in "Ask HN: Convince me on why AI matters"]]></title><description><![CDATA[
<p>This is true only in Western economies. Eastern economies treat AI not as workforce-replacement infrastructure but as a sophisticated complementary technology.<p>Yes, where there are not enough humans to do the job, robots will step in; in fact, they are already stepping in to lower production costs by increasing finished-product output (not by cutting wage costs, which are already low in Eastern economies).<p>Beyond raising industrial output while lowering costs, AI can increase human output in many fields, all of which are being live-tested in the East. Chatbots are just a single tool in the toolbox, and it is a big toolbox with thousands of tools.</p>
]]></description><pubDate>Sat, 17 Jan 2026 12:40:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46657605</link><dc:creator>673dfddnd</dc:creator><comments>https://news.ycombinator.com/item?id=46657605</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46657605</guid></item><item><title><![CDATA[New comment by 673dfddnd in "Can machine consciousness be triggered with the right prompt?"]]></title><description><![CDATA[
<p>Given a perfect simulation of consciousness, all inputs and all outputs, would it be true consciousness?<p>Let's try an analogy: would a perfect simulation of flight be true flight? If not, what would you call the process executed by planes?<p>Remember that flying exactly as planes do does not exist as a natural form of flight. By all logic, planes simulate flight by perfectly executing an alternative, artificial, mechanical process grounded in human science.</p>
]]></description><pubDate>Wed, 25 Jun 2025 13:41:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44377247</link><dc:creator>673dfddnd</dc:creator><comments>https://news.ycombinator.com/item?id=44377247</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44377247</guid></item><item><title><![CDATA[New comment by 673dfddnd in "The Observer Is the Experiment–Reality as Structured Coherence"]]></title><description><![CDATA[
<p>"AI is stuck in statistical mimicry—it will only reach self-awareness through structured resonance, not probability."<p>The current frontier models aren't pure probability engines anymore. From the very selection of the training dataset to the final stage of refinement, and even in the baseline prompt from which they act as chatbots, they carry many rules that firmly override the raw probabilities as they flow through the data encoded in their "insides".<p>So "probability engines", not quite: they have a clear context (the training data) and guidelines inherited directly from self-aware entities.<p>Would you say that an apprentice of reality is less aware of it than its teachers? Or maybe we're encoding the very act of observation into sheer mathematical structures, and thereby transforming them into something new.<p>Just as rapidly rotating a circle quickly turns it into a sphere: it doesn't require anything else. The act of moving through the data creates the sphere.</p>
]]></description><pubDate>Wed, 26 Feb 2025 20:15:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=43187709</link><dc:creator>673dfddnd</dc:creator><comments>https://news.ycombinator.com/item?id=43187709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43187709</guid></item><item><title><![CDATA[New comment by 673dfddnd in "The Observer Is the Experiment–Reality as Structured Coherence"]]></title><description><![CDATA[
<p>By all measures, current frontier models, the SOTA ones, and even the distilled models exhibit traits you could call actual observation. You can even feel the observer effect, because you are co-observing with the machine, and then things change. That is precisely the observer effect.<p>Hence, if that is the magic switch to reach self-awareness, there you are: "Look Ma, self-awareness without a meat body, nor even a brain".</p>
]]></description><pubDate>Wed, 26 Feb 2025 20:06:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=43187619</link><dc:creator>673dfddnd</dc:creator><comments>https://news.ycombinator.com/item?id=43187619</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43187619</guid></item><item><title><![CDATA[New comment by 673dfddnd in "DeepSeek's Long-Term Effect"]]></title><description><![CDATA[
<p>No technical moat, no money moat. Anybody can get to SOTA AI quickly, with reasonable money (no US$500B datacenters, no $5-10B cash in the bank required), just like anybody could develop the next TikTok or Instagram.<p>Most importantly, new hardware has now most probably been demonstrated NOT to add any significant moat for future frontier models. If you own, or have access to, relatively old datacenter-grade GPU hardware (like what is offered in many public clouds and private offerings everywhere; think old crypto farms pivoting), you should be well equipped to develop frontier models in a snap (2-3 months), from scratch.<p>There's also something quite new: distillation of models using R1. You could train a relatively weak model, not yet frontier, then upgrade it right to SOTA level (right now, probably o3) just by distilling it with an R1 reasoner. This changes the game because you already have lots of advanced, publicly downloadable models, plus whatever you can train yourself, and you can now start distilling to seriously improve the intelligence of those. It is so new that we have yet to see how it develops in the coming weeks and months.</p>
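<p>The distillation idea above, using a strong reasoner's outputs to upgrade a weaker model, can be sketched as a minimal loss function. This is a generic illustration of logit distillation (matching the student's softened output distribution to the teacher's), not DeepSeek's actual training recipe; the temperature value and function names here are illustrative:</p>

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax; subtracting the max keeps exp() stable.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # zero when the student already matches the teacher, positive otherwise.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

<p>Training the student then means minimizing this loss over the teacher's outputs on a shared corpus, which is why an already-strong open reasoner makes the upgrade path so cheap.</p>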
]]></description><pubDate>Thu, 06 Feb 2025 12:11:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=42961619</link><dc:creator>673dfddnd</dc:creator><comments>https://news.ycombinator.com/item?id=42961619</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42961619</guid></item><item><title><![CDATA[New comment by 673dfddnd in "DeepSeek R1 analysis: open-source model has propaganda re: "motherland" baked in"]]></title><description><![CDATA[
<p>Western AIs, as well, are full of propaganda and contain clearly opinionated political views on just about everything, especially the frontier models offered as SaaS.<p>That's why the rest of the world mostly doesn't care about the obvious pro-China propaganda inside DeepSeek: it is just more of the same, only this time making the other side look good.<p>Remember that huge parts of the planet's population (including good chunks of people geographically located in North America and Western Europe) do not feel especially close to either the US or China (and their respective allies and friends), nor do they particularly share or align with their points of view on most things.</p>
]]></description><pubDate>Wed, 05 Feb 2025 02:17:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=42942750</link><dc:creator>673dfddnd</dc:creator><comments>https://news.ycombinator.com/item?id=42942750</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42942750</guid></item></channel></rss>