<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: caetris2</title><link>https://news.ycombinator.com/user?id=caetris2</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 22:03:04 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=caetris2" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by caetris2 in "Graph-based AI model maps the future of innovation"]]></title><description><![CDATA[
<p>The paper is a tremendous effort of passion and love for the art of science and the science of deriving discovery from art. I assure you, this person is someone to pay attention to and I hope they never give up on loving the work they do.</p>
]]></description><pubDate>Thu, 14 Nov 2024 02:08:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=42132430</link><dc:creator>caetris2</dc:creator><comments>https://news.ycombinator.com/item?id=42132430</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42132430</guid></item><item><title><![CDATA[Collaborative Cognitive Memory in AI Systems]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.kennybastani.com/2024/11/collaborative-cognitive-memory-in-ai.html">https://www.kennybastani.com/2024/11/collaborative-cognitive-memory-in-ai.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42132113">https://news.ycombinator.com/item?id=42132113</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 14 Nov 2024 01:13:49 +0000</pubDate><link>https://www.kennybastani.com/2024/11/collaborative-cognitive-memory-in-ai.html</link><dc:creator>caetris2</dc:creator><comments>https://news.ycombinator.com/item?id=42132113</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42132113</guid></item><item><title><![CDATA[New comment by caetris2 in "Project Sid: Many-agent simulations toward AI civilization"]]></title><description><![CDATA[
<p>Yes... Imagine a blog post at the same quality as this paper that framed their work and their pursuits in a way that <i>genuinely got people excited about what could be around the corner</i>, but with context making clear exactly how far away they are from achieving what would be the ultimate vision.</p>
]]></description><pubDate>Mon, 04 Nov 2024 02:22:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=42037837</link><dc:creator>caetris2</dc:creator><comments>https://news.ycombinator.com/item?id=42037837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42037837</guid></item><item><title><![CDATA[New comment by caetris2 in "Project Sid: Many-agent simulations toward AI civilization"]]></title><description><![CDATA[
<p>These are extremely hard problems to solve, and it is important for any claims to be validated at this early phase of generative AI.</p>
]]></description><pubDate>Mon, 04 Nov 2024 02:19:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=42037820</link><dc:creator>caetris2</dc:creator><comments>https://news.ycombinator.com/item?id=42037820</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42037820</guid></item><item><title><![CDATA[New comment by caetris2 in "Project Sid: Many-agent simulations toward AI civilization"]]></title><description><![CDATA[
<p>You've absolutely nailed it here, I agree. To make any progress at all on the tremendously difficult problem they are trying to solve, they need to be frank about just how far away they are from what they are marketing.<p>I whole-heartedly support the authors' commercial interest in drumming up awareness and engagement. This is definitely a cool thing to be working on; however, it would make more sense to frame the situation more honestly and attract the folks who are drawn to solving tremendously <i>hard</i> problems, based on a level of expertise and awareness that truly moves the ball forward.<p>What would be far more interesting would be for the folks involved to lay out all the ten thousand things that went wrong in their experiments, along with the common-sense conclusions from those findings (just like the one you shared, which is truly insightful and correct).<p>We need to move past this industry and its enablers that continually try to win using the wrong methodology -- pushing away the most inventive and innovative people who are ripe and ready to make paradigm shifts in the AI field and industry.</p>
]]></description><pubDate>Mon, 04 Nov 2024 02:11:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42037783</link><dc:creator>caetris2</dc:creator><comments>https://news.ycombinator.com/item?id=42037783</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42037783</guid></item><item><title><![CDATA[New comment by caetris2 in "Project Sid: Many-agent simulations toward AI civilization"]]></title><description><![CDATA[
<p>LLMs are stateless and do not remember the past (as in, they don't have a database), which makes the training data a non-issue here. The claims made in this paper therefore <i>are</i> not possible, because the simulation would require each agent to have a memory context larger than any available LLM's context window. The claims made here by the original poster are patently false.<p>The ideas here are not supported by any validated understanding of the limitations of language models. I want to be clear -- the kind of AI purportedly used in the paper is something that has existed in video games for over two decades, akin to the NPCs in Starcraft or Diablo.<p>The <i>key</i> issue is that this is an intentional false claim that can certainly damage mainstream understanding of LLM safety and of what is possible at the current state of the art.<p>Agentic systems are not well-suited to achieve any of the things proposed in the paper, and generative AI does not enable these kinds of advancements.</p>
]]></description><pubDate>Mon, 04 Nov 2024 00:21:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42037271</link><dc:creator>caetris2</dc:creator><comments>https://news.ycombinator.com/item?id=42037271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42037271</guid></item><item><title><![CDATA[New comment by caetris2 in "Project Sid: Many-agent simulations toward AI civilization"]]></title><description><![CDATA[
<p>I've reviewed the paper and I'm confident it was fabricated over a collection of false claims. The claims made are not genuine and should not be taken at face value without peer review. In many cases, the provided charts and graphics are sophisticated forgeries when reviewed and vetted against the claims they are meant to support.<p>It is currently not possible for any kind of LLM to do what is being proposed. Maybe the intentions are good with regard to commercial interests, but I want to be clear: this paper seems to indicate that election-related activities were coordinated by groups of AI agents in a simulation. These kinds of claims require substantial evidence, and that was not provided.<p>The prompts that are provided are not in any way connected to an applied usage of the LLMs described.</p>
]]></description><pubDate>Sun, 03 Nov 2024 22:34:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=42036645</link><dc:creator>caetris2</dc:creator><comments>https://news.ycombinator.com/item?id=42036645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42036645</guid></item></channel></rss>