<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: snowcrash123</title><link>https://news.ycombinator.com/user?id=snowcrash123</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 17:52:48 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=snowcrash123" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by snowcrash123 in "LLM Powered Autonomous Agents"]]></title><description><![CDATA[
<p>Good read. Beyond the finite context length, task decomposition, and natural-language-as-interface issues mentioned in the article, autonomous agents currently face a lot of other problems.
I think that for agents to truly find adoption in the real world, agent trajectory fine-tuning is a critical component: how do you make an agent perform better at a particular objective with every subsequent run? Basically, making agents learn the way we learn through repetition.<p>I also think current LLMs might not fit agent use cases well in the mid to long term, because the RL they go through is based on input/best-output pairs, whereas the intelligence you need in agents is more about building an algorithm to achieve an objective on the fly. This perhaps requires a new type of large model (Large Agent Models?) trained using RLfD (Reinforcement Learning from Demonstration).<p>Another key missing piece is a highly configurable software middleware between intelligence (LLMs), memory (vector DBs ~ LTM, STM), tools, and workflows across every iteration. The current agent core loop for finding the next best action is too simplistic. The core self-prompting loop, or iteration, of an agent should be configurable for the use case at hand. For example, in BabyAGI every iteration goes through a workflow of Plan, Prioritize, and Execute; in AutoGPT the agent finds the next best action based on LTM/STM; in GPT Engineer it is write specs > write tests > write code. For a dev infra monitoring agent, this workflow might be totally different: consume logs from tools like Grafana, Splunk, and APMs > check for an anomaly > if there is an anomaly, take human input for feedback. Every real-world use case has its own workflow, and current agent frameworks have this hard-coded in the base prompt. In SuperAGI (<a href="https://superagi.com" rel="nofollow noreferrer">https://superagi.com</a>) (disclaimer: I'm the creator), the core iteration workflow of an agent can be defined as part of agent provisioning.<p>Another missing piece is the notion of Knowledge. 
Agents currently depend entirely on the knowledge in LLMs or on search results to execute tasks, but if a specialised knowledge set is plugged into an agent, it performs significantly better.</p>
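<p>A rough sketch of what a configurable iteration workflow could look like, with the per-iteration steps supplied at provisioning time instead of hard-coded in a base prompt (all names here are hypothetical, not SuperAGI's actual API):</p>

```python
# Minimal sketch of a configurable agent iteration workflow.
# Step functions and the Agent class are illustrative only.
from typing import Callable, Dict, List

Step = Callable[[dict], dict]  # each step transforms the agent's state

class Agent:
    def __init__(self, workflow: List[Step]):
        # The workflow is injected at provisioning time, so the same
        # core loop can run BabyAGI-style, AutoGPT-style, or a custom
        # monitoring pipeline without changing the base prompt.
        self.workflow = workflow

    def run_iteration(self, state: dict) -> dict:
        for step in self.workflow:
            state = step(state)
        return state

# BabyAGI-style iteration: Plan -> Prioritize -> Execute
def plan(state: dict) -> dict:
    state["plan"] = ["write report"]
    return state

def prioritize(state: dict) -> dict:
    state["plan"].sort()
    return state

def execute(state: dict) -> dict:
    state["done"] = state["plan"].pop(0)
    return state

baby_agi_like = Agent(workflow=[plan, prioritize, execute])
result = baby_agi_like.run_iteration({"objective": "demo"})
```

<p>The point is that swapping in a different step list (e.g. consume logs > detect anomaly > ask human) changes the agent's behaviour per use case with no prompt surgery.</p>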
]]></description><pubDate>Tue, 27 Jun 2023 11:57:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=36491622</link><dc:creator>snowcrash123</dc:creator><comments>https://news.ycombinator.com/item?id=36491622</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36491622</guid></item><item><title><![CDATA[New comment by snowcrash123 in "Show HN: Open-source framework to build, manage and run Autonomous Agents"]]></title><description><![CDATA[
<p>LangChain's agents are lightweight implementations over existing libraries. SuperAGI, on the other hand, is focused on real-world production deployment.</p>
]]></description><pubDate>Wed, 21 Jun 2023 12:58:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=36417870</link><dc:creator>snowcrash123</dc:creator><comments>https://news.ycombinator.com/item?id=36417870</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36417870</guid></item><item><title><![CDATA[New comment by snowcrash123 in "Show HN: Open-source framework to build, manage and run Autonomous Agents"]]></title><description><![CDATA[
<p>Currently there is support for GPT-3.5, GPT-3.5 16k, and GPT-4, and there are open PRs for open-source models like GPT4All and Vicuna. Going forward, the idea is to integrate with as many models as possible; philosophically, it is model-agnostic.</p>
]]></description><pubDate>Wed, 21 Jun 2023 12:51:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=36417814</link><dc:creator>snowcrash123</dc:creator><comments>https://news.ycombinator.com/item?id=36417814</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36417814</guid></item><item><title><![CDATA[New comment by snowcrash123 in "Show HN: Open-source framework to build, manage and run Autonomous Agents"]]></title><description><![CDATA[
<p>The goal of SuperAGI is to build useful autonomous agents, and to do that I have included a bunch of things in the project that aren't in AutoGPT etc., like agent trajectory fine-tuning, running concurrent agents or agent clusters, and configurable workflows for each iteration of an agent.<p>This project came out of building an autonomous marketing app, so I faced some of the challenges of using AutoGPT, BabyAGI, etc. in production.</p>
]]></description><pubDate>Wed, 21 Jun 2023 12:40:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=36417702</link><dc:creator>snowcrash123</dc:creator><comments>https://news.ycombinator.com/item?id=36417702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36417702</guid></item><item><title><![CDATA[Show HN: Open-source framework to build, manage and run Autonomous Agents]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/TransformerOptimus/SuperAGI">https://github.com/TransformerOptimus/SuperAGI</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=36417564">https://news.ycombinator.com/item?id=36417564</a></p>
<p>Points: 6</p>
<p># Comments: 8</p>
]]></description><pubDate>Wed, 21 Jun 2023 12:23:30 +0000</pubDate><link>https://github.com/TransformerOptimus/SuperAGI</link><dc:creator>snowcrash123</dc:creator><comments>https://news.ycombinator.com/item?id=36417564</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36417564</guid></item><item><title><![CDATA[New comment by snowcrash123 in "Emerging architectures for LLM applications"]]></title><description><![CDATA[
<p>I agree with the excerpt on agents. Reliability and reproducibility of task completion is the biggest problem for agents to cross the chasm to real-life use cases. When agents are given an objective, they reason about the next best action from first principles, or from scratch, and the agent trajectory ends up becoming more of a linguistic dance. We are solving some of these agent-specific problems at SuperAGI (<a href="https://github.com/TransformerOptimus/SuperAGI">https://github.com/TransformerOptimus/SuperAGI</a>) (disclaimer: I'm the creator) by doing agent trajectory fine-tuning using recursive instructions. Think of the objective as telling an agent to go from A to B, and of instructions as giving it directions about the route. These instructions can be self-created after every run and fed into subsequent runs to improve the trajectory.<p>Another problem with agents: most independent agents can handle only a very thin slice of a use case, but for complex knowledge-work tasks, more often than not, one agent is not enough. You need a team of agents. We introduced the concept of Agent Clusters, which operate in a master/slave architecture and coordinate among themselves via shared memory and a shared task list to complete nuanced tasks.<p>Another big bottleneck, I think, is the lack of a notion of Knowledge for agents. We have LTM and STM, but knowledge is a specialised understanding of a particular class of objectives (e-commerce customer support, account-based marketing, medical diagnostics for a particular condition, etc.) plugged into the agent. Currently agents lean on the knowledge available in LLMs. LLMs are great for intelligence, but not necessarily for the knowledge required for an objective. 
So we added a concept of knowledge: an embedding plugged into the agent apart from LTM/STM.<p>There are a lot of other challenges that need to be solved, like agent performance monitoring, agent-specific models, and agent-to-agent communication, to truly solve for agents deployed in production. I'm not sure about the article's point that agents might take over the entire stack, because autonomous agentic behaviour is good for certain use cases, not for all kinds of apps.</p>
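<p>The recursive-instructions idea can be sketched in a few lines: each run emits distilled "directions" that are fed into the next run, so the trajectory improves instead of restarting from scratch every time (the function below is a stand-in, not SuperAGI's actual implementation):</p>

```python
# Sketch of trajectory fine-tuning via recursive instructions.
# run_agent is a placeholder for a real agent run that returns the
# accumulated route directions ("lessons") for the next run.
from typing import List

def run_agent(objective: str, instructions: List[str]) -> List[str]:
    # A real implementation would execute the agent loop and distil
    # new instructions from this run's trajectory; here we just
    # append a synthetic lesson to keep the sketch runnable.
    new_lesson = f"lesson from run {len(instructions) + 1}"
    return instructions + [new_lesson]

instructions: List[str] = []
for _ in range(3):
    # Each run receives the instructions from all prior runs.
    instructions = run_agent("go from A to B", instructions)
```

<p>After three runs the agent carries three accumulated instructions; in practice the distillation step is where the quality of the trajectory improvement is won or lost.</p>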
]]></description><pubDate>Wed, 21 Jun 2023 12:10:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=36417446</link><dc:creator>snowcrash123</dc:creator><comments>https://news.ycombinator.com/item?id=36417446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36417446</guid></item><item><title><![CDATA[New comment by snowcrash123 in "OpenLLM"]]></title><description><![CDATA[
<p>This looks like a very cool and much-needed project.</p>
]]></description><pubDate>Mon, 19 Jun 2023 12:03:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=36390067</link><dc:creator>snowcrash123</dc:creator><comments>https://news.ycombinator.com/item?id=36390067</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36390067</guid></item></channel></rss>