<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: liukidar</title><link>https://news.ycombinator.com/user?id=liukidar</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 27 Apr 2026 08:38:19 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=liukidar" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by liukidar in "Show HN: Smooth CLI – Token-efficient browser for AI agents"]]></title><description><![CDATA[
<p>Ahah, indeed that's true... That's why we've just released Smooth CLI (<a href="https://docs.smooth.sh/cli/overview">https://docs.smooth.sh/cli/overview</a>) and the SKILL.md (smooth-sdk/skills/smooth-browser/SKILL.md) associated with it. That should contain everything your agent needs to know to use Smooth. We will definitely add an LLM-friendly reference to it on the landing page and in the docs introduction.</p>
]]></description><pubDate>Fri, 06 Feb 2026 15:26:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46913979</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=46913979</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46913979</guid></item><item><title><![CDATA[Show HN: Smooth – Faster, cheaper browser agent API]]></title><description><![CDATA[
<p>Hey there HN! We're Antonio and Luca, and we're excited to introduce Smooth, a state-of-the-art browser agent that is <i>5x faster</i> and <i>7x cheaper</i> than Browser Use (<a href="https://docs.circlemind.co/performance">https://docs.circlemind.co/performance</a>).<p>We built Smooth because existing browser agents were slow, expensive, and unreliable. Even simple tasks could take minutes and cost dollars in API credits.<p>We started as users of Browser Use, but the pain was obvious. So we built something better, and along the way we discovered two principles that make agents actually work.<p>(1) Think like the LLM (<a href="https://x.com/karpathy/status/1937902205765607626" rel="nofollow">https://x.com/karpathy/status/1937902205765607626</a>).<p>The most important thing is to put yourself in the shoes of the LLM, especially when designing the context. How you present the problem to the LLM determines whether it succeeds or fails. Imagine playing chess with an LLM. You could represent the board in countless ways - image, markdown, JSON, etc. Which one you choose matters more than any other part of the system. Clean, intuitive context is everything. We call this LLM-Ex.<p>(2) Let them write code (<a href="https://arxiv.org/pdf/2401.07339" rel="nofollow">https://arxiv.org/pdf/2401.07339</a>)<p>Tool calling is limited. If you want agents that can handle complex logic and manipulate objects reliably, you need code. Coding offers a richer, more composable action space. Suddenly, designing for the agent feels more like designing for a human developer, which makes everything simpler.
By applying these two principles religiously, we realized you don't need huge models to get reliable results: small, efficient models can deliver higher reliability along with human-speed navigation and a huge cost reduction.<p>How it works:<p>1. Extract: we parse the rendered page and extract all the relevant elements.<p>2. Filter and Clean: then, we use some simple heuristics to clean up the webpage. If an element is not interactive, e.g. because a banner is covering it, we remove it.<p>3. Recursively separate sections: we use several heuristics to represent the webpage in a way that is both LLM-friendly and as close as possible to how humans see it.<p>We packaged Smooth in an easy API with instant browser spin-up, custom proxies, persistent sessions, and auto-CAPTCHA solvers. Our goal is to give you this infrastructure so that you can focus on what's important: building great apps for your users.<p>Before we built this, Antonio was at Amazon, Luca was finishing a PhD at Oxford, and we've been obsessed with reliable AI agents for years. Now we know: if you want agents to work reliably, focus on the context.<p>Try it for free at <a href="https://zero.circlemind.co/developer">https://zero.circlemind.co/developer</a><p>Docs are here: <a href="https://docs.circlemind.co">https://docs.circlemind.co</a><p>Demo video: <a href="https://youtu.be/18v65oORixQ" rel="nofollow">https://youtu.be/18v65oORixQ</a><p>We'd love feedback :)</p>
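<p>To make the three steps concrete, here's a toy Python sketch. The element format and all function names are our own illustration, not Smooth's actual internals: it prunes elements covered by an overlay, then splits the survivors into vertical sections the way a human would chunk the page.</p><pre><code>def is_visible(el, overlays):
    # Step 2: drop elements whose center is covered by an overlay (e.g. a banner).
    x, y = el["center"]
    return not any(x0 <= x <= x1 and y0 <= y <= y1 for (x0, y0, x1, y1) in overlays)

def build_sections(elements, gap):
    # Step 3: greedily split the visible elements into vertical sections,
    # approximating how a human groups the page while scanning top to bottom.
    elements = sorted(elements, key=lambda e: e["center"][1])
    sections, last_y = [], None
    for el in elements:
        y = el["center"][1]
        if last_y is None or y - last_y > gap:
            sections.append([])  # big vertical gap -> start a new section
        sections[-1].append(el)
        last_y = y
    return sections

# Step 1 would extract these from the rendered page; here they are hard-coded.
elements = [
    {"text": "Newsletter signup", "center": (200, 30)},   # hidden under the banner
    {"text": "Search",            "center": (200, 400)},
    {"text": "Filters",           "center": (200, 430)},
    {"text": "First result",      "center": (200, 700)},
]
overlays = [(0, 0, 400, 50)]  # a cookie banner covering the top strip

visible = [e for e in elements if is_visible(e, overlays)]
sections = build_sections(visible, gap=100)
</code></pre><p>The real heuristics are richer, but the shape of the computation is the same: prune what a human couldn't interact with, then group what a human would perceive together.</p>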
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45027597">https://news.ycombinator.com/item?id=45027597</a></p>
<p>Points: 54</p>
<p># Comments: 16</p>
]]></description><pubDate>Tue, 26 Aug 2025 15:05:02 +0000</pubDate><link>https://www.smooth.sh/</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=45027597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45027597</guid></item><item><title><![CDATA[New comment by liukidar in "Show HN: FastGraphRAG – Better RAG using good old PageRank"]]></title><description><![CDATA[
<p>LLMs are only used to construct the graph; to navigate it we use an algorithmic approach. As of now, what we do is very similar to HippoRAG (<a href="https://github.com/OSU-NLP-Group/HippoRAG">https://github.com/OSU-NLP-Group/HippoRAG</a>), and their paper gives a good overview of how things work under the hood!</p>
]]></description><pubDate>Mon, 18 Nov 2024 23:19:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=42178258</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=42178258</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42178258</guid></item><item><title><![CDATA[New comment by liukidar in "Show HN: FastGraphRAG – Better RAG using good old PageRank"]]></title><description><![CDATA[
<p>That would be awesome! We have a Discord you can join and we can talk there (the link is in the GitHub repo; message Antonio),
or you can email antonio [at] circlemind.com</p>
]]></description><pubDate>Mon, 18 Nov 2024 23:15:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=42178233</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=42178233</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42178233</guid></item><item><title><![CDATA[New comment by liukidar in "Show HN: FastGraphRAG – Better RAG using good old PageRank"]]></title><description><![CDATA[
<p>Thanks for sharing! These are all very helpful insights! We'll keep this in mind :)</p>
]]></description><pubDate>Mon, 18 Nov 2024 23:13:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=42178206</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=42178206</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42178206</guid></item><item><title><![CDATA[New comment by liukidar in "Show HN: FastGraphRAG – Better RAG using good old PageRank"]]></title><description><![CDATA[
<p>We are building connectors for that, so it will soon be supported :) At the moment we are using python-igraph (which does everything locally), as we wanted to offer something as ready to use as possible.</p>
]]></description><pubDate>Mon, 18 Nov 2024 23:03:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=42178112</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=42178112</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42178112</guid></item><item><title><![CDATA[New comment by liukidar in "Show HN: FastGraphRAG – Better RAG using good old PageRank"]]></title><description><![CDATA[
<p>This is super interesting! Thanks for sharing. Here we are talking about graphs with millions of nodes/edges, so efficiency is not that big of a deal, since everything is going to be parsed by an LLM to craft an answer, which will always be the bottleneck. Indeed PageRank is the first step, but we would be happy to test more accurate alternatives. Importantly, we are using personalized PageRank here, meaning we give specific initial weights to a (potentially quite large) set of nodes; would TC support that (as well as giving weights to edges, since we are also looking into that)?</p>
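<p>For readers following along, here's a minimal pure-Python sketch of what personalized PageRank with seed weights on nodes and weights on edges computes (the two features discussed above). It's toy-sized, the names are our own, and it assumes every node has at least one outgoing edge, so there is no dangling-node handling.</p><pre><code>def personalized_pagerank(n, edges, reset, damping=0.85, iters=100):
    """edges: list of (src, dst, weight); reset: per-node seed weight."""
    out_w = [0.0] * n
    for s, _, w in edges:
        out_w[s] += w
    total = sum(reset)
    reset = [r / total for r in reset]  # normalize the seed distribution
    rank = reset[:]
    for _ in range(iters):
        # restart mass goes back to the seeds; the rest follows weighted edges
        new = [(1 - damping) * r for r in reset]
        for s, t, w in edges:
            new[t] += damping * rank[s] * w / out_w[s]
        rank = new
    return rank

# Nodes 0 and 1 are the seeds; node 2 sits downstream of both.
edges = [(0, 2, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 1.0)]
rank = personalized_pagerank(4, edges, reset=[1.0, 1.0, 0.0, 0.0])
</code></pre><p>python-igraph exposes the same computation as <code>Graph.personalized_pagerank</code>, which (per its docs) accepts a <code>reset</code> distribution over vertices and per-edge <code>weights</code>.</p>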
]]></description><pubDate>Mon, 18 Nov 2024 23:02:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42178100</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=42178100</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42178100</guid></item><item><title><![CDATA[New comment by liukidar in "Show HN: FastGraphRAG – Better RAG using good old PageRank"]]></title><description><![CDATA[
<p>We have tried everything from short novels to full documentation sets of a few million tokens, and both seem to create interesting graphs; it would be great to hear some feedback as more people start using it :)</p>
]]></description><pubDate>Mon, 18 Nov 2024 22:57:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=42178056</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=42178056</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42178056</guid></item><item><title><![CDATA[New comment by liukidar in "Show HN: FastGraphRAG – Better RAG using good old PageRank"]]></title><description><![CDATA[
<p>Hey! Our todo list is a bit swamped right now, but we'll try to have a look at that as soon as possible. On the Ollama GitHub I found conflicting information: <a href="https://github.com/ollama/ollama/issues/2416">https://github.com/ollama/ollama/issues/2416</a> and <a href="https://github.com/ollama/ollama/pull/2925">https://github.com/ollama/ollama/pull/2925</a>
They also suggest looking at this: <a href="https://github.com/severian42/GraphRAG-Local-UI/blob/main/embedding_proxy.py">https://github.com/severian42/GraphRAG-Local-UI/blob/main/em...</a><p>Hope this helps!</p>
]]></description><pubDate>Mon, 18 Nov 2024 22:54:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=42178014</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=42178014</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42178014</guid></item><item><title><![CDATA[New comment by liukidar in "Show HN: FastGraphRAG – Better RAG using good old PageRank"]]></title><description><![CDATA[
<p>It is to mark the package as private (in the sense that for normal usage you shouldn't need it). We are still writing the documentation on how to customize every little bit of the graph construction and querying pipeline, once that is ready we will expose the right tools (and files) for all of that :) For now just go with `from fast_graphrag import GraphRAG` and you should be good to go :)</p>
]]></description><pubDate>Mon, 18 Nov 2024 19:04:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=42175735</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=42175735</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42175735</guid></item><item><title><![CDATA[New comment by liukidar in "Show HN: FastGraphRAG – Better RAG using good old PageRank"]]></title><description><![CDATA[
<p>The graph is currently stored using python-igraph. The codebase is designed so that it is easy to integrate any graph database by writing a light wrapper around it (we will provide support for stores like neo4j in the near future). We haven't tried triplex, since we found gpt4o-mini fast and precise enough for now (we use it not only to extract entities and relationships, but also to generate descriptions and resolve conflicts), but for sure with fine-tuning results should improve.
The graph is queried by finding an initial set of nodes relevant to a given query and then running personalized PageRank from those nodes to find other relevant passages. Currently, we select the initial nodes with semantic search, both on the whole query and on entities extracted from it, but we are planning other exciting additions to this method :)</p>
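<p>Roughly, the two-stage query flow looks like this sketch (our own toy illustration, not the actual fast_graphrag internals; the embeddings are hard-coded where a real embedding model would be called):</p><pre><code>import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy entity embeddings; a real system would get these from an embedding model.
entities = {"Scrooge": [1.0, 0.1], "Marley": [0.9, 0.2], "London": [0.1, 1.0]}
query_vec = [1.0, 0.0]  # embedding of, say, "Who is Scrooge?"

# Stage 1: semantic search picks the seed nodes for the walk.
seeds = sorted(entities, key=lambda e: cosine(entities[e], query_vec), reverse=True)[:2]

# Stage 2: personalized PageRank spreads relevance out from the seeds.
names = list(entities)
idx = {n: i for i, n in enumerate(names)}
edges = [("Scrooge", "Marley"), ("Marley", "London"), ("London", "Scrooge")]
out = {n: sum(1 for s, _ in edges if s == n) for n in names}
reset = [1.0 if n in seeds else 0.0 for n in names]
reset = [r / sum(reset) for r in reset]
rank, damping = reset[:], 0.85
for _ in range(50):
    new = [(1 - damping) * r for r in reset]
    for s, t in edges:
        new[idx[t]] += damping * rank[idx[s]] / out[s]
    rank = new

# The highest-ranked entities point at the passages handed to the LLM.
ranked = sorted(names, key=lambda n: rank[idx[n]], reverse=True)
</code></pre>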
]]></description><pubDate>Mon, 18 Nov 2024 18:51:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=42175597</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=42175597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42175597</guid></item><item><title><![CDATA[New comment by liukidar in "Show HN: FastGraphRAG – Better RAG using good old PageRank"]]></title><description><![CDATA[
<p>Exactly! Also, PageRank is used to navigate the graph and find "missing links" between the concepts selected from the query using LLM-based semantic search (so it can find, in one go, the information needed to answer questions that require multi-hop or complex reasoning).</p>
]]></description><pubDate>Mon, 18 Nov 2024 18:24:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=42175322</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=42175322</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42175322</guid></item><item><title><![CDATA[Show HN: FastGraphRAG – Better RAG using good old PageRank]]></title><description><![CDATA[
<p>Hey there HN! We’re Antonio, Luca, and Yuhang, and we’re excited to introduce Fast GraphRAG, an open-source RAG approach that leverages knowledge graphs and 25-year-old PageRank for better information retrieval and reasoning.<p>Building a good RAG pipeline these days takes a lot of manual optimization. Most engineers intuitively start from naive RAG: throw everything in a vector database and hope that semantic search is powerful enough. This can work for use cases where accuracy isn’t too important and hallucinations are tolerable, but it doesn’t work for more difficult queries that involve multi-hop reasoning or more advanced domain understanding. It’s also nearly impossible to debug.<p>To address these limitations, many engineers find themselves adding extra layers like agent-based preprocessing, custom embeddings, reranking mechanisms, and hybrid search strategies. Much like the early days of machine learning, when we manually crafted feature vectors to squeeze out marginal gains, building an effective RAG system often becomes an exercise in engineering “hacks.”<p>Earlier this year, Microsoft seeded the idea of using Knowledge Graphs for RAG and published GraphRAG - i.e. RAG with Knowledge Graphs. We believe that there is incredible potential in this idea, but existing implementations are naive in the way they create and explore the graph. That’s why we developed Fast GraphRAG with a new algorithmic approach using good old PageRank.<p>There are two main challenges when building a reliable RAG system:<p>(1) Data Noise: Real-world data is often messy. Customer support tickets, chat logs, and other conversational data can include a lot of irrelevant information. If you push noisy data into a vector database, you’re likely to get noisy results.<p>(2) Domain Specialization: For complex use cases, a RAG system must understand the domain-specific context. 
This requires creating representations that capture not just the words but the deeper relationships and structures within the data.<p>Our solution builds on these insights by incorporating knowledge graphs into the RAG pipeline. Knowledge graphs store entities and their relationships, and can help structure data in a way that enables more accurate and context-aware information retrieval. 12 years ago Google announced the knowledge graph we all know about [1]. It was a pioneering move. Now we have LLMs, meaning that people can finally do RAG on their own data with tools that can be as powerful as Google’s original idea.<p>Before we built this, Antonio was at Amazon, while Luca and Yuhang were finishing their PhDs at Oxford. We had been thinking about this problem for years, and we always loved the parallel between PageRank and human memory [2]. We believe that searching for memories is incredibly similar to searching the web.<p>Here’s how it works:<p>- Entity and Relationship Extraction: Fast GraphRAG uses LLMs to extract entities and their relationships from your data and stores them in a graph format [3].<p>- Query Processing: When you make a query, Fast GraphRAG starts by finding the most relevant entities using vector search, then runs a personalized PageRank algorithm to determine the most important “memories” or pieces of information related to the query [4].<p>- Incremental Updates: Unlike other graph-based RAG systems, Fast GraphRAG natively supports incremental data insertions. This means you can continuously add new data without reprocessing the entire graph.<p>- Faster: These design choices make our algorithm faster and more affordable to run than other graph-based RAG systems because we eliminate the need for communities and clustering.<p>Suppose you’re analyzing a book and want to focus on character interactions, locations, and significant events:<p><pre><code>  from fast_graphrag import GraphRAG
  
  DOMAIN = "Analyze this story and identify the characters. Focus on how they interact with each other, the locations they explore, and their relationships."
  
  EXAMPLE_QUERIES = [
      "What is the significance of Christmas Eve in A Christmas Carol?",
      "How does the setting of Victorian London contribute to the story's themes?",
      "Describe the chain of events that leads to Scrooge's transformation.",
      "How does Dickens use the different spirits (Past, Present, and Future) to guide Scrooge?",
      "Why does Dickens choose to divide the story into \"staves\" rather than chapters?"
  ]
  
  ENTITY_TYPES = ["Character", "Animal", "Place", "Object", "Activity", "Event"]
  
  grag = GraphRAG(
      working_dir="./book_example",
      domain=DOMAIN,
      example_queries="\n".join(EXAMPLE_QUERIES),
      entity_types=ENTITY_TYPES
  )
  
  with open("./book.txt") as f:
      grag.insert(f.read())
  
  print(grag.query("Who is Scrooge?").response)
</code></pre>
This code creates a domain-specific knowledge graph based on your data, example queries, and specified entity types. Then you can query it in plain English while it automatically handles all the data fetching, entity extraction, co-reference resolution, memory election, etc. When you add new data, locking and checkpointing are handled for you as well.<p>This is the kind of infrastructure that GenAI apps need to handle large-scale real-world data. Our goal is to give you this infrastructure so that you can focus on what’s important: building great apps for your users without manually engineering a retrieval pipeline. In the managed service, we also have a suite of UI tools for you to explore and debug your knowledge graph.<p>We have a free hosted solution with up to 100 monthly requests. When you’re ready to grow, we have paid plans that scale with you. And of course you can self-host our open-source engine.<p>Give us a spin today at <a href="https://circlemind.co">https://circlemind.co</a> and see our code at <a href="https://github.com/circlemind-ai/fast-graphrag">https://github.com/circlemind-ai/fast-graphrag</a><p>We’d love feedback :)<p>[1] <a href="https://blog.google/products/search/introducing-knowledge-graph-things-not/" rel="nofollow">https://blog.google/products/search/introducing-knowledge-gr...</a><p>[2] Griffiths, T. L., Steyvers, M., & Firl, A. (2007). Google and the Mind: Predicting Fluency with PageRank. Psychological Science, 18(12), 1069–1076. 
<a href="http://www.jstor.org/stable/40064705" rel="nofollow">http://www.jstor.org/stable/40064705</a><p>[3] Similarly to Microsoft’s GraphRAG: <a href="https://github.com/microsoft/graphrag">https://github.com/microsoft/graphrag</a><p>[4] Similarly to OSU’s HippoRAG: <a href="https://github.com/OSU-NLP-Group/HippoRAG">https://github.com/OSU-NLP-Group/HippoRAG</a><p><a href="https://vhs.charm.sh/vhs-4fCicgsbsc7UX0pemOcsMp.gif" rel="nofollow">https://vhs.charm.sh/vhs-4fCicgsbsc7UX0pemOcsMp.gif</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42174829">https://news.ycombinator.com/item?id=42174829</a></p>
<p>Points: 457</p>
<p># Comments: 119</p>
]]></description><pubDate>Mon, 18 Nov 2024 17:43:13 +0000</pubDate><link>https://github.com/circlemind-ai/fast-graphrag</link><dc:creator>liukidar</dc:creator><comments>https://news.ycombinator.com/item?id=42174829</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42174829</guid></item></channel></rss>