<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: kanodiaayush</title><link>https://news.ycombinator.com/user?id=kanodiaayush</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 21:15:49 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=kanodiaayush" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by kanodiaayush in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p><a href="https://kerns.ai/" rel="nofollow">https://kerns.ai/</a> - Get answers to questions with citations, visualize papers/books/reports.<p>Wondering if there are other similar tools out there which people love, and why ChatGPT/Gemini/Claude won't let you do the same in their native apps.</p>
]]></description><pubDate>Mon, 09 Mar 2026 01:06:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47303572</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=47303572</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47303572</guid></item><item><title><![CDATA[New comment by kanodiaayush in "Making MCP cheaper via CLI"]]></title><description><![CDATA[
<p>With prompt caching, isn't a largish MCP tools section just a fixed token penalty in exchange for higher speed at runtime, since tools don't need to be discovered on demand, and isn't that the better tradeoff? At least for the most powerful models, quality doesn't seem to degrade much with a few MCP servers attached. I might be missing something.</p>
]]></description><pubDate>Thu, 26 Feb 2026 02:17:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47160970</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=47160970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47160970</guid></item><item><title><![CDATA[New comment by kanodiaayush in "Claws are now a new layer on top of LLM agents"]]></title><description><![CDATA[
<p>This is very interesting. What's an example query and metadata update you've made? Now I'm wondering whether I'd do this myself.</p>
]]></description><pubDate>Tue, 24 Feb 2026 00:02:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47130885</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=47130885</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47130885</guid></item><item><title><![CDATA[New comment by kanodiaayush in "Claws are now a new layer on top of LLM agents"]]></title><description><![CDATA[
<p>This is interesting. Do you mean this is like chatting with your book, or are these books you've already finished reading that you now have queries about? And does it search the raw book text or the metadata?</p>
]]></description><pubDate>Sun, 22 Feb 2026 05:32:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47108492</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=47108492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47108492</guid></item><item><title><![CDATA[New comment by kanodiaayush in "The path to ubiquitous AI (17k tokens/sec)"]]></title><description><![CDATA[
<p>I'm loving summarization of articles using their chatbot! Wow!</p>
]]></description><pubDate>Fri, 20 Feb 2026 12:08:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47087016</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=47087016</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47087016</guid></item><item><title><![CDATA[New comment by kanodiaayush in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>Kerns (<a href="https://kerns.ai" rel="nofollow">https://kerns.ai</a>) — a research environment for deeply understanding topics across multiple sources. Upload papers, articles, or books into a workspace that persists across sessions. Read with AI summaries that let you zoom in and out of any document. Generate knowledge maps to visualize how ideas connect. Run deep research agents that produce comprehensive, cited reports. Free to use, would love feedback from anyone doing heavy reading/research.</p>
]]></description><pubDate>Mon, 09 Feb 2026 04:10:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46941485</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=46941485</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46941485</guid></item><item><title><![CDATA[Show HN: Kerns – Research that compounds instead of resetting]]></title><description><![CDATA[
<p>When I do real research — reading papers, books, or trying to understand a complex topic — it never happens in one sitting.<p>I explore a bit, get interrupted, come back days later, ask new questions, find new sources, and gradually build understanding.<p>Existing AI tools are great at answering questions once, but they reset context every time. Notes tools preserve text, but not understanding — they don’t help me reconnect ideas, revisit sources, or see how my thinking has evolved.<p>I built Kerns to sit in between.<p>Kerns is an AI research workspace designed for research that unfolds over time. Each topic lives in a long-lived space where:<p>• You can ask new questions as they emerge
• Upload PDFs, papers, or documents alongside discovered sources
• Run deep research in parallel, not one query at a time
• Move fluidly between summaries and original source text
• Revisit work weeks later without losing context<p>Instead of scattering work across chats, documents, and bookmarks, research accumulates in one place and compounds as you return to it.<p>I’ve been using Kerns myself for things like:<p>• Reading and synthesizing books and essays
• Comparing technical papers and model releases
• Tracking ongoing policy or industry topics as they evolve<p>I’d love feedback from anyone who does sustained research — analysts, researchers, founders, students, or serious self-learners.<p>Thank you!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46702372">https://news.ycombinator.com/item?id=46702372</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 21 Jan 2026 07:42:41 +0000</pubDate><link>https://www.kerns.ai/</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=46702372</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46702372</guid></item><item><title><![CDATA[Show HN: Kerns – A Workspace for Deep, Ongoing Research]]></title><description><![CDATA[
<p>Deep research rarely happens in a single pass. For high-stakes work, or to deeply understand something, you run many deep researches in parallel, revisit them over time, and synthesize understanding gradually. We combine deep research with other docs.<p>Kerns is built for this mode of research.<p>It groups multiple deep researches under a single research area, so follow-ups and side investigations accumulate instead of fragmenting across chats and docs. Outputs are structured so you can start shallow and selectively go deep—because you don’t know upfront which deep researches will matter.<p>Synthesis is an explicit second step. Kerns helps you connect and reconcile insights across deep researches, grounded in the sourced material rather than one-off summaries. This stage also lets you consult other docs on the same level as deep researches.<p>Research doesn’t stop once a report is written. Kerns passively keeps your work up to date by monitoring sources and surfacing meaningful changes, so staying current doesn’t require restarting.<p>Built for researchers, analysts, investors, and serious self-learners doing multi-week or multi-month research where clarity and correctness actually matter.<p>Would love feedback!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46651496">https://news.ycombinator.com/item?id=46651496</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 16 Jan 2026 20:08:14 +0000</pubDate><link>https://www.kerns.ai/</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=46651496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46651496</guid></item><item><title><![CDATA[Show HN: Kerns – A Continuous Research Workspace]]></title><description><![CDATA[
<p>Most research tools help you collect links. Kerns is built for ongoing research.<p>You define topics and sources once. Kerns continuously tracks them over time, surfaces what changes, and structures the material so understanding compounds instead of resetting each session.<p>The key difference is the interface layer. Beyond feeds and summaries, Kerns organizes research into reasoning-ready views (maps, structured summaries, and synthesized perspectives) so you can actually think through complex areas rather than just store information.<p>We built this for people doing deep, long-running research (researchers, analysts, investors, founders, autodidacts) where the hard part isn’t finding sources, but keeping a coherent mental model as the space evolves.<p>Would love feedback, especially from people who’ve tried to maintain research across weeks or months.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46534355">https://news.ycombinator.com/item?id=46534355</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 07 Jan 2026 22:51:16 +0000</pubDate><link>https://www.kerns.ai/</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=46534355</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46534355</guid></item><item><title><![CDATA[Show HN: An AI environment to understand sources or topics]]></title><description><![CDATA[
<p>LLMs are a great way to understand things. Current methods, however, are engineered only for brief chat-like interactions, whereas deep understanding also requires continuous research, exploration, and deep reading.<p>Kerns is an AI interface designed to help with all of this: there's a powerful chat agent to search and reason over the web and your sources. You can run deep research and enable background agents in the app to work on your behalf. You can read the original source with a powerful AI reader that offers chapter-level summarization and in-context question answering. There's a powerful interactive mindmap you can build on your sources, and visual notetaking as you chat.<p>I would love any feedback!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46131222">https://news.ycombinator.com/item?id=46131222</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 03 Dec 2025 07:25:17 +0000</pubDate><link>https://www.kerns.ai/</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=46131222</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46131222</guid></item><item><title><![CDATA[Show HN: A visual AI interface to understand topics/books/papers with LLMs]]></title><description><![CDATA[
<p>LLMs make us feel like we can learn anything, but chat is just one primitive. I'm trying to build interfaces that let us cover a topic or our sources at length with AI.
These interfaces let us consolidate our understanding at a glance so we don't get lost in long chat histories, zoom in and out of information fast, and minimize tedious work such as context engineering.
Curious if you find this interesting and have feedback! Please try it (it's free).</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46062851">https://news.ycombinator.com/item?id=46062851</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 26 Nov 2025 21:59:40 +0000</pubDate><link>https://www.kerns.ai/</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=46062851</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46062851</guid></item><item><title><![CDATA[Show HN: An AI interface to understand anything, better than NotebookLM]]></title><description><![CDATA[
<p>I'm building out Kerns, an AI environment for research. You can seed a space with a topic and multiple source documents and complete your research entirely in one space. There's a powerful chat agent that can reason across tool calls and give cited answers. These citations take you to a reader where you can dive deeper. You can also read or listen to original source docs at various levels of summary per chapter. As you chat, a tree builds out that helps you keep track of long chats across sessions and consolidate your understanding at a glance. You can also do deep research in the app, and we're building out a background agent that sends you regular updates on your topic of interest. Finally, there are interactive mindmaps for exploration.<p>My goal is to have one place to understand any topic that minimizes manual context engineering and jumping around between chat, notes, and readers.<p>Please let me know if you try it and have feedback!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46049966">https://news.ycombinator.com/item?id=46049966</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 25 Nov 2025 19:56:35 +0000</pubDate><link>https://www.kerns.ai/</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=46049966</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46049966</guid></item><item><title><![CDATA[New comment by kanodiaayush in "Gemini 3"]]></title><description><![CDATA[
<p>Similar experience on my end; I'm coding up a complex feature. Claude would have taken fewer interventions on my part and would have been bug-free right off the bat. But apart from that, the experience is comparable.</p>
]]></description><pubDate>Wed, 19 Nov 2025 02:30:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45975183</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=45975183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45975183</guid></item><item><title><![CDATA[New comment by kanodiaayush in "Gemini 3"]]></title><description><![CDATA[
<p>Yeah, testing it out! Good to know the above. My feeling so far is also that Claude is better.</p>
]]></description><pubDate>Wed, 19 Nov 2025 02:20:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=45975130</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=45975130</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45975130</guid></item><item><title><![CDATA[New comment by kanodiaayush in "Gemini 3"]]></title><description><![CDATA[
<p>I don't really understand the amount of ongoing negativity in the comments. This is not the first time a product has been near-copied, and the experience for me is far superior to coding in a terminal. It comes with improvements, even if imperfect, and I'm excited for those! I've long wanted the ability to comment on code diffs instead of just writing things back down in chat. And I'm excited for the quality of Gemini 3.0 Pro, although I'm running into rate limits. I can already tell it's something I'm going to try out a lot!</p>
]]></description><pubDate>Wed, 19 Nov 2025 02:12:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45975075</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=45975075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45975075</guid></item><item><title><![CDATA[New comment by kanodiaayush in "Show HN: Kerns, an AI interface to understand anything, better than NotebookLM"]]></title><description><![CDATA[
<p>Thanks a lot for the feedback! URLs already show up as links if the agent decides to do a search and find refs when making the mind map. I'll work on adding images, thanks!</p>
]]></description><pubDate>Wed, 12 Nov 2025 02:54:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45895913</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=45895913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45895913</guid></item><item><title><![CDATA[New comment by kanodiaayush in "Show HN: Kerns, an AI interface to understand anything, better than NotebookLM"]]></title><description><![CDATA[
<p>Thanks, good to know! I wonder if you ended up using AI note-taking too, and whether you were able to compare.</p>
]]></description><pubDate>Wed, 12 Nov 2025 00:04:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45894625</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=45894625</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45894625</guid></item><item><title><![CDATA[Show HN: Kerns, an AI interface to understand anything, better than NotebookLM]]></title><description><![CDATA[
<p>I'm building out Kerns, an AI environment for research. You can seed a space with a topic and multiple source documents and complete your research entirely in one space. There are interactive mindmaps for exploration, a podcast mode, powerful source readers with the original text plus chapter-level summaries that let you zoom into the source on demand, a powerful chat agent that lets you control context and cite refs, and AI-assisted note-taking.<p>My goal is to have one place to do research on any topic that minimizes manual context engineering and jumping around between chat, notes, and readers.<p>Please let me know if you try it and have feedback!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45884473">https://news.ycombinator.com/item?id=45884473</a></p>
<p>Points: 7</p>
<p># Comments: 4</p>
]]></description><pubDate>Tue, 11 Nov 2025 05:45:02 +0000</pubDate><link>https://www.kerns.ai/</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=45884473</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45884473</guid></item><item><title><![CDATA[New comment by kanodiaayush in "The Parallel Search API"]]></title><description><![CDATA[
<p>I'm really excited to try out your deep research apis, the benchmark results look really interesting and the pricing is compelling.</p>
]]></description><pubDate>Fri, 07 Nov 2025 03:04:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45843161</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=45843161</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45843161</guid></item><item><title><![CDATA[New comment by kanodiaayush in "Show HN: Git for LLMs – A context management interface"]]></title><description><![CDATA[
<p>I also realized I forgot to commend you; I think this is a useful interface! Kudos on building it! I'm working on something very related myself.<p>I think these two things shouldn't be conflated into one and the same artifact: a personal memory device, and a tool for LLM context management. Right now it seems to double up as both, and the main problem with that is that it puts the burden on me to manage my memory device, which I think should be automatic. I don't have fully formed thoughts on this yet, so I'll leave it there; it's a work in progress.</p>
]]></description><pubDate>Fri, 24 Oct 2025 05:58:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=45691292</link><dc:creator>kanodiaayush</dc:creator><comments>https://news.ycombinator.com/item?id=45691292</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45691292</guid></item></channel></rss>