<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: cagz</title><link>https://news.ycombinator.com/user?id=cagz</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 14 May 2026 14:38:23 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=cagz" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by cagz in "delta time"]]></title><description><![CDATA[
<p>I like how far mature adulthood goes :) One never gets old.</p>
]]></description><pubDate>Thu, 14 May 2026 06:11:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=48131708</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=48131708</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48131708</guid></item><item><title><![CDATA[New comment by cagz in "Ask HN: What Are You Working On? (April 2026)"]]></title><description><![CDATA[
<p>Frustrated that RAG solutions couldn't answer structured questions such as "how many X are there", I've created DuoRAG: <a href="https://github.com/cagriy/duo-rag" rel="nofollow">https://github.com/cagriy/duo-rag</a><p>It maintains a vector store and a SQL database. The vector store handles the usual RAG operations, while queries that require counting, summation, or selection are routed to the SQL database.<p>You can start with an initial schema, or let it discover the schema itself. Then, in day-to-day use, if a user query cannot be answered, a candidate schema entry is created to be populated on the next backfill run.<p>So in actual use, a user asks a question such as "Give me the list of people who are scientists". If it is not in the schema, the LLM suggests checking back later. The backfill runs at night, and the next day the same question is answered without issues.</p>
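<p>The routing described above can be sketched roughly like this. This is a minimal illustration, not DuoRAG's actual code: the keyword heuristic, the <code>people</code> table, and the hard-coded SQL translation are all hypothetical stand-ins (a real system would have an LLM classify the query and generate the SQL).</p>

```python
# Hypothetical sketch of dual-stack routing; not DuoRAG's real implementation.
import re
import sqlite3

# Queries with aggregate/selection keywords go to SQL; the rest go to vectors.
AGGREGATE_HINTS = re.compile(r"\b(how many|count|sum|list|before|after)\b", re.IGNORECASE)

def route(query: str) -> str:
    """Decide which stack should answer the query."""
    return "sql" if AGGREGATE_HINTS.search(query) else "vector"

def answer(query: str, db: sqlite3.Connection):
    if route(query) == "sql":
        # In the real system an LLM would translate the question into SQL;
        # one translation is hard-coded here purely for illustration.
        if "how many" in query.lower():
            (n,) = db.execute(
                "SELECT COUNT(*) FROM people WHERE profession = 'mathematician'"
            ).fetchone()
            return n
    # Anything non-aggregate falls back to ordinary retrieval.
    return "top-k vector retrieval (not shown)"
```

<p>The point of the split is that the SQL side gives exact, complete answers for aggregate questions, while the vector side keeps handling open-ended semantic ones.</p>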
]]></description><pubDate>Mon, 13 Apr 2026 13:55:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47752031</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47752031</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47752031</guid></item><item><title><![CDATA[New comment by cagz in "I run multiple $10K MRR companies on a $20/month tech stack"]]></title><description><![CDATA[
<p>Nice tech read, but without any information about which companies they are and what they do, it just feels way too click-baity.</p>
]]></description><pubDate>Sun, 12 Apr 2026 08:11:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47737209</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47737209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47737209</guid></item><item><title><![CDATA[New comment by cagz in "Unfolder for Mac – A 3D model unfolding tool for creating papercraft"]]></title><description><![CDATA[
<p>I found the idea very interesting, but was put off a little by details such as face normals (I have limited knowledge of the topic). Here are a few ideas to increase adoption:<p>- Sample files<p>- A video of the end-to-end process of creating a basic model (perhaps something more complex than a cube), from 3D design to finished artefact<p>- Support for STL<p>- A built-in option to adjust (reduce) face counts</p>
]]></description><pubDate>Fri, 10 Apr 2026 05:56:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714160</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47714160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714160</guid></item><item><title><![CDATA[Show HN: DuoRAG – A dual stack RAG that self-evolves]]></title><description><![CDATA[
<p>Imagine a corpus of documents containing scientist biographies.<p>Traditional RAG works fine until you ask questions like:
- "Who was born before 1800?"
- "How many are mathematicians?"
- "List names and birthdays for mathematicians"<p>These yield incomplete answers due to top-k retrieval, with no indication of incompleteness.<p>For an initial corpus, it is possible to mitigate this by extracting metadata for a predetermined set of fields. That approach has two problems:<p>- One has to predict, upfront, every question that can be asked of the corpus.
- That prediction must be constantly revised as the documents change, e.g. adding Nobel prizes later, or extending the document set to include artists.<p>DuoRAG aims to solve both problems with:<p>- An initial metadata (schema) discovery pass before the first ingestion
- A self-updating schema that adds candidate fields when it fails to answer a question</p>
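<p>The self-updating loop can be sketched roughly as follows. The field names and function names are illustrative only, not DuoRAG's actual API:</p>

```python
# Rough sketch of the self-updating schema loop; all names are illustrative.
schema = {"name", "birth_year", "profession"}   # fields discovered at ingestion
candidate_fields = set()                        # queued for the next backfill

def handle_query(requested_field: str) -> str:
    """Answer from the schema, or queue an unknown field for backfill."""
    if requested_field in schema:
        return f"answered from '{requested_field}'"
    candidate_fields.add(requested_field)
    return "field unknown; retry after the next backfill run"

def backfill() -> None:
    """Nightly run: promote queued candidate fields into the schema."""
    # A real system would re-read every document and populate the new columns.
    schema.update(candidate_fields)
    candidate_fields.clear()
```

<p>So a question about, say, Nobel prizes fails once, queues a candidate field, and succeeds after the overnight backfill has populated it.</p>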
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47542131">https://news.ycombinator.com/item?id=47542131</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 27 Mar 2026 12:55:04 +0000</pubDate><link>https://github.com/cagriy/duo-rag</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47542131</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47542131</guid></item><item><title><![CDATA[New comment by cagz in "MAUI Is Coming to Linux"]]></title><description><![CDATA[
<p>I am a big fan of MAUI, but I really wish they'd fix existing issues instead of extending it further. 3.9k open issues and counting. I've got 5 open, verified bugs, some from 2023 :(</p>
]]></description><pubDate>Mon, 23 Mar 2026 06:39:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47486115</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47486115</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47486115</guid></item><item><title><![CDATA[New comment by cagz in "LLM Architecture Gallery"]]></title><description><![CDATA[
<p>It is perhaps my eyes, but when I zoom in enough to make it readable, it gets blurry. A higher-res image would be much appreciated. Great idea otherwise.</p>
]]></description><pubDate>Mon, 16 Mar 2026 08:22:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47396358</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47396358</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47396358</guid></item><item><title><![CDATA[Ego's Fight with AI]]></title><description><![CDATA[
<p>Article URL: <a href="https://cagriy.github.io/Egos-fight-with-AI">https://cagriy.github.io/Egos-fight-with-AI</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47298373">https://news.ycombinator.com/item?id=47298373</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 08 Mar 2026 16:02:18 +0000</pubDate><link>https://cagriy.github.io/Egos-fight-with-AI</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47298373</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47298373</guid></item><item><title><![CDATA[Meterstick for Claude Code]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/cagriy/meterstick">https://github.com/cagriy/meterstick</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47288886">https://news.ycombinator.com/item?id=47288886</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 07 Mar 2026 16:12:03 +0000</pubDate><link>https://github.com/cagriy/meterstick</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47288886</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47288886</guid></item><item><title><![CDATA[New comment by cagz in "When AI writes the software, who verifies it?"]]></title><description><![CDATA[
<p>&gt; No one is formally verifying the result<p>This might be the case for a hobby project or a start-up MVP built in a rush, but in reality there are a few points worth taking into account:<p>1. The software teams I work with maintain the usual review practices. Even if a feature is created entirely by AI, it goes through the usual PR review process. The dev may choose "Accept All", and while I am not saying that is good practice, the change still gets reviewed by a human.<p>2. In my experience, sub-agents intended for code and security review do a good job. It is even possible to use a different model to review the code, which provides a different perspective.<p>3. A year ago, code written by AI often failed to run the first time, requiring a painful joint troubleshooting effort. Now it works 95% of the time, though perhaps not optimally. Given the speed at which it is improving, it is safe to expect that in 6-9 months' time it will not only work but will also be written to a good standard.</p>
]]></description><pubDate>Wed, 04 Mar 2026 06:26:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47243863</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47243863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47243863</guid></item><item><title><![CDATA[New comment by cagz in "When does MCP make sense vs CLI?"]]></title><description><![CDATA[
<p>I understand the argument, and there are some really good points.<p>My biggest concern is that adopting the CLI method requires the LLM to have permission to execute binaries on the filesystem. This is a non-issue in an OpenClaw-type scenario where that permission exists by design, but it would be harder to adopt in an enterprise setting. There are ways to limit an LLM to a directory tree where only allowed CLIs live, but there will still be hacks to break out of it. Not to mention that the LLM would use an MCP or another local tool to execute the CLI commands anyway, making it a two-step process.<p>I am a supporter of human tools for humans and AI tools for AI. The best example is something like WebMCP versus the current method of screenshotting webpages and trying to find buttons, input boxes, etc.<p>If we keep them separate, we can let each evolve to fully support its own use case. Otherwise, CLIs would soon start to grow LLM-specific switches and arguments, e.g. to provide output in JSON.<p>Tools like awscli are good examples of where an LLM can use a CLI. But then we should remember that these are partly, if not mostly, intended for machine use, so that CI/CD pipelines can do things.</p>
]]></description><pubDate>Mon, 02 Mar 2026 06:34:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47214576</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47214576</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47214576</guid></item><item><title><![CDATA[New comment by cagz in "Get free Claude max 20x for open-source maintainers"]]></title><description><![CDATA[
<p>I think their intention is not mining your data (easily opt-outable) or hoping that you keep the subscription after 6 months. It is rather to get large open-source project maintainers to give AI a proper go.<p>Believe it or not, there is still a large number of great tech professionals out there who are sceptical about AI. Many tried it a year ago and came away with the impression that "it was alright but had limitations". AI has come a long way since then, and it is going to improve even faster over the next 6 months. So this is Anthropic's invitation to join that journey.<p>In turn, of course, this fuels adoption through superstars (maintainers) endorsing the models.</p>
]]></description><pubDate>Sat, 28 Feb 2026 06:51:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47191366</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47191366</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47191366</guid></item><item><title><![CDATA[New comment by cagz in "Get free Claude max 20x for open-source maintainers"]]></title><description><![CDATA[
<p>The use of data for model training is a simple toggle, very easy to opt out of during the initial setup.<p>Also, the end product is open source anyway, so there is no case of IP being leaked into training data. What remains is that they can use, with your permission, the overall coding practices of a great programmer to fine-tune Claude Code and its models. As in, how one approaches planning or troubleshooting. Is this a bad thing? Perhaps every maintainer should decide for themselves whether they want to contribute back.</p>
]]></description><pubDate>Sat, 28 Feb 2026 06:41:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47191288</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47191288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47191288</guid></item><item><title><![CDATA[New comment by cagz in "Get free Claude max 20x for open-source maintainers"]]></title><description><![CDATA[
<p>Assuming they've got reasonable programming skills, they can simply find an open-source project they are passionate about, spend time understanding its overall structure, then pick up an issue raised by the community and prepare a fix as a pull request.<p>The first PR is unlikely to be merged the next day; however, it sparks lots of productive discussion with the rest of the community, allowing your kid to build a mental model of the project's best practices and sensitivities.<p>The more they contribute, the more integral they become to the community. After gaining enough experience through small issues, they can even consider working on a new feature.<p>As a byproduct, it is a great addition to the CV if they are also looking to go commercial.</p>
]]></description><pubDate>Sat, 28 Feb 2026 06:34:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47191233</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47191233</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47191233</guid></item><item><title><![CDATA[AI – We are asking the wrong question]]></title><description><![CDATA[
<p>Article URL: <a href="https://cagriy.github.io/AI-We-are-asking-the-wrong-question">https://cagriy.github.io/AI-We-are-asking-the-wrong-question</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47123065">https://news.ycombinator.com/item?id=47123065</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 23 Feb 2026 14:53:09 +0000</pubDate><link>https://cagriy.github.io/AI-We-are-asking-the-wrong-question</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47123065</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47123065</guid></item><item><title><![CDATA[New comment by cagz in "Meta Deployed AI and It Is Killing Our Agency"]]></title><description><![CDATA[
<p>I don't think this is an AI issue. It is about the terms of use: they don't allow a second account, even if it's intended for ad management. The recommended way is to use Meta Business Manager via the existing account.</p>
]]></description><pubDate>Sat, 21 Feb 2026 07:27:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47098374</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47098374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47098374</guid></item><item><title><![CDATA[New comment by cagz in "AI is destroying open source, and it's not even good yet"]]></title><description><![CDATA[
<p>Low-quality AI-created PRs submitted to open-source repositories are prompted by humans. And those are the same humans who fail to review the AI's output properly before submitting it (or letting the AI submit it) as a PR. Let's not blame the tools for bad workmanship.<p>The smaller number of PRs generated by OpenClaw-type bots are also acting on their owners' direct or implied instructions. I mean, someone is giving them GitHub credentials and letting them loose.<p>AI is also enabling the creation of many new open-source projects, led by responsible developers.<p>Given the exponential speed at which AI is progressing, the quality of such PRs is surely going to improve. But there are also opportunities for the open-source community to improve its response. It will sound controversial, but AI can be used to perform an initial review of PRs, suggest improvements, and, in extreme cases, reject them.</p>
]]></description><pubDate>Tue, 17 Feb 2026 04:17:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47043657</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47043657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47043657</guid></item><item><title><![CDATA[New comment by cagz in "GPT-5.2 derives a new result in theoretical physics"]]></title><description><![CDATA[
<p>Does the article have a strong marketing vibe? Absolutely.
Does the research performed move the needle, however slightly, in theoretical physics? Yes.
Could we have expected this to happen a year ago? Not really.<p>My personal opinion is that things will only accelerate from here.</p>
]]></description><pubDate>Sat, 14 Feb 2026 09:46:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47013155</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=47013155</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47013155</guid></item><item><title><![CDATA[New comment by cagz in "How London cracked mobile phone coverage on the Underground"]]></title><description><![CDATA[
<p>I use the underground frequently. It doesn't really feel like half of it is covered. Where it is available, it works amazingly. I might have been using the other half by sheer luck.</p>
]]></description><pubDate>Sun, 18 Jan 2026 08:53:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46666060</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=46666060</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46666060</guid></item><item><title><![CDATA[New comment by cagz in "Start your meetings at 5 minutes past"]]></title><description><![CDATA[
<p>Good idea, but after trying it a number of times, I've found it has some downsides. Most calendar applications cannot clearly display five minutes past, so the meeting visually appears to start on the hour. One of the attendees ends up dialling in on the hour, and then everyone gets a notification that the meeting has started.<p>Half of the people who get the notification click "join" without checking. This results in a half-populated meeting room. The issue becomes obvious, somebody says, "Let's dial back in in 5 mins", and drops off. Half of the people like the idea and drop off, while the rest decide to stay and chat.<p>Meanwhile, some of those who dropped off see this as a great opportunity to grab a brew. That inadvertently triggers some water-cooler, kettle-corner chats, and they end up running late for the 5-past start. The rest usually get engaged in something else to make use of the 5 minutes, and miss the 5-past start too, since no new notification is issued because people are already chatting in the meeting :)</p>
]]></description><pubDate>Sat, 10 Jan 2026 08:18:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46563844</link><dc:creator>cagz</dc:creator><comments>https://news.ycombinator.com/item?id=46563844</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46563844</guid></item></channel></rss>