<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: miguelaeh</title><link>https://news.ycombinator.com/user?id=miguelaeh</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 16:19:21 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=miguelaeh" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by miguelaeh in "Claude Memory"]]></title><description><![CDATA[
<p>> Most importantly, you need to carefully engineer the learning process, so that you are not simply compiling an ever growing laundry list of assertions and traces, but a rich set of relevant learnings that carry value through time. That is the hard part of memory, and now you own that too!<p>I am interested in knowing more about how this part works. Most approaches I have seen focus on basic RAG pipelines or some variant of that, which don't seem practical or scalable.<p>Edit: and also, what about procedural memory instead of just storing facts or instructions?</p>
]]></description><pubDate>Thu, 23 Oct 2025 19:21:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=45685764</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=45685764</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45685764</guid></item><item><title><![CDATA[Fixing bugs automatically from a screen recording]]></title><description><![CDATA[
<p>Article URL: <a href="https://nitpicks.ai">https://nitpicks.ai</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45491368">https://news.ycombinator.com/item?id=45491368</a></p>
<p>Points: 14</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 06 Oct 2025 13:44:36 +0000</pubDate><link>https://nitpicks.ai</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=45491368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45491368</guid></item><item><title><![CDATA[New comment by miguelaeh in "Nitpicks: Record a video and let an agent implement the code"]]></title><description><![CDATA[
<p>Hey there! Nitpicks creator here.<p>I initially built Nitpicks because I was tired of PMs sending me short screen recordings of the changes they wanted, so I made it implement those changes for me automatically.<p>Then I was pleasantly surprised by how good the results were, so I decided to turn it into an actual product others can use. It is really useful for the non-technical people on a product team: the whole team can now contribute to the product even if they have no idea how to code, since it's just a click to record the screen.<p>I would love for you to try it out and share your feedback. Feel free to reach out directly to the email in the footer of the page.</p>
]]></description><pubDate>Sat, 09 Aug 2025 16:16:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44847735</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=44847735</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44847735</guid></item><item><title><![CDATA[OpenAI Deep Research paying only for the inference you consume]]></title><description><![CDATA[
<p>Article URL: <a href="https://open-deep-research-anotherwrapper.vercel.app/">https://open-deep-research-anotherwrapper.vercel.app/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43218610">https://news.ycombinator.com/item?id=43218610</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 01 Mar 2025 12:29:45 +0000</pubDate><link>https://open-deep-research-anotherwrapper.vercel.app/</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=43218610</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43218610</guid></item><item><title><![CDATA[New comment by miguelaeh in "Show HN: Extracts and analyzes discussions from Reddit communities"]]></title><description><![CDATA[
<p>You should add some way to contact you on the website</p>
]]></description><pubDate>Thu, 20 Feb 2025 13:59:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43114695</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=43114695</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43114695</guid></item><item><title><![CDATA[New comment by miguelaeh in "I built a portable AI account that connects to apps with one click"]]></title><description><![CDATA[
<p>Hi HN!<p>I believe there are many issues for devs and users with the current approach of copy-pasting AI provider API keys into applications.<p>I have built brainlink.dev as a solution to this and would love to get your feedback.<p>It is a portable AI account that users can connect with one click to every application that integrates the SDK. It works as follows:<p>1. The user clicks the connect button to link their brainlink account with the app.
2. The app obtains an access token to perform inference on behalf of the user, so the user pays for the usage.<p>Behind the scenes, a secure Authorization Code Flow with PKCE takes place, so that the app obtains an access and refresh token instead of an API key directly. When the application calls a model with a user's access token, the user pays for the inference.<p>I believe this approach offers multiple benefits to both developers and users.<p>As a developer:<p>- I can build and test my app against a specific model without being tied to whatever API key the user provided, ensuring that everyone gets the same UX.<p>- I can easily move my app to a different model at any time. Without brainlink, if users add, say, an OpenAI API key, then to switch to Claude I would need to ask every user to update their API key.<p>- Asking for API keys goes against the ToS of most providers.<p>As a user:<p>- The initial friction of configuring API keys disappears, especially for non-technical users who don't know what an API key is.<p>- My privacy increases, because AI providers can't track my usage as it goes through the proxy.<p>- I have a single account that I can connect to multiple apps, and I can see how much each app is consuming.<p>- I can easily revoke connections (tokens).<p>I tried to make it very simple to integrate with an embeddable button, but you can also create your own button. Here is a live demo with a very simple chat: <a href="https://demo.brainlink.dev" rel="nofollow">https://demo.brainlink.dev</a><p>I would love to hear your feedback and would be happy to help anyone who wants to integrate it.</p>
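<p>For anyone curious what the PKCE part involves on the client side, here is a minimal sketch of the standard RFC 7636 S256 step (generic code, not Brainlink's actual SDK): the client generates a random code_verifier, sends only its hashed code_challenge in the authorize request, and later proves possession by revealing the raw verifier in the token exchange.</p>

```typescript
import { createHash, randomBytes } from "node:crypto";

// RFC 7636 (PKCE): derive the S256 code_challenge sent in the
// authorize request from a freshly generated random code_verifier.
// The raw verifier stays secret until the token exchange.
function generatePkcePair(): { verifier: string; challenge: string } {
  const verifier = randomBytes(32).toString("base64url"); // 43 URL-safe chars
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}
```

<p>Because only the hash travels in the initial redirect, an intercepted authorization code is useless without the verifier, which is part of why the app ends up holding short-lived tokens instead of a long-lived API key.</p>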
]]></description><pubDate>Fri, 14 Feb 2025 17:31:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=43050734</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=43050734</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43050734</guid></item><item><title><![CDATA[I built a portable AI account that connects to apps with one click]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.brainlink.dev/developers">https://www.brainlink.dev/developers</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43050733">https://news.ycombinator.com/item?id=43050733</a></p>
<p>Points: 8</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 14 Feb 2025 17:31:47 +0000</pubDate><link>https://www.brainlink.dev/developers</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=43050733</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43050733</guid></item><item><title><![CDATA[New comment by miguelaeh in "Show HN: Search engine that presents answers as news briefs"]]></title><description><![CDATA[
<p>Cool! Are you indexing the web yourself or just using a search engine like Bing? I understand that relying on a search engine is a problem, because your results will only be as good as the search engine's</p>
]]></description><pubDate>Wed, 22 Jan 2025 09:52:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=42791022</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=42791022</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42791022</guid></item><item><title><![CDATA[New comment by miguelaeh in "Running AI locally in your users' browsers"]]></title><description><![CDATA[
<p>I have recently been exploring and testing this approach of offloading AI inference to the user instead of using a cloud API. There are many advantages, and I can see how this could become the norm in the future.<p>Also, I was surprised by the number of people who have GPUs and by how well SLMs perform in many cases, even those with just 1B parameters.</p>
]]></description><pubDate>Fri, 25 Oct 2024 17:41:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=41947575</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41947575</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41947575</guid></item><item><title><![CDATA[Running AI locally in the users' browsers]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.offload.fyi/blog/running-ai-directly-in-the-user-browser">https://www.offload.fyi/blog/running-ai-directly-in-the-user-browser</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41947574">https://news.ycombinator.com/item?id=41947574</a></p>
<p>Points: 3</p>
<p># Comments: 2</p>
]]></description><pubDate>Fri, 25 Oct 2024 17:41:40 +0000</pubDate><link>https://www.offload.fyi/blog/running-ai-directly-in-the-user-browser</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41947574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41947574</guid></item><item><title><![CDATA[A WebGPU C++ Guide]]></title><description><![CDATA[
<p>Article URL: <a href="https://eliemichel.github.io/LearnWebGPU/">https://eliemichel.github.io/LearnWebGPU/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41796830">https://news.ycombinator.com/item?id=41796830</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 10 Oct 2024 08:34:31 +0000</pubDate><link>https://eliemichel.github.io/LearnWebGPU/</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41796830</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41796830</guid></item><item><title><![CDATA[New comment by miguelaeh in "Forget ChatGPT: why researchers now run small AIs on their laptops"]]></title><description><![CDATA[
<p>I am betting on local AI and building offload.fyi to make it easy to implement in any app</p>
]]></description><pubDate>Sat, 21 Sep 2024 14:37:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=41610273</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41610273</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41610273</guid></item><item><title><![CDATA[New comment by miguelaeh in "Tax the Rich – European Citizens' Initiative"]]></title><description><![CDATA[
<p>Wow. I didn't know that. It's crazy.<p>I am aware of the legal and tax burdens that many entrepreneurs suffer because of the exit tax. That's the reason why many of them (me included) decide to open their companies directly in the U.S., which makes Spain keep getting poorer.<p>I really don't understand how we went from being the biggest empire in the world to where we are now in barely 500 years.</p>
]]></description><pubDate>Wed, 18 Sep 2024 20:30:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=41585081</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41585081</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41585081</guid></item><item><title><![CDATA[New comment by miguelaeh in "Meta AI: "The Future of AI Is Open Source and Decentralized""]]></title><description><![CDATA[
<p>The first cars, networks, and many other things were not inexpensive either. They became so over time and with growing adoption.<p>The cost of compute will continue to decrease, and we will reach the point where it is feasible to have AI everywhere. I think with this particular technology we have already passed the point of no return</p>
]]></description><pubDate>Wed, 18 Sep 2024 20:17:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=41584946</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41584946</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41584946</guid></item><item><title><![CDATA[New comment by miguelaeh in "Tax the Rich – European Citizens' Initiative"]]></title><description><![CDATA[
<p>I am from Spain, and the system is just a joke. It does not tax the truly wealthy, just those wealthy enough to live well but not wealthy enough to be worth creating a holding company, which in most cases means working people with well-paid jobs, not the much-hated businessman.</p>
]]></description><pubDate>Wed, 18 Sep 2024 11:29:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=41578379</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41578379</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41578379</guid></item><item><title><![CDATA[New comment by miguelaeh in "Scramble: Open-Source Alternative to Grammarly"]]></title><description><![CDATA[
<p>You're welcome! Let me know if you plan to integrate local models as mentioned in the other comments; I am working on something to make that transparent.</p>
]]></description><pubDate>Wed, 18 Sep 2024 11:22:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=41578334</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41578334</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41578334</guid></item><item><title><![CDATA[New comment by miguelaeh in "Scramble: Open-Source Alternative to Grammarly"]]></title><description><![CDATA[
<p>I don't think the point here should be the cost, but the fact that you are sending everything you write to OpenAI to train their models on your information. The option of a local model allows you to preserve the privacy of what you write.
I like that.</p>
]]></description><pubDate>Wed, 18 Sep 2024 11:19:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=41578295</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41578295</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41578295</guid></item><item><title><![CDATA[New comment by miguelaeh in "Scramble: Open-Source Alternative to Grammarly"]]></title><description><![CDATA[
<p>I am a Grammarly user and I just installed Scramble to try it out. However, it does not seem to work: when I click on any of the options, nothing happens. I use Ubuntu 22.04.<p>Also, as some feedback, it would be awesome to make it appear automatically on text areas and highlight errors like Grammarly does; that creates a much better UX.</p>
]]></description><pubDate>Wed, 18 Sep 2024 11:13:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=41578242</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41578242</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41578242</guid></item><item><title><![CDATA[New comment by miguelaeh in "Does your startup need complex cloud infrastructure?"]]></title><description><![CDATA[
<p>I guess what some people do not understand is that K8s was created internally at Google for managing their services and handling millions of users.<p>For new projects that, with luck, will have a couple hundred users at the beginning, it is just overkill (and also very expensive).<p>My approach is usually Vercel plus some AWS/Hetzner instance running the services with docker-compose inside, or sometimes even just a system service that starts with the instance. That's just enough. I like to use Vercel when deploying web apps because it is free at this scale and also saves me time with continuous deployment, without having to ssh into the instances, fetch the new code, and restart the service.</p>
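<p>To illustrate how small that setup stays, here is a sketch of a docker-compose.yml for a single instance (hypothetical service names, not from any particular project):</p>

```yaml
# Minimal single-instance setup: one app container plus Postgres,
# both restarted automatically if the instance reboots.
services:
  api:
    build: .
    ports:
      - "8080:8080"
    restart: unless-stopped
    depends_on:
      - db
  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

<p>With something like this, "docker compose up -d" on the instance is the whole deployment story.</p>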
]]></description><pubDate>Fri, 13 Sep 2024 09:21:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=41529484</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41529484</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41529484</guid></item><item><title><![CDATA[Are companies interested on running LLM inference locally?]]></title><description><![CDATA[
<p>I have been thinking about this recently. Many projects are focused on running LLMs and SLMs locally. However, is that just for playing around, or do you actually want to run inference locally at your companies?<p>I feel like there could be two major advantages: cost at scale and privacy.<p>1. On cost, GPT-4o-mini is inexpensive, and if we continue on that path, the cost of inference will become negligible soon. Unless your company makes huge use of the model (or uses huge contexts), like those running thousands of autonomous agents, investing in the hardware does not seem like the best alternative.<p>2. Privacy. I would say this is more relevant for industries that work with highly sensitive data. However, I can see how big companies simply sign private cloud contracts with Azure or other cloud providers, which give them peace of mind and scalability and, depending on the contract, some guarantees.<p>So my big question is: do you know of use cases or companies deploying LLMs in their own data centers, or looking to do so, or is this just for hobbyists?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41522549">https://news.ycombinator.com/item?id=41522549</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 12 Sep 2024 16:22:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=41522549</link><dc:creator>miguelaeh</dc:creator><comments>https://news.ycombinator.com/item?id=41522549</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41522549</guid></item></channel></rss>