<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: whoisjuan</title><link>https://news.ycombinator.com/user?id=whoisjuan</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 08:51:01 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=whoisjuan" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by whoisjuan in "AI World Clocks"]]></title><description><![CDATA[
<p>The time given to the model. So the difference between two generations is just something trivially different, like "12:35" vs. "12:36".</p>
]]></description><pubDate>Fri, 14 Nov 2025 19:45:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45931255</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=45931255</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45931255</guid></item><item><title><![CDATA[New comment by whoisjuan in "AI World Clocks"]]></title><description><![CDATA[
<p>It's actually quite fascinating if you watch it for 5 minutes. Some models are just bad overall, but others nail it one minute and butcher it the next.<p>It's perhaps the best example I have seen of model drift driven by small, seemingly unimportant changes to the prompt.</p>
]]></description><pubDate>Fri, 14 Nov 2025 19:05:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45930624</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=45930624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45930624</guid></item><item><title><![CDATA[New comment by whoisjuan in "Launch HN: Exa (YC S21) – The web as a database"]]></title><description><![CDATA[
<p>Did you guys change the pricing of Exa?<p>When I checked this a year or so ago, I might have gotten the impression that it was cheaper. Now, it costs the same as what Perplexity charges for search-grounded queries, which is the same as Google charges for Gemini queries with search.<p>So basically, one player sets a price, and everyone is anchored on that as the pricing for the entire category? I'm just genuinely interested in why every offering in this space is priced like this.<p>It seems a bit misaligned with how pure LLM queries are priced.<p>I have a product that would benefit from search grounding, but this pricing wouldn't work with my volume of queries.</p>
]]></description><pubDate>Tue, 06 May 2025 22:21:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=43910228</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=43910228</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43910228</guid></item><item><title><![CDATA[New comment by whoisjuan in "Memories are not only in the brain, human cell study finds"]]></title><description><![CDATA[
<p><a href="https://pubmed.ncbi.nlm.nih.gov/38694651/" rel="nofollow">https://pubmed.ncbi.nlm.nih.gov/38694651/</a></p>
]]></description><pubDate>Sat, 09 Nov 2024 15:55:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=42095061</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=42095061</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42095061</guid></item><item><title><![CDATA[New comment by whoisjuan in "Memories are not only in the brain, human cell study finds"]]></title><description><![CDATA[
<p>This is wild, but many studies have reached the same conclusion.<p>I remember reading somewhere that heart transplant recipients have random memory flashes that are not their memories, and sometimes they develop new personality traits.</p>
]]></description><pubDate>Sat, 09 Nov 2024 15:06:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=42094783</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=42094783</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42094783</guid></item><item><title><![CDATA[New comment by whoisjuan in "DoNotPay has to pay $193K for falsely touting untested AI lawyer, FTC says"]]></title><description><![CDATA[
<p>This is a sneaky, ethically gray company. Their app is not only terrible in quality but also full of dark patterns. I'm convinced that any revenue they make comes from people who can't figure out how to cancel. Stay away from it.</p>
]]></description><pubDate>Thu, 26 Sep 2024 15:58:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=41659972</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=41659972</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41659972</guid></item><item><title><![CDATA[New comment by whoisjuan in "Will Figma become an awkward middle ground?"]]></title><description><![CDATA[
<p>Ohh nice catch. Will do. Thanks</p>
]]></description><pubDate>Wed, 24 Jul 2024 22:11:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=41062736</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=41062736</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41062736</guid></item><item><title><![CDATA[New comment by whoisjuan in "Will Figma become an awkward middle ground?"]]></title><description><![CDATA[
<p>I'm a designer. I built brainglue.ai without Figma, a design system, or a UI library. I just went directly to code (react+tailwind) and let a style organically emerge.<p>I'm not saying that I'm a unicorn and that my idea-to-code-to-design execution is flawless, but I certainly believe that in this situation, if I hadn't done it this way, I wouldn't have done it at all. However, doing this would be wasteful or dumb in almost every other situation that requires my design output.<p>People pay for Figma precisely because it's a middle ground. It was a middle ground before, and it will continue to be unless something fundamental changes.</p>
]]></description><pubDate>Wed, 24 Jul 2024 20:54:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=41061982</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=41061982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41061982</guid></item><item><title><![CDATA[New comment by whoisjuan in "Apple tries to rein in Hollywood spending after years of losses"]]></title><description><![CDATA[
<p>AppleTV+ is a tiny business. It's nowhere near generating enough revenue to cover a $20B hole in content production costs.<p>Yes, Apple generates LOTS of revenue overall, but that doesn't justify bleeding cash on a business line that hasn't produced material returns and has no significant positive trajectory in sight.<p>It's clear that Apple saw this as the Prime Video bet of their services strategy, but it hasn't worked out. Just look at AppleTV+ market share. It's hilariously minuscule.</p>
]]></description><pubDate>Mon, 22 Jul 2024 18:09:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=41037576</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=41037576</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41037576</guid></item><item><title><![CDATA[New comment by whoisjuan in "Walls are starting to close in for Tesla, let's have a closer look"]]></title><description><![CDATA[
<p>I own a Model 3, and I honestly think it is the best car I have ever owned.<p>Despite that, I know people who have ruled out owning a Tesla because they believe the brand mirrors Elon Musk's public persona. They flat-out reject any Tesla product because the brand's visible face is someone they believe doesn't represent their values.<p>I'm unsure he understood the implications of becoming such a polarizing figure. It was totally unnecessary, yet that was his choice.</p>
]]></description><pubDate>Wed, 22 May 2024 21:01:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=40446492</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=40446492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40446492</guid></item><item><title><![CDATA[New comment by whoisjuan in "Meta's new LLM-based test generator"]]></title><description><![CDATA[
<p>Not OP, but I don’t think test-driven development resonates with everyone who writes code.<p>I don’t want to write tests for everything. I just want to write the ones that matter.</p>
]]></description><pubDate>Sat, 24 Feb 2024 00:44:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=39488001</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=39488001</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39488001</guid></item><item><title><![CDATA[New comment by whoisjuan in "GPT-4-turbo produces shorter completions when it "thinks" its December vs. May"]]></title><description><![CDATA[
<p>I think productivity is lower in the winter, so I'm not sure about quality per se, but intuitively it makes sense that anything written in the winter months is less verbose.</p>
]]></description><pubDate>Mon, 11 Dec 2023 21:25:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=38605700</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=38605700</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38605700</guid></item><item><title><![CDATA[New comment by whoisjuan in "OpenAI's board has fired Sam Altman"]]></title><description><![CDATA[
<p>GPTs is basically a ripoff of Poe by Quora.
Quora’s CEO is Adam D’Angelo.
Adam D’Angelo is one of OpenAI’s board members.<p>Draw your own conclusions.</p>
]]></description><pubDate>Fri, 17 Nov 2023 21:35:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=38310751</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=38310751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38310751</guid></item><item><title><![CDATA[New comment by whoisjuan in "1Password detects "suspicious activity" in its internal Okta account"]]></title><description><![CDATA[
<p>It was a minor incident, but it does remind me that centralized password managers carry an awful lot of concentrated risk.<p>Or is something like 1Password truly secure at its core, even if an attacker penetrates some layers of access?</p>
]]></description><pubDate>Tue, 24 Oct 2023 00:47:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=37993640</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=37993640</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37993640</guid></item><item><title><![CDATA[New comment by whoisjuan in "1Password discloses security incident linked to Okta breach"]]></title><description><![CDATA[
<p>It was a minor incident, but it does remind me that centralized password managers carry an awful lot of concentrated risk.<p>Is something like 1Password truly secure at its core, even if an attacker penetrates some layers of access?</p>
]]></description><pubDate>Tue, 24 Oct 2023 00:46:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=37993634</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=37993634</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37993634</guid></item><item><title><![CDATA[Show HN: Brainglue, an empirical playground for AI experimentation]]></title><description><![CDATA[
<p>Hi HN.<p>My name is Juan and I'm the creator of Brainglue.<p>Brainglue is a fun, empirical playground for large language models that lets anyone build powerful prompt chains to solve complex generative AI problems.<p>Brainglue focuses on providing an easy-to-use environment for prompt chaining.<p>It's now well understood that chaining prompts is one of the most effective ways to leverage LLMs for GenAI problems.<p>Prompt chains yield better reasoning and more accuracy, but experimenting with and productizing these chains isn't yet trivial.<p>With Brainglue, you get an environment where it's easy to build these chains and configure them for specific GenAI tasks.<p>Brainglue also comes with a straightforward API out of the box, so you can use your AI chains from other applications and services.<p>It's still early days, but I have high hopes for this kind of AI scripting form factor.<p>If you try it out and have any feedback, please let me know at brainglue@rasterwise.com</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=37525408">https://news.ycombinator.com/item?id=37525408</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 15 Sep 2023 16:19:13 +0000</pubDate><link>https://www.brainglue.ai/</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=37525408</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37525408</guid></item><item><title><![CDATA[New comment by whoisjuan in "What happened in this GPT-3 conversation?"]]></title><description><![CDATA[
<p>Unlikely. You can see that the model returns to normal behavior after it exhausts the context window that causes this.<p>Instructions are consistently passed as system instructions in a ChatGPT conversation, so if that was causing the erratic behavior, we wouldn’t see the model defaulting back to its normal behavior after the context window became large enough to lose part of the initial context.</p>
]]></description><pubDate>Tue, 08 Aug 2023 21:33:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=37055756</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=37055756</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37055756</guid></item><item><title><![CDATA[New comment by whoisjuan in "What happened in this GPT-3 conversation?"]]></title><description><![CDATA[
<p>One thing I have observed with the rise of generative AI is that the general direction everyone is pushing toward is making the models behave deterministically, when in principle LLMs are probabilistic.<p>Every time a newer model is released, we go through the same cycle of figuring out its emergent intelligent properties over and over.<p>But I’m not sure that approach will make it evident when we are in AGI territory.<p>We are really going to need new kinds of evaluations, because it’s evident that passing the bar exam or whatever doesn’t really give you a proxy for intelligence, let alone sentience.</p>
]]></description><pubDate>Tue, 08 Aug 2023 21:28:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=37055713</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=37055713</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37055713</guid></item><item><title><![CDATA[New comment by whoisjuan in "Show HN: Continue – Open-source coding autopilot"]]></title><description><![CDATA[
<p>Hey. I gave Continue a try, and I think this is going in the right direction, at least for me. I guess opinions on how this should work are subjective.<p>But I really do like the functionality of attaching context to the query! Love that. I replied to the founder of Sourcegraph in this thread, and I think that should answer your question as well.<p>I'm excited about your approach and even more about the fact that you made it open source! Thanks for that. If this sticks for me, I can see myself contributing to it.</p>
]]></description><pubDate>Thu, 27 Jul 2023 22:52:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=36901183</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=36901183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36901183</guid></item><item><title><![CDATA[New comment by whoisjuan in "Show HN: Continue – Open-source coding autopilot"]]></title><description><![CDATA[
<p>I think there are several problems with these copilot solutions, but the most obvious ones are the context-switching problem and the lack of steerability.<p>Let's say I want to build a complex React component that does a lot of stuff under the hood. Perhaps it needs to handle multiple inputs, show certain child UI conditionally based on state, and push that state to a global Redux store.<p>Perhaps the component also depends on some utils and other root components that pass props to it.<p>None of these solutions seems to acknowledge that what would improve my productivity is a proper way to model the problem for the AI the same way I understand it, letting me define the blast radius of the problem instead of expecting the AI to infer it.<p>In the case of the React component, what I want is to:<p>1) bootstrap the component,<p>2) modify it to address requirements that emerge as I explore the needs, and<p>3) look at dependencies such as schema files or root components, get suggested modifications that align with the desired functionality, or point out required changes in those dependencies myself.<p>When I say these solutions are suboptimal, I mean there's no straightforward way for me to engage in these code generation tasks without the experience feeling fragmented.<p>What I literally want is to go to a file, tell the AI to consume the full context of that file and its dependencies, and then have it modify the file to add features, fix bugs, improve code, or extend functionality. I then want the ability to accept those changes as a reviewer or suggest changes.
And I want to do this without forking away from my current context.<p>Nobody has gotten this experience right, because most players in this space are building restrictive form factors or atomized features like text brushes, global chats, or inline prompt-to-code generation.<p>I appreciate having these, but ultimately what I want is an AI system that accepts a context (a file or set of files) and a prompt and then generates the requested code modifications.<p>None of these extensions allows me to do this effectively, and when one does, that ability seems diminished by a poor user-facing abstraction.<p>In my opinion, this is a design problem.</p>
]]></description><pubDate>Thu, 27 Jul 2023 19:04:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=36898559</link><dc:creator>whoisjuan</dc:creator><comments>https://news.ycombinator.com/item?id=36898559</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36898559</guid></item></channel></rss>