<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: billmalarky</title><link>https://news.ycombinator.com/user?id=billmalarky</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 15:06:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=billmalarky" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by billmalarky in "Claws are now a new layer on top of LLM agents"]]></title><description><![CDATA[
<p>Yes -- definitely that's the value prop. But it's not binary, all or nothing.<p>AI automation is about trust (honestly, same as human delegation).<p>You give it access to a little bit of data, just enough to do a basic useful thing or two, then you give it a bit of responsibility.<p>Then as you build confidence and trust, you give it a little more access, and allow it to take on a little more responsibility. Naturally, if it blows up in your face, you dial back access and responsibility quickly.<p>As an analogy, folks drive their cars on the highway at 65-85+ MPH. The fatality rate goes up roughly exponentially with speed, and anything 60+ is considerably more deadly than ~30 MPH.<p>We're all so confident that a wheel won't randomly fall off because we've built so much trust in the quality of modern automobiles. But it does happen (I had a friend in high school whose wheel popped off on a 45 MPH road -- naturally he was going 50-55 IIRC).<p>In the early 1900s people would have thought you had a death wish to drive this fast. 25-30 MPH was normal then -- the automobiles of the time just weren't developed enough to be trusted at higher speeds.<p>My previous comment was about the fact that it is possible to build this sandboxing/bastion layer with live web accounts that allows for fine-grained control over how much data you want to expose to the AI.</p>
]]></description><pubDate>Sun, 22 Feb 2026 19:15:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47113748</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=47113748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47113748</guid></item><item><title><![CDATA[New comment by billmalarky in "Claws are now a new layer on top of LLM agents"]]></title><description><![CDATA[
<p>Bastion hosts.<p>You don't give it your "prod email", you give it a secondary email you created specifically for it.<p>You don't give it your "prod PayPal", you create a secondary PayPal (perhaps registered with the same secondary email you gave it).<p>You don't give it your "prod bank checking account", you spin up a new checking account with Discover.com (or any other online bank that takes <5 min to create a new checking account). With online banking it is fairly straightforward to set up fully sandboxed financial accounts. You can, for example, set up one-way flows from your "prod checking account" to your "bastion checking account," where prod can push/pull cash to the bastion checking account, but the bastion cannot push/pull (or even see) the prod checking account. The "permissions" logic that supports this is handled by the Nacha network (which governs how ACH transfers can flow). Banks cannot ignore those permissions -- they quickly (immediately) lose their ability to legally operate as a bank if they do.<p>Now then, I'm not trying to handwave away the serious challenges associated with this technology. There's also the threat of reputational risk etc. since it is operating as your agent -- heck, potentially even legal risk if things get into the realm of "oops, this thing accidentally committed financial fraud."<p>I'm simply saying that the principle of least privilege applies to online accounts as well as everything else.</p>
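<p>A minimal sketch of that one-way prod/bastion relationship in code. Everything here is invented for illustration -- the class and method names are hypothetical, and real ACH permissioning is enforced by the banking network, not by objects in your program:</p>

```python
# Toy model of a one-way "bastion" flow: prod can push/pull against the
# bastion, but the bastion holds no reference to prod at all, so it
# cannot push, pull, or even see the prod balance.

class Account:
    def __init__(self, name, balance=0):
        self.name = name
        self.balance = balance

class ProdAccount(Account):
    """Prod holds the reference to the bastion; transfers start here only."""
    def __init__(self, name, bastion, balance=0):
        super().__init__(name, balance)
        self._bastion = bastion

    def push(self, amount):
        """Fund the sandbox: prod -> bastion."""
        assert 0 < amount <= self.balance
        self.balance -= amount
        self._bastion.balance += amount

    def pull(self, amount):
        """Claw funds back: bastion -> prod."""
        assert 0 < amount <= self._bastion.balance
        self._bastion.balance -= amount
        self.balance += amount

# The agent is handed only the bastion object: least privilege by
# construction, since the bastion has no attribute pointing at prod.
bastion = Account("bastion")
prod = ProdAccount("prod", bastion, balance=1000)
prod.push(50)  # give the agent a small working budget
```

<p>The point of the sketch is the asymmetry of references: blast radius is capped at whatever you pushed into the bastion.</p>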
]]></description><pubDate>Sat, 21 Feb 2026 19:11:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47103689</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=47103689</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47103689</guid></item><item><title><![CDATA[New comment by billmalarky in "Opus 4.5 is not the normal AI agent experience that I have had thus far"]]></title><description><![CDATA[
<p>Hi Kypro, this is a very interesting perspective. Can you reach out to me? I'd like to discuss what you're observing a bit in private, as it relates heavily to a project I'm currently working on. My contact info is on my profile. Please shoot me a connection request and just say you're kypro from HN :)<p>Or is there a good way for me to contact you? Your profile doesn't list anything and your handle doesn't seem to have much of an online footprint.<p>Lastly, I promise I'm not some weirdo, I'm a realperson™ -- just check my HN comment history. A lot of people in the AI community have met me in person and can confirm (swyx etc.).<p>Look forward to chatting!</p>
]]></description><pubDate>Wed, 07 Jan 2026 20:18:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46531953</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=46531953</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46531953</guid></item><item><title><![CDATA[New comment by billmalarky in "GPT-5"]]></title><description><![CDATA[
<p>Ah, I probably should have listed some of the assumptions I'm developing it on top of:<p>1) Regarding the "generation is how learning occurs" claim, I'm going off of this:<p><a href="https://www.learningscientists.org/blog/2024/3/7/how-does-retrieval-improve-new-learning" rel="nofollow">https://www.learningscientists.org/blog/2024/3/7/how-does-re...</a><p>Granted, that article refers to retrieval specifically as one major way we learn, and of course learning incorporates many dimensions. But it seems a bit self-evident that retrieval occurs heavily during active problem solving (i.e., "generation"), and less so during passive learning (i.e., just reading/consuming info).<p>From personal experience, I always noticed I learned much more by doing than by consuming documentation alone.<p>But yes, I admit this assumption and my own personal experience/bias are doing a lot of heavy lifting for me...<p>2) Regarding the "optimal AI productivity process" (AI Generates > Human Validates > Loop)<p>I'm using Karpathy's productivity loop described in his AI startup school talk last month:<p><a href="https://youtu.be/LCEmiRjPEtQ?t=1327" rel="nofollow">https://youtu.be/LCEmiRjPEtQ?t=1327</a><p>Does this help make it more concrete, Swyx? (Name-dropping you here since I'm pretty sure you've got a social listener set for your handle ;) Would love to hear your thoughts straight from the hip based on your own personal experiences.<p>Full disclosure: I'm not trying to get too academic about this. In all honesty, I'm really trying to get to an informal theory that's useful and practical enough that it can be turned into a regular business process for rapid professional development.</p>
]]></description><pubDate>Thu, 07 Aug 2025 23:52:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44831831</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=44831831</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44831831</guid></item><item><title><![CDATA[New comment by billmalarky in "GPT-5"]]></title><description><![CDATA[
<p>Hi Swyx, I always appreciate your insights, and something you wrote really resonated with a personal theory I've been developing:<p>>"While I never use AI for personal writing (because I have a strong belief in writing to think)"<p>The optimal AI productivity process is starting to look like:<p>AI Generates > Human Validates > Loop<p>Yet cognitive generation is how humans learn and develop cognitive strength, as well as how they maintain such strength.<p>Similar to how physical activity is how muscles/bone density/etc. grow, and how body tissues are maintained.<p>Physical technology freed us from the hard physical labor that kept our bodies in shape -- at a cost of physical atrophy.<p>AI seems to have a similar effect on our minds. AI will accelerate our cognitive productivity and allow for cognitive convenience -- at a cost of cognitive atrophy.<p>At present we must be intentional about building/maintaining physical strength (dedicated strength training, cardio, etc.).<p>Soon we will need to be intentional about building/maintaining cognitive strength.<p>I suspect the workday/week of the future will be split between AI-on-a-leash work for optimal productivity and carve-outs for dedicated AI-enhanced learning solely for building/maintaining cognitive health (where productivity is not the goal; building/maintaining cognition is). Similar to how we carve out time for working out.<p>What are your thoughts on this? Based on what you wrote above, it seems you have similar feelings?<p>Is there a name for this theory?<p>If not, can you coin one? You're great at that :)</p>
]]></description><pubDate>Thu, 07 Aug 2025 18:24:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44828420</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=44828420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44828420</guid></item><item><title><![CDATA[New comment by billmalarky in "Gemini Embedding: Powering RAG and context engineering"]]></title><description><![CDATA[
<p>word ;)</p>
]]></description><pubDate>Fri, 01 Aug 2025 18:49:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44760837</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=44760837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44760837</guid></item><item><title><![CDATA[New comment by billmalarky in "Gemini Embedding: Powering RAG and context engineering"]]></title><description><![CDATA[
<p>Search tool calling is RAG. Maybe we should call it a "RAG Agent" to be more en vogue, heh. But RAG is not just similarity search on embeddings in vector DBs. RAG is any type of retrieval + context-injection step prior to inference.<p>Heck, the RAG Agent could run cosine similarity against your vector DB in addition to grep, FTS queries, KB API calls, whatever, to do wide recall (candidate generation) and then rerank (relevance prioritization) all the results.<p>You are probably correct that for most use cases search tool calling makes more practical sense than embedding similarity search to power RAG.</p>
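<p>The wide-recall-then-rerank shape can be sketched in a few lines. Everything below is a toy stand-in: keyword overlap plays the role of grep/FTS, a character-frequency vector plays the role of real embeddings, and all function names are invented for illustration:</p>

```python
# Minimal sketch of the "wide recall (candidate generation), then rerank
# (relevance prioritization)" RAG pattern.
import math

DOCS = {
    "d1": "cosine similarity search over embeddings",
    "d2": "grep-style keyword search in source files",
    "d3": "full-text search with BM25 ranking",
}

def keyword_recall(query, docs):
    """Candidate generation #1: naive keyword overlap (stand-in for grep/FTS)."""
    terms = set(query.lower().split())
    return {doc_id for doc_id, text in docs.items()
            if terms & set(text.lower().split())}

def embed(text):
    """Toy 'embedding': a character-frequency vector (stand-in for a real model)."""
    v = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1.0
    return v

def vector_recall(query, docs):
    """Candidate generation #2: cosine similarity over the toy embeddings."""
    qv = embed(query)
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0
    return {doc_id for doc_id, text in docs.items() if cos(qv, embed(text)) > 0.1}

def rerank(query, candidate_ids, docs):
    """Relevance prioritization: score the merged candidate pool, best first."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(docs[d].lower().split())), d) for d in candidate_ids]
    return [d for score, d in sorted(scored, reverse=True)]

query = "keyword search"
candidates = keyword_recall(query, DOCS) | vector_recall(query, DOCS)  # wide recall
context = rerank(query, candidates, DOCS)  # inject top docs before inference
```

<p>Swap any retriever in or out of the union and the downstream rerank step doesn't change -- that separation is what makes the pattern agent-friendly.</p>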
]]></description><pubDate>Thu, 31 Jul 2025 18:47:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=44748805</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=44748805</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44748805</guid></item><item><title><![CDATA[New comment by billmalarky in "The skill of the future is not 'AI', but 'Focus'"]]></title><description><![CDATA[
<p>I built a distributed software engineering firm pre-COVID, so all of our clients were onsite even though we were fully remote. My engineers plugged into the engineering teams of our clients, so it's not like we were building on the side and just handing over deliverables; we had to fully integrate into the client teams.<p>So we had to solve this problem pre-COVID, and the solution remained the same during the pandemic when every org went fully remote (at least temporarily).<p>There is no "one size fits all" approach because each engineer is different. We had dozens of engineers on our team, and you learn that people are very diverse in how they think/operate.<p>But we came up with a framework that was really successful.<p>1) Good faith is required: You mention personnel abusing time/trust; that's a different issue entirely, and no framework will be successful if people refuse to comply. This system only works if teammates trust the person. Terminate someone who can't be trusted.<p>2) "Know thyself": Many engineers wouldn't necessarily even know how THEY operated best (whether they needed large chunks of focus time, were fine multi-tasking, etc.). We'd have them make a best guess when onboarding and then iterate and update as they figured out how they worked best.<p>3) Proactively propagate your communication standard: Most engineers would want large chunks of uninterrupted focus time, so we would tell them to EXPLICITLY tell their teammates and any other stakeholders WHEN they would be focusing and unresponsive (standardize it via schedule), and WHY (i.e., sell the idea). Bad feelings or optics are ALWAYS simply a matter of miscommunication so long as good faith exists. We'd also have them explain "escalation patterns", i.e., "if something is truly urgent, DM me on Slack a few times and, finally, call my phone."<p>4) Set comms status: Really this is just Slack/Teams.
But basically, as a soft reminder to stakeholders, set your Slack status to "heads down building" or something so people remember that you aren't available due to focus time. It's really easy to sync Slack status to calendar blocks to automate this.<p>We also found that breaking the day into async task time and sync task time really helped optimize. Async tasks are tasks that can be completed in small chunks of time, like code review, checking email, Slack, etc. These might be large time sinks in aggregate, but generally you can break them into small time blocks and still be successful. We would have people set up their day so all the async tasks got done when they were already paying a context-switching cost, e.g. around scheduled agile cadence meetings. If you're doing a standup meeting, you're already going to be knocked out of flow, so might as well use that time to also do PR review, async comms, etc. Naturally we had people stack their meetings when possible instead of peppering them throughout the day (more on how this was accomplished below).<p>Anyways, sometimes when an engineer of ours joined a new team, there might be a political challenge in not fitting into the existing "mold" of how that team communicated (if that team's comms standard didn't jibe with our engineer's). This resolved itself quickly every single time once our engineer proved to be much more productive/effective than the existing engineers (who were kneecapped by the terrible, distracting existing standard of meetings, constant Slack interruptions, etc.). We would even go as far as to tell stakeholders our engineers would not be attending less important meetings (not immediately -- once we had already proven ourselves a bit).
The optics around this weren't great at first, but again, our engineers would start 1.5-2X'ing the productivity of the in-house engineers, and the political issues would melt away very quickly.<p>TL;DR - Operate in good faith, decide your own best communication standard, propagate that standard out to your stakeholders explicitly, and deliver -- people will respect you and also your comms standard.</p>
]]></description><pubDate>Sun, 20 Apr 2025 18:05:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43745358</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=43745358</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43745358</guid></item><item><title><![CDATA[New comment by billmalarky in "Wasting Inferences with Aider"]]></title><description><![CDATA[
<p>I was lucky enough to have a few conversations with Scott a month or so ago, and he is doing some really compelling work around the AI SDLC, creating a factory-line approach to building software. Seriously folks, I recommend following this guy closely.<p>There's another guy in this space I know who's doing similarly incredible things, but he doesn't really speak about it publicly, so I don't want to discuss it w/o his permission. I'm happy to make an introduction for those interested; just hmu (check my profile for how).<p>Really excited to see you on the FP of HN, Scott!</p>
]]></description><pubDate>Sun, 13 Apr 2025 15:20:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43673418</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=43673418</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43673418</guid></item><item><title><![CDATA[New comment by billmalarky in "But what if I want a faster horse?"]]></title><description><![CDATA[
<p>^ this guy knows Jobs To Be Done theory ;)<p>For those who don't, reading "Competing Against Luck" by Clayton Christensen will dramatically improve your ability to create successful products/services.</p>
]]></description><pubDate>Fri, 11 Apr 2025 19:45:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=43657716</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=43657716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43657716</guid></item><item><title><![CDATA[New comment by billmalarky in "Claude 3.7 Sonnet and Claude Code"]]></title><description><![CDATA[
<p>Yes. Absolutely it is. For different workloads it is an insanely effective tool.</p>
]]></description><pubDate>Tue, 25 Feb 2025 20:28:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=43176940</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=43176940</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43176940</guid></item><item><title><![CDATA[New comment by billmalarky in "Claude 3.7 Sonnet and Claude Code"]]></title><description><![CDATA[
<p>Hi Paul, I've been following the aider project for about a year now to develop an understanding of how to build SWE agents.<p>I was at the AI Engineering Summit in NYC last week and met an (extremely senior) staff AI engineer doing somewhat unbelievable things with aider. Shocking things, tbh.<p>Is there a good way to share stories about real-world aider projects like this with you directly (if I can get approval from him)? Not sure posting on a public forum is appropriate, but I think you would be really interested to hear how people are using this tool at the edge.</p>
]]></description><pubDate>Tue, 25 Feb 2025 20:25:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=43176914</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=43176914</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43176914</guid></item><item><title><![CDATA[New comment by billmalarky in "Show HN: Missio – AI Agent and Copilot for Your APIs"]]></title><description><![CDATA[
<p>Hey! This is exciting to see! Hi Earl and Oisin! (I've had the pleasure of meeting Earl and Oisin face to face a few times. Really friendly and smart guys; fwiw, based on my convos they are very serious about building a compelling product. Excited to see it on HN!)</p>
]]></description><pubDate>Mon, 13 Jan 2025 21:33:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=42689730</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=42689730</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42689730</guid></item><item><title><![CDATA[New comment by billmalarky in "We can all be AI engineers – and we can do it with open source models"]]></title><description><![CDATA[
<p>You find issues when they surface during your actual use case (and by "smoke testing" around your real-world use case). You can often "fix" issues in the base model with additional training (supervised fine-tuning, reinforcement learning w/ DPO, etc.).<p>There's a lot of tooling out there making this accessible to someone with a solid full-stack engineering background.<p>Training an LLM from scratch is a different beast, but that knowledge honestly isn't too practical for everyday engineers: even if you had the knowledge, you wouldn't necessarily have the resources to train a competitive model. Of course, you could command a high salary working for the orgs who do have those resources! One caveat is that there are orgs doing serious post-training, even with unsupervised techniques, to take a base model and reeaaaaaally bake in domain-specific knowledge/context. Honestly, I wonder whether even that is out of reach. You get a lot of wiggle room and margin for error when post-training a well-built base model because of transfer learning.</p>
]]></description><pubDate>Thu, 14 Nov 2024 17:34:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=42138706</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=42138706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42138706</guid></item><item><title><![CDATA[New comment by billmalarky in "Using reinforcement learning and $4.80 of GPU time to find the best HN post"]]></title><description><![CDATA[
<p>This post is using regression to build a reward model. The reward model will then be used (in a future post) to build the overall RL system.<p>Here's the relevant text from the article:<p>>In this post we’ll discuss how to build a reward model that can predict the upvote count that a specific HN story will get. And in follow-up posts in this series, we’ll use that reward model along with reinforcement learning to create a model that can write high-value HN stories!</p>
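<p>A toy version of "regression as a reward model" makes the idea concrete: fit a model that maps story features to a predicted upvote count, then use that prediction as a score. The features, data, and plain gradient-descent fit below are all invented for illustration and far simpler than what the article actually does:</p>

```python
# Fit a linear "reward model" predicting upvotes from two made-up story
# features, using plain gradient descent on mean squared error.

# features: [title_length / 100, is_show_hn] -> observed upvotes
DATA = [
    ([0.40, 1.0], 120.0),
    ([0.60, 0.0], 35.0),
    ([0.30, 1.0], 110.0),
    ([0.80, 0.0], 20.0),
]

def predict(w, b, x):
    """Linear reward model: predicted upvote count for feature vector x."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fit(data, lr=0.1, epochs=20000):
    """Minimize mean squared error between predictions and observed upvotes."""
    w, b = [0.0, 0.0], 0.0
    n = len(data)
    for _ in range(epochs):
        gw, gb = [0.0, 0.0], 0.0
        for x, y in data:
            err = predict(w, b, x) - y  # gradient of squared error
            gw = [g + err * xi for g, xi in zip(gw, x)]
            gb += err
        w = [wi - lr * g / n for wi, g in zip(w, gw)]
        b -= lr * gb / n
    return w, b

w, b = fit(DATA)
# The fitted model now scores unseen stories; per the quoted text, the
# follow-up posts use such a score as the reward signal for RL.
reward = predict(w, b, [0.35, 1.0])  # hypothetical short-titled Show HN story
```

<p>The RL step then optimizes a generator against this score rather than against human labels directly -- which is exactly why the reward model's quality matters so much.</p>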
]]></description><pubDate>Mon, 28 Oct 2024 20:08:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=41975635</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=41975635</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41975635</guid></item><item><title><![CDATA[New comment by billmalarky in "LLM Fine-Tuning Best Practices: Base Models Proprietary/Open Source, Large/Small"]]></title><description><![CDATA[
<p>Hey HN! I've been working on a series capturing industry best practices for producing fine-tuned LLMs. Our first post covered training data preparation, and in this one we talk about how to select your base model and hyperparameters, covering both closed and open models as well as models of different sizes. This was created in collaboration with Kyle Corbitt at OpenPipe (kcorbitt), who will be in the comments as well!<p>We cover:<p>- Proprietary (well, mainly OpenAI) vs. open-source models: OAI has really great performance w/ relatively small training datasets, but you hit a ceiling on max performance sooner than you do w/ open-source models. We're not sure exactly what drives this, but it very well could be that OAI has made some technical decisions under the hood that lean towards this "Red Mage" approach in order to serve a broader audience (i.e., users w/ less training data).<p>- Large vs. small models: The main thing is that larger models typically let you "get away" with less training data, all else equal. But where possible you want to deploy the smallest model that achieves acceptable performance in prod. Smaller is less "costly" across a variety of dimensions (not just price).<p>- Hyperparameter tuning: We cover this in some detail for those who are curious, but to be frank, hparam tuning is generally not a high-ROI use of your engineering time. Plow that time into dataset curation instead :)<p>Hope this is useful for folks! Happy to answer questions here (and hope to learn something new too).</p>
]]></description><pubDate>Wed, 28 Aug 2024 12:39:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=41378810</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=41378810</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41378810</guid></item><item><title><![CDATA[LLM Fine-Tuning Best Practices: Base Models Proprietary/Open Source, Large/Small]]></title><description><![CDATA[
<p>Article URL: <a href="https://openpipe.ai/blog/fine-tuning-best-practices-chapter-2-models">https://openpipe.ai/blog/fine-tuning-best-practices-chapter-2-models</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41378809">https://news.ycombinator.com/item?id=41378809</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 28 Aug 2024 12:39:59 +0000</pubDate><link>https://openpipe.ai/blog/fine-tuning-best-practices-chapter-2-models</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=41378809</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41378809</guid></item><item><title><![CDATA[New comment by billmalarky in "LLM Fine-Tuning Best Practices for Training Data Curation"]]></title><description><![CDATA[
<p>I recently interviewed Kyle Corbitt (YC23 founder), who has been deeply involved in the LLM fine-tuning space for the last couple of years. Much as with pre-training, most of the performance gains ultimately delivered by a fine-tuned model come from well-planned and well-executed training data curation.<p>I whipped up this article sharing important best-practice patterns that have emerged from Kyle's experience observing the fine-tuning of thousands of models across a wide variety of downstream tasks. Some validate long-held understandings in the space; others were quite surprising to me (especially the sample efficiency of modern SOTA LLMs!).<p>Hope sharing this knowledge helps someone out there! And please share any additional insight you have in the comments so I can learn even more about this topic.</p>
]]></description><pubDate>Fri, 02 Aug 2024 16:03:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=41139962</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=41139962</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41139962</guid></item><item><title><![CDATA[LLM Fine-Tuning Best Practices for Training Data Curation]]></title><description><![CDATA[
<p>Article URL: <a href="https://openpipe.ai/blog/fine-tuning-best-practices-series-introduction-and-chapter-1-training-data">https://openpipe.ai/blog/fine-tuning-best-practices-series-introduction-and-chapter-1-training-data</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41139961">https://news.ycombinator.com/item?id=41139961</a></p>
<p>Points: 1</p>
<p># Comments: 2</p>
]]></description><pubDate>Fri, 02 Aug 2024 16:03:18 +0000</pubDate><link>https://openpipe.ai/blog/fine-tuning-best-practices-series-introduction-and-chapter-1-training-data</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=41139961</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41139961</guid></item><item><title><![CDATA[New comment by billmalarky in "Show HN: How we leapfrogged traditional vector based RAG with a 'language map'"]]></title><description><![CDATA[
<p>Greatly appreciate the suggestion! But I recently joined another YC-backed gen-AI startup in the fine-tuning space (OpenPipe) :-D<p>Speaking of which, there's a good chance fine-tuned models will be a component of your fully optimized codebase -> wiki automation process at some point in the future -- likely to increase the consistency/reliability of LLM responses as clear patterns start to emerge in the process. If y'all decide to layer that on, or even explore it as an optimization strategy, hit us (or me directly) up. We love collaborating w/ engineers working on problems at the edge like this; aside from how engaging the problems themselves are, it also helps us build our best possible product!<p>Very excited to follow your journey! Just sent you a LI request.<p>Thanks again so much for sharing your wisdom!</p>
]]></description><pubDate>Fri, 19 Jul 2024 19:09:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=41009963</link><dc:creator>billmalarky</dc:creator><comments>https://news.ycombinator.com/item?id=41009963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41009963</guid></item></channel></rss>