<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: bashfulpup</title><link>https://news.ycombinator.com/user?id=bashfulpup</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 00:55:46 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=bashfulpup" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by bashfulpup in "LLM Workflows then Agents: Getting Started with Apache Airflow"]]></title><description><![CDATA[
<p>The other commenter said it right. These work and are fine, but you lose the legacy stuff. If you know your limits and where the eventual system will end up, it's great and probably better.<p>If you are building an expandable long-term system and you want all the goodies baked in, choose Airflow.<p>Pretty much the same as any architecture choice: ugly/hard often means control and features; pretty/easy means less of both.<p>On the surface the differences are not very noticeable other than the learning curve of getting started.</p>
]]></description><pubDate>Wed, 02 Apr 2025 21:10:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43561656</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43561656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43561656</guid></item><item><title><![CDATA[New comment by bashfulpup in "OpenAI closes $40B funding round, startup now valued at $300B"]]></title><description><![CDATA[
<p>Eh, can you really though?<p>Let's be real: if OAI is losing money on a $200 subscription with hyper-advanced efficiency methods, are you really going to save money?<p>You should also enjoy the free VC money while it lasts, just like cheap Uber rides were great until the VC money dried up.<p>I've hosted and run a lot of large LLM experiments myself. You are in no way "saving" money doing so. It's also a giant pain to be avoided if possible.<p>The best thing to do right now is enjoy the cheap AI, and when the free money stops, use the winning mature open-source platform.</p>
]]></description><pubDate>Tue, 01 Apr 2025 00:37:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=43541605</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43541605</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43541605</guid></item><item><title><![CDATA[New comment by bashfulpup in "LLM Workflows then Agents: Getting Started with Apache Airflow"]]></title><description><![CDATA[
<p>This space is honestly a mess. I did an in-depth survey around 1.5 years ago, and my eventual conclusion was just to build with Airflow.<p>You either get simplicity, with the caveat that your systems need to align perfectly.<p>Or you get complexity, but it will work with basically anything (Airflow).</p>
]]></description><pubDate>Tue, 01 Apr 2025 00:14:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=43541460</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43541460</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43541460</guid></item><item><title><![CDATA[New comment by bashfulpup in "Most AI value will come from broad automation, not from R & D"]]></title><description><![CDATA[
<p>Very typical SV argument that R&D is "complex" and everything else is "simple".<p>Would it have blown your mind if I had told you 10 years ago that we'd have AI that can do math and code better than 99% of humans, but that ordering a hotdog on DoorDash would be cutting-edge and barely doable?<p>I don't disagree that "common" tasks are more valuable. I only argue that the claim that these are easily automatable is a viewpoint based on ignorance. RPA has been around for over a decade and still isn't used for many tasks. AI is largely the same: until we get massive, unrestricted access to the data for it, we will not automate it.</p>
]]></description><pubDate>Sat, 22 Mar 2025 22:37:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=43449207</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43449207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43449207</guid></item><item><title><![CDATA[New comment by bashfulpup in "FOSS infrastructure is under attack by AI companies"]]></title><description><![CDATA[
<p>Anything we humans deem private in nature from other humans.</p>
]]></description><pubDate>Thu, 20 Mar 2025 15:21:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=43424594</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43424594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43424594</guid></item><item><title><![CDATA[New comment by bashfulpup in "FOSS infrastructure is under attack by AI companies"]]></title><description><![CDATA[
<p>The entire reason bots are so aggressive is that they are cheap to run.<p>If a GPU were required per scrape, then >90% simply couldn't afford it at scale.</p>
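<p>The affordability point can be sketched with a quick back-of-the-envelope calculation. All figures here are hypothetical assumptions for illustration, not measured costs:</p>

```python
# Hypothetical cost sketch: why requiring a GPU per scrape would price out
# most bots. Every figure below is an illustrative assumption.

PAGES_PER_MONTH = 10_000_000   # assumed crawl volume for a mid-size scraper

# Assumed costs in micro-dollars per page fetched:
CPU_COST = 1      # a plain HTTP fetch is nearly free
GPU_COST = 500    # a mandatory GPU computation attached to each fetch

cpu_bill = PAGES_PER_MONTH * CPU_COST / 1_000_000   # dollars per month
gpu_bill = PAGES_PER_MONTH * GPU_COST / 1_000_000

print(f"CPU-only: ${cpu_bill:,.0f}/mo   GPU-gated: ${gpu_bill:,.0f}/mo")
```

<p>Under these made-up numbers, the same crawl goes from a rounding error to a real budget line, which is the commenter's point about scale.</p>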
]]></description><pubDate>Thu, 20 Mar 2025 15:12:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=43424506</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43424506</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43424506</guid></item><item><title><![CDATA[New comment by bashfulpup in "Ask HN: Any insider takes on Yann LeCun's push against current architectures?"]]></title><description><![CDATA[
<p>We degrade, and I think we are far more valuable than one model.</p>
]]></description><pubDate>Fri, 14 Mar 2025 22:54:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43368190</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43368190</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43368190</guid></item><item><title><![CDATA[New comment by bashfulpup in "Ask HN: Any insider takes on Yann LeCun's push against current architectures?"]]></title><description><![CDATA[
<p>Also possible, and a fair point. My point is that it's a "tiny" solution that we can scale.<p>I could revise that by saying a kid with a whiteboard.<p>It's an Einstein-times-ten moment, so who knows when that'll happen.</p>
]]></description><pubDate>Fri, 14 Mar 2025 22:53:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43368179</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43368179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43368179</guid></item><item><title><![CDATA[New comment by bashfulpup in "Ask HN: Any insider takes on Yann LeCun's push against current architectures?"]]></title><description><![CDATA[
<p>He's right, but at the same time wrong. Current AI methods are essentially scaled-up methods that we learned decades ago.<p>These long-horizon (AGI) problems have been there since the very beginning. We have never had a solution to them. RL assumes we know the future, which is a poor proxy. These energy-based methods fundamentally do very little that an RNN didn't do long ago.<p>I worked on higher-dimensionality methods, which is a very different angle. My take is that it's about the way we scale dependencies between connections. The human brain makes and breaks a massive number of neuron connections daily. Scaling the dimensionality would imply that a single connection could be scaled to encompass significantly more "thoughts" over time.<p>Additionally, the true solution to these problems is as likely to be found by a kid with a laptop as by a top researcher. If you find the solution to CL on a small AI model (MNIST), you solve it at all scales.</p>
]]></description><pubDate>Fri, 14 Mar 2025 22:09:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=43367802</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43367802</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43367802</guid></item><item><title><![CDATA[New comment by bashfulpup in "Microsoft is plotting a future without OpenAI"]]></title><description><![CDATA[
<p>Long explanation. In simple terms, you can't use a fixed box to solve an unbounded problem space. If your problem fits within the box, it works; if it doesn't, you need CL.<p>I tried to solve this by expanding the embedding/retrieval space, but realized it's the same as CL, and that by my definition of it I was trying to solve AGI. I tried a lot of unique algorithms and architectures but, unsurprisingly, I never solved this.<p>I am thankful I finally understood this quote.<p>"The first gulp from the glass of natural sciences will turn you into an atheist, but at the bottom of the glass God is waiting for you."</p>
]]></description><pubDate>Sat, 08 Mar 2025 21:12:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=43303588</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43303588</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43303588</guid></item><item><title><![CDATA[New comment by bashfulpup in "Microsoft is plotting a future without OpenAI"]]></title><description><![CDATA[
<p>In-context learning versus learning via training: both are things whose mechanisms we barely understand.<p>RAG is basically a perfect example for understanding the limits of in-context learning and AI in general. Its faults are easier to understand, but they are the same as any AI-vs-AGI problem.<p>I could go on, but CL is a massive gap in our knowledge and likely the only thing missing for AGI.</p>
]]></description><pubDate>Sat, 08 Mar 2025 05:58:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=43297860</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43297860</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43297860</guid></item><item><title><![CDATA[New comment by bashfulpup in "Microsoft is plotting a future without OpenAI"]]></title><description><![CDATA[
<p>Continual learning. It's a barrier that has been there from the very start, and we've never had a solution to it.<p>There are no solutions even at the small scale. We fundamentally don't understand what it is or how to do it.<p>If you could solve it perfectly on MNIST, you could just scale, and then we get AGI.</p>
]]></description><pubDate>Fri, 07 Mar 2025 21:07:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43294669</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43294669</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43294669</guid></item><item><title><![CDATA[New comment by bashfulpup in "Microsoft is plotting a future without OpenAI"]]></title><description><![CDATA[
<p>That implies learning. Solve continual learning and you have AGI.<p>Wouldn't it have amazed you to learn 10 years ago that we would have AI that could do math and code better than 99% of all humans, and at the same time it could barely order you a hotdog on DoorDash?<p>Fundamental ability is lacking. AGI is just as likely to be solved by OpenAI as by a college student with a laptop. It could be 1 year or 50 years; we cannot predict when.</p>
]]></description><pubDate>Fri, 07 Mar 2025 21:04:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=43294643</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=43294643</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43294643</guid></item><item><title><![CDATA[New comment by bashfulpup in "Avoiding outrage fatigue while staying informed"]]></title><description><![CDATA[
<p>Clear your history often. My YouTube is actually incredible: massive variety and useful topics.<p>I clear it about once every two weeks to a month, depending on how many of the same topics I see.<p>It works really well: if you ignore the content you saw before, it forces the algorithm to find unique content, because it thinks you don't like the stuff you've seen.<p>That, and cleaning out your subscription list. Easily the best platform I have as of now because of that.</p>
]]></description><pubDate>Wed, 05 Feb 2025 20:37:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=42954748</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=42954748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42954748</guid></item><item><title><![CDATA[New comment by bashfulpup in "Ask HN: SWEs how do you future-proof your career in light of LLMs?"]]></title><description><![CDATA[
<p>Long-horizon problems are a completely unsolved problem in AI.<p>See the GAIA benchmark. While it will surely be beaten soon enough, the point is that we do exponentially longer-horizon tasks than that benchmark every single day.<p>It's very possible we will move away from raw code implementation, but the core concept of solving long-horizon problems via multiple interconnected steps is exponentially far away. If AI can achieve that, then we are all out of a job, not just some of us.<p>Take two competing companies that have a duopoly on a market.<p>Company 1 uses AI and fires 80% of its workforce.<p>Company 2 uses AI and keeps its workforce.<p>AI in its current form is a multiplier, so we will see Company 2 massively outcompete the first as each employee now performs 3-10 people's tasks. Company 2's output per person is hugely increased, which significantly weakens the first company. Standard market forces haven't changed.<p>The reality, as I see it, is that interns will now perform at senior-SWE level, senior SWEs will perform at VP-of-engineering level, and VPs of engineering will perform at nation-state levels of output.<p>We will enter an age where goliath companies are commonplace: hundreds or even thousands of multi-trillion-dollar companies, with billion-dollar startups expected almost at launch.<p>Again, that's unless we magically find a solution to long-horizon problems (which we haven't even slightly found). That technology could be 1 year or 100 years away. We're waiting on our generation's Einstein to discover it.</p>
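<p>The two-company arithmetic above can be sketched with made-up numbers. The headcount and the multiplier are illustrative assumptions, using the low end of the "3-10x" claim:</p>

```python
# Back-of-the-envelope version of the duopoly example.
# All numbers are hypothetical, chosen only to illustrate the argument.

HEADCOUNT = 1000      # assume both companies start with 1000 employees
AI_MULTIPLIER = 3     # low end of the "each employee does 3-10 people's tasks" claim

# Company 1: adopts AI, then fires 80% of its workforce.
company1_output = HEADCOUNT * 0.2 * AI_MULTIPLIER   # "person-units" of output

# Company 2: adopts AI and keeps everyone.
company2_output = HEADCOUNT * AI_MULTIPLIER

print(company1_output, company2_output)
```

<p>At the same multiplier, Company 2 out-produces Company 1 five-fold, which is the "multiplier, not replacement" point.</p>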
]]></description><pubDate>Tue, 17 Dec 2024 02:12:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=42437672</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=42437672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42437672</guid></item><item><title><![CDATA[New comment by bashfulpup in "Full LLM training and evaluation toolkit"]]></title><description><![CDATA[
<p>Pythia is stupidly easy to use.<p>Then hook up a simple test harness. That's a grand total of about 3 commands: git pull, install, then point it at a model and run.</p>
]]></description><pubDate>Sun, 24 Nov 2024 20:13:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=42230143</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=42230143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42230143</guid></item><item><title><![CDATA[New comment by bashfulpup in "Fraud, so much fraud"]]></title><description><![CDATA[
<p>I love this, do tell the direction to be nudged in.<p>I wish to experience this new level of understanding.</p>
]]></description><pubDate>Fri, 27 Sep 2024 21:29:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=41675715</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=41675715</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41675715</guid></item><item><title><![CDATA[New comment by bashfulpup in "Learning to Reason with LLMs"]]></title><description><![CDATA[
<p>There is little to no research showing that modern AI can perform even the simplest long-running task without training data on that exact problem.<p>To my knowledge, there is no current AI system that can replace a white-collar worker in any multistep task. The only thing they can do is support the worker.<p>Most jobs are safe for the foreseeable future. If your job is highly repetitive and a company can produce a perfect dataset of it, I'd worry.<p>Jobs like factory work and call-center support are in danger, because the work is perfectly monitorable.<p>Watch the GAIA benchmark. It's not nearly the complexity of a real-world job, but beating it would signal the start of an actual agentic system being possible.</p>
]]></description><pubDate>Thu, 12 Sep 2024 20:22:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=41525109</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=41525109</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41525109</guid></item><item><title><![CDATA[New comment by bashfulpup in "Solving the out-of-context chunk problem for RAG"]]></title><description><![CDATA[
<p>Again, that's why I said it is challenging.<p>I regularly fine-tune models with good results and little damage to the base functionality.<p>It is possible, but it's too complex for the majority of users. It requires a lot of work per dataset you want trained on.</p>
]]></description><pubDate>Thu, 25 Jul 2024 07:11:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=41065645</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=41065645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41065645</guid></item><item><title><![CDATA[New comment by bashfulpup in "Solving the out-of-context chunk problem for RAG"]]></title><description><![CDATA[
<p>This is true if you don't know what you're doing, so it is good advice for the vast majority.<p>Fine-tuning is just training. You can completely change the model if you want, and make it learn anything you want.<p>But there are MANY challenges in doing so.</p>
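<p>The "fine-tuning is just training" claim can be illustrated with a toy model: pretrain a linear model on one dataset, then continue the same gradient-descent loop from the pretrained weights on a second dataset. This is purely a sketch of the mechanism, not of real LLM fine-tuning, which differs in scale and machinery but not in this basic idea:</p>

```python
# Toy illustration: "fine-tuning" is the same training loop, resumed from
# pretrained weights on new data. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on mean-squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Two noiseless synthetic tasks that differ in one target weight.
X_a = rng.normal(size=(100, 3)); y_a = X_a @ np.array([1.0, 2.0, 3.0])
X_b = rng.normal(size=(100, 3)); y_b = X_b @ np.array([1.0, 2.0, 5.0])

w = train(np.zeros(3), X_a, y_a)   # "pretraining" recovers [1, 2, 3]
w_ft = train(w, X_b, y_b)          # "fine-tuning": same loop, new data

print(w.round(2), w_ft.round(2))
```

<p>The fine-tuned weights completely adapt to the second task, which also hints at the challenge the comment alludes to: nothing in plain training preserves the original behavior (catastrophic forgetting).</p>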
]]></description><pubDate>Thu, 25 Jul 2024 06:27:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=41065334</link><dc:creator>bashfulpup</dc:creator><comments>https://news.ycombinator.com/item?id=41065334</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41065334</guid></item></channel></rss>