<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: lubesGordi</title><link>https://news.ycombinator.com/user?id=lubesGordi</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 08:12:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=lubesGordi" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by lubesGordi in "GPT-5.4"]]></title><description><![CDATA[
<p>It's funny that the context window size is still such a limitation. The whole LLM 'thing' is compression, so why can't we figure out some equally brilliant way of handling context besides just storing text somewhere and feeding it to the LLM? RAG is the best attempt so far. We need something like a dynamic, in-flight model or data structure generated from the context that the agent can query as it goes.</p>
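A toy sketch of that idea, purely illustrative and not any real system: build a queryable structure over context chunks as they stream in, so an agent can retrieve the relevant pieces instead of re-reading the raw text every time.

```python
from collections import defaultdict

class ContextIndex:
    """Toy in-flight index over context chunks (illustrative only)."""

    def __init__(self):
        self.chunks = []               # raw chunk text, in arrival order
        self.index = defaultdict(set)  # word -> set of chunk ids

    def add(self, text):
        """Ingest a chunk of context and index its words."""
        chunk_id = len(self.chunks)
        self.chunks.append(text)
        for word in text.lower().split():
            self.index[word].add(chunk_id)
        return chunk_id

    def query(self, word):
        """Return only the chunks mentioning a word, not the whole context."""
        return [self.chunks[i] for i in sorted(self.index[word.lower()])]

ctx = ContextIndex()
ctx.add("the build failed with a linker error")
ctx.add("tests passed after the fix")
print(ctx.query("linker"))  # only the relevant chunk comes back
```

A real version would presumably use embeddings rather than keywords, but the shape is the same: the structure is built incrementally as context arrives, and queried instead of replayed.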
]]></description><pubDate>Thu, 05 Mar 2026 21:55:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47267854</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=47267854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47267854</guid></item><item><title><![CDATA[New comment by lubesGordi in "No Skill. No Taste"]]></title><description><![CDATA[
<p>I don't know, man. I'm writing a flashcard app, and I like it. It makes me happy and it works the way I want, exactly how I want, because I could never get into Quizlet. Whatever. Maybe others will like it, maybe not; I don't care.<p>Taste is subjective. Having a million todo apps? Great. Maybe someone I know will find one they like and tell me about it. Maybe I'll find one that doesn't suck. Maybe I'll just make my own.<p>One thing I won't do, though, is complain about how there are now a million todo apps that aren't up to my standards. Everyone being able to make their own apps however they want is a beautiful thing.</p>
]]></description><pubDate>Fri, 20 Feb 2026 17:01:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47090599</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=47090599</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47090599</guid></item><item><title><![CDATA[New comment by lubesGordi in "Malicious skills targeting Claude Code and Moltbot users"]]></title><description><![CDATA[
<p>To your point, from the article: "To me, giving a Claude skill all your credentials, and access to everything important to you, and then managing it all via Telegram seems ludicrous, but who am I to judge."</p>
]]></description><pubDate>Fri, 30 Jan 2026 19:03:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46828472</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=46828472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46828472</guid></item><item><title><![CDATA[New comment by lubesGordi in "The highest quality codebase"]]></title><description><![CDATA[
<p>So now you know. You can get Claude to write you a ton of unit tests and also improve your static-typing situation. Now you can restrict your prompt!</p>
]]></description><pubDate>Thu, 11 Dec 2025 18:03:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46234768</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=46234768</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46234768</guid></item><item><title><![CDATA[New comment by lubesGordi in "Jepsen: NATS 2.12.1"]]></title><description><![CDATA[
<p>I don't know about JetStream, but Redis Cluster would only ack writes after replicating to a majority of nodes. I think there is also some config on standalone Redis where you can ack after fsync (which apparently still doesn't guarantee anything, because of buffering in the OS).
In any case, understanding what the ack implies is important, and I'd be frustrated if the JetStream docs were not clear on that.</p>
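A toy sketch of the majority-ack idea (this is the general quorum concept, not Redis's or JetStream's actual implementation): the write is acknowledged to the client only once a majority of nodes report it durable.

```python
def ack_write(replica_confirms, total_replicas):
    """Toy quorum check: ack only once a majority of nodes confirmed.

    replica_confirms: number of nodes that reported the write durable.
    total_replicas:   cluster size, including the primary.
    """
    majority = total_replicas // 2 + 1
    return replica_confirms >= majority

# With 5 nodes, 3 confirmations are enough to ack; 2 are not.
print(ack_write(3, 5))  # True
print(ack_write(2, 5))  # False
```

The point of the exercise: whatever the system, the ack only means as much as the condition that gates it, whether that's "buffered on the primary," "fsynced," or "replicated to a quorum."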
]]></description><pubDate>Mon, 08 Dec 2025 21:33:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46197937</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=46197937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46197937</guid></item><item><title><![CDATA[New comment by lubesGordi in "Alphabet tops $100B quarterly revenue for first time, cloud grows 34%"]]></title><description><![CDATA[
<p>This makes sense to me. Where I work, our AI team set up a couple of H100 cards and is hosting a newer model that uses around 80 GB of VRAM. You can see the GPU utilization in Grafana go to like 80% for seconds as it processes a single request. That was very surprising to me. This is $30k worth of hardware that can support only a couple of users, and maybe only one if you have an agent going. Now, maybe we're doing something wrong, but it's hard to imagine anyone is going to make money hosting billions of dollars of these cards when you're making $20 a month per card. I guess it depends on how active your users are. Hard to imagine Anthropic is right side up here.</p>
]]></description><pubDate>Thu, 30 Oct 2025 13:56:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=45760077</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=45760077</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45760077</guid></item><item><title><![CDATA[New comment by lubesGordi in "DeepSeek OCR"]]></title><description><![CDATA[
<p>So in terms of OCR, does the neural network 'map' the words into an embedding directly, or is it getting a bunch of words like "Hamlet's monologue" and mapping that to an embedding?  Basically what I'm asking is if the neural network image encoder is essentially doing OCR 'internally' when it is coming up with the embedding (if that makes any sense).</p>
]]></description><pubDate>Mon, 20 Oct 2025 16:50:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=45646092</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=45646092</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45646092</guid></item><item><title><![CDATA[New comment by lubesGordi in "Andrej Karpathy – It will take a decade to work through the issues with agents"]]></title><description><![CDATA[
<p>13 minutes in, Andrej talks about how the models don't even really need the knowledge; it would be better to have just a core that holds the algorithms it's learned, a "cognitive core." That sounds awesome, and it would shrink the size of the models for sure. You don't need the entire knowledge of the internet compressed down and stashed in VRAM somewhere. Lots of implications.</p>
]]></description><pubDate>Fri, 17 Oct 2025 21:27:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45622289</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=45622289</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45622289</guid></item><item><title><![CDATA[New comment by lubesGordi in "Show HN: I built a web framework in C"]]></title><description><![CDATA[
<p>Well, I don't know about others here, but I think it's cool. If you can make the setup super readable and get the performance of C, then why not? Especially now, when you can get Claude to write a bunch of the framework for you. Add in whatever you need whenever you need it, and you automatically have a platform-independent web framework that's no bigger than what you need and likely decently performant.</p>
]]></description><pubDate>Thu, 09 Oct 2025 13:52:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45527728</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=45527728</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45527728</guid></item><item><title><![CDATA[New comment by lubesGordi in "Cormac McCarthy's personal library"]]></title><description><![CDATA[
<p>I think it's just a simple matter of aesthetics. Some people find violence ugly and don't like looking at it. Some people think that by looking at it you're somehow coming to a greater understanding of the world or something. Maybe that is the case for some super-sheltered individuals, but I doubt it's the case on the whole.<p>If anyone has any ideas on what the point of violence in art is, I'm open to hearing it. Obviously horror is a genre, and so is gore, and people seem to enjoy being shocked. I don't think that is what McCarthy was going for, though. And he wasn't going for the vengeance-catharsis angle like Tarantino, either.</p>
]]></description><pubDate>Thu, 02 Oct 2025 18:34:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=45453543</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=45453543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45453543</guid></item><item><title><![CDATA[New comment by lubesGordi in "Tau² benchmark: How a prompt rewrite boosted GPT-5-mini by 22%"]]></title><description><![CDATA[
<p>My hypothesis: the length of the prompt shrank, yet it maintained the same amount of information.</p>
]]></description><pubDate>Wed, 17 Sep 2025 17:29:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45278816</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=45278816</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45278816</guid></item><item><title><![CDATA[New comment by lubesGordi in "95% of Companies See 'Zero Return' on $30B Generative AI Spend"]]></title><description><![CDATA[
<p>Agreed, agentic coding is a huge change. Smart startups will be flying, but they aren't representative. Big companies won't change, because the staff will just spend more time shopping online instead of doing more than what is asked of them. Maybe increased retail spend is a better measure of AI efficacy.</p>
]]></description><pubDate>Thu, 21 Aug 2025 16:56:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44975109</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=44975109</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44975109</guid></item><item><title><![CDATA[New comment by lubesGordi in "AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'"]]></title><description><![CDATA[
<p>Does the Unreal API change a bit between versions? I've noticed that when asking for a simple telnet server in Rust, it was hallucinating like crazy, but when I went to the documentation it was clear the API was changing a lot from version to version. I don't think they do well with API churn. That's my hypothesis, anyway.</p>
]]></description><pubDate>Thu, 21 Aug 2025 16:44:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44974911</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=44974911</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44974911</guid></item><item><title><![CDATA[New comment by lubesGordi in "AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'"]]></title><description><![CDATA[
<p>I know, I don't understand what problems people are having with getting usable code. Maybe the models don't work well with certain languages? They work great with C++. I've gotten thousands of lines of obviously correct code that compiled cleanly on the first try from ChatGPT, Gemini, and Claude.<p>I've been assuming the people who are having issues are junior devs who don't know the vocabulary well enough yet to steer these things in the right direction. I wouldn't say I'm a prompt wizard, but I do understand context and the surface area of the things I'm asking the LLM to do.</p>
]]></description><pubDate>Thu, 21 Aug 2025 15:42:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44974177</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=44974177</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44974177</guid></item><item><title><![CDATA[New comment by lubesGordi in "Live coding interviews measure stress, not coding skills"]]></title><description><![CDATA[
<p>I'd guess that people fail the mentioned test specifically because they forget how to determine if a number is even. How often do you do that in real life? I haven't had to do that professionally in probably my entire career (>10 years), and for the last 10 years I've written C++ nearly every single day. Tell me I can't code because I forget % 2 == 0? Seriously?<p>Being a senior engineer means having confronted a shitload of different minutiae-type problems (network stuff, compiler bugs, language nuances, threading nuances, etc.) and having spent time figuring each one out. Not that senior engineers can reiterate all those minutiae off the top of their head, but it gives them a good intuition for how to solve problems when they hit them, making them significantly more productive than junior devs who have to figure them out from scratch.<p>I don't understand what's so hard about this: testing algorithmic knowledge tests whether the individual studies and implements algorithms. It's very simple. And yes, people have stress reactions in test-type settings (I don't for paper tests in school, but apparently I do for live coding, mostly because I don't know how to implement different algorithms).<p>Stress during interviews is insane. Once I was at the tail end of solving an interview problem, but it came down to multiplying 9 * 3 and my brain wouldn't fucking do it (apparently I can't fucking multiply).</p>
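For the record, the check in question is one line in most languages, which is exactly why it's so easy to blank on under pressure:

```python
def is_even(n):
    """A number is even iff dividing by 2 leaves no remainder."""
    return n % 2 == 0

print(is_even(4))       # True
print(is_even(7))       # False
# The bit-twiddling version works too: the low bit of an even number is 0.
print((8 & 1) == 0)     # True
```

Note the parentheses in the last line: in Python, `==` binds tighter than `&`, so `8 & 1 == 0` would evaluate as `8 & (1 == 0)`, which is the kind of trap that eats interview minutes.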
]]></description><pubDate>Fri, 01 Aug 2025 14:13:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44757145</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=44757145</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44757145</guid></item><item><title><![CDATA[New comment by lubesGordi in "Live coding interviews measure stress, not coding skills"]]></title><description><![CDATA[
<p>Just make the take-home assume that AI is going to be used.</p>
]]></description><pubDate>Fri, 01 Aug 2025 14:03:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44757021</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=44757021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44757021</guid></item><item><title><![CDATA[New comment by lubesGordi in "Improving performance of rav1d video decoder"]]></title><description><![CDATA[
<p>Still, you don't necessarily need dynamic memory allocations if the number of deltas is bounded. In some codecs I could definitely see those having a varying size depending on the amount of change going on in the scene.<p>I'm not a codec developer; I'm only coming at this from an outside, intuitive perspective. Generally, performance-conscious parties want to minimize heap allocations, so I'm interested in how this applies to codec architecture. Codecs seem so complex to me, with so much inscrutable shit going on, but then heap allocations aren't optimized out? It seems like there has to be a very good reason for this.</p>
]]></description><pubDate>Thu, 22 May 2025 15:59:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44063338</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=44063338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44063338</guid></item><item><title><![CDATA[New comment by lubesGordi in "Improving performance of rav1d video decoder"]]></title><description><![CDATA[
<p>See, this is interesting to me. I understand the desire to dynamically allocate buffers at runtime to capture variable-size deltas. That's cool, but also still maybe technically unnecessary? Because, like you say, at 4K it's over 8 MB per frame; you still can't allocate past a limit, so a codec would likely have some bound set on that anyway. Why not just pre-allocate at compile time? For sure this results in a complex data structure. Functionally it could be the same, and we would elide the cost of dynamic memory allocations. What I'm suggesting is probably complex, I'm sure.<p>In any case, I get what you're saying and I understand why codecs are going to be dynamically allocating memory, so thanks for that.</p>
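A minimal sketch of the pre-allocation idea being proposed here (illustrative only, not how rav1d or any real codec is structured): reserve the worst-case buffer once up front, then reuse it every frame instead of allocating per frame.

```python
class FrameDeltaBuffer:
    """Toy fixed-capacity buffer: allocate the worst case once, reuse per frame."""

    def __init__(self, max_deltas):
        self.storage = [None] * max_deltas  # one up-front allocation
        self.count = 0                      # deltas used this frame

    def push(self, delta):
        """Record one delta; the bound chosen at startup is a hard limit."""
        if self.count >= len(self.storage):
            raise OverflowError("exceeded the delta bound set at startup")
        self.storage[self.count] = delta
        self.count += 1

    def reset(self):
        """Start a new frame without freeing or reallocating anything."""
        self.count = 0

buf = FrameDeltaBuffer(max_deltas=4)
buf.push((0, 0, 255))
buf.push((1, 0, 128))
print(buf.count)  # 2
buf.reset()       # next frame reuses the same storage
print(buf.count)  # 0
```

The trade-off the thread is circling: this elides per-frame allocation at the cost of always holding worst-case memory, which is exactly why a codec handling many resolutions might prefer to allocate dynamically.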
]]></description><pubDate>Thu, 22 May 2025 15:54:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=44063296</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=44063296</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44063296</guid></item><item><title><![CDATA[New comment by lubesGordi in "Improving performance of rav1d video decoder"]]></title><description><![CDATA[
<p>Hey, maybe we can discuss why I'm being downvoted? This is a technical discussion and I'm contributing. If you disagree, then say why. I'm not stating anything as fact that isn't fact; I am getting downvoted for asking a question.</p>
]]></description><pubDate>Thu, 22 May 2025 15:39:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44063139</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=44063139</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44063139</guid></item><item><title><![CDATA[New comment by lubesGordi in "Improving performance of rav1d video decoder"]]></title><description><![CDATA[
<p>It's not obvious to me that dynamic state is required. At the end of the day you have a fixed number of pixels on the screen. If every single pixel changes from frame to frame, that should constitute the most work your codec has to do, no? I'm not a codec writer, but that's my intuition, based on the assumption that codecs are basically designed to minimize the amount of 'work' being done from frame to frame.</p>
]]></description><pubDate>Thu, 22 May 2025 13:49:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44062041</link><dc:creator>lubesGordi</dc:creator><comments>https://news.ycombinator.com/item?id=44062041</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44062041</guid></item></channel></rss>