<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: bob1029</title><link>https://news.ycombinator.com/user?id=bob1029</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 23:56:36 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=bob1029" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by bob1029 in "One Brain to Query: Wiring a 60-Person Company into a Single Slack Bot"]]></title><description><![CDATA[
<p>If you tell GPT5.x that there is a database it can query by calling ExecuteSql(query), but you don't bother explaining anything about the schema, it will try to figure things out ad hoc. This has advantages for token budget because it will tend to look up metadata only for the tables that seem relevant to the user's query.<p>If you have a gigantic data warehouse with 1000+ tables, there's no way you could fit all of that info into a system prompt without completely jacking something up in the black box. So why bother trying?<p>Consider that the user's specific request serves as an additional constraint you can use to your advantage to dramatically reduce the search space. Building a single prompt / schema description that will magically work for all potential user requests is a cursed mission by comparison.</p>
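A minimal sketch of the pattern, with sqlite3 standing in for the warehouse; the tool name ExecuteSql and the table here are illustrative, not any vendor's exact API:

```python
import json
import sqlite3

# Stand-in warehouse; imagine 1000+ tables here instead of one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
    INSERT INTO orders (customer, total) VALUES ('acme', 19.99), ('acme', 5.00);
""")

def execute_sql(query: str) -> str:
    """The single tool the model sees. Errors come back as text so the
    model can read them and self-correct instead of the run dying."""
    try:
        return json.dumps(conn.execute(query).fetchall())
    except sqlite3.Error as e:
        return f"SQL error: {e}"

# No schema document anywhere: the model is expected to probe metadata
# on demand, starting with something like this.
discovery = execute_sql("SELECT name FROM sqlite_master WHERE type = 'table'")
```

The interesting property is that the error path is part of the interface: a wrong guess about a column name produces feedback the model can act on, which substitutes for up-front schema prose.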
]]></description><pubDate>Thu, 09 Apr 2026 20:14:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709204</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47709204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709204</guid></item><item><title><![CDATA[New comment by bob1029 in "ChatGPT Pro now starts at $100/month"]]></title><description><![CDATA[
<p>The model can call a copy of itself as a tool (i.e., we maintain actual stack frames in the hosting layer). Explicit tools are made available: Call(prompt) & Return(result).<p>The user's conversation happens at level 0. Any actual tool use is permitted only at stack depths > 0. When the model calls the Return tool at stack depth 0, we end that logical turn of conversation, and the argument to the tool is presented to the user. The user can then continue the conversation if desired, with all prior top-level conversation available in scope.<p>It's effectively the exact same experience as ChatGPT, except that each time the user types a message, an entire depth-first search process kicks off that can take several minutes to complete.</p>
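A toy sketch of that stack-frame discipline. The model here is a stub function; a real host would hit a completions API at each step, with Call and Return exposed as the only tools:

```python
def run(model, prompt, depth=0):
    """One logical frame. Call pushes a child frame (depth-first);
    Return pops this one. Only the depth-0 Return reaches the user."""
    context = [prompt]
    while True:
        action, arg = model(context, depth)
        if action == "Call":
            # Recurse; the child's Return value lands back in this frame.
            context.append(run(model, arg, depth + 1))
        else:  # "Return"
            return arg

def stub_model(context, depth):
    # Fake policy: the top frame delegates once, every other frame answers.
    if depth == 0 and len(context) == 1:
        return ("Call", "subtask of " + context[0])
    return ("Return", "done(" + context[-1] + ")")

answer = run(stub_model, "user message")
```

Because `run` only returns when its own frame's Return fires, the whole subtree under a Call completes before the parent sees anything, which is exactly the depth-first behavior described above.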
]]></description><pubDate>Thu, 09 Apr 2026 20:04:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47709047</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47709047</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47709047</guid></item><item><title><![CDATA[New comment by bob1029 in "ChatGPT Pro now starts at $100/month"]]></title><description><![CDATA[
<p>GPT5.4 with any effort level is scary when you combine it with tricks like symbolic recursion. I actually had to <i>reduce</i> the effort level to get the model to stop trying to one-shot everything. I struggled to come up with BS test cases it couldn't dunk in some clever way. Turning down the reasoning effort made it explore the space better.</p>
]]></description><pubDate>Thu, 09 Apr 2026 18:53:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47708044</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47708044</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47708044</guid></item><item><title><![CDATA[New comment by bob1029 in "Small Engines"]]></title><description><![CDATA[
<p>I've got a Honeywell digital controller on my water heater. It's powered by the thermocouple. It can make troubleshooting a lot easier because it has flashing lights for diagnostics.</p>
]]></description><pubDate>Thu, 09 Apr 2026 18:34:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47707710</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47707710</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47707710</guid></item><item><title><![CDATA[New comment by bob1029 in "One Brain to Query: Wiring a 60-Person Company into a Single Slack Bot"]]></title><description><![CDATA[
<p>> The bot took two and a half weeks to build; the data infrastructure under it took two years.<p>This is the key lesson that everyone needs to step back and pay attention to here. The data is still king. If you have a clean relational database that contains <i>all</i> of your enterprise's information, pointing a modern LLM (i.e., late 2025+) at it without any further guidance often yields very good outcomes. Outcomes that would have genuinely shocked me as recently as six months ago.<p>I am finding that 100 tables exposed as 1 tool performs significantly better than 100 tables exposed as 10~100 tools. Any time you find yourself tempted to patch things with more system prompt tokens or additional tools, you should push yourself to solve things in other ways. More targeted & detailed error feedback from existing tools often goes a lot further than additional lines of aggressively worded prose.<p>I think one big fat SQL database is probably getting close to the best possible way to organize everything for an agent to consume. I am not going to die on any specific vendor's hill, but SQL in general is such a competent solution to the problem of incrementally revealing the domain knowledge to the agent. You can even incrementalize the schema description process itself by way of the system tables. Intentionally <i>not</i> providing a schema description tool/document/prompt seems to perform better with the latest models than the other way around.</p>
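As a sketch of incrementalizing schema discovery through the system tables (sqlite3 standing in for the real warehouse), the agent's first two queries typically look like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE invoices (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

# Query 1: enumerate tables -- cheap, and the user's question prunes
# which of the names are worth a second look.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

# Query 2: pull DDL only for the table that looks relevant, so tokens go
# to the handful of tables the question touches, never the whole schema.
ddl = conn.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'invoices'").fetchone()[0]
```

Other engines expose the same capability through `information_schema` or catalog views; the point is that the database itself answers schema questions, one table at a time, instead of a schema document answering all of them at once.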
]]></description><pubDate>Thu, 09 Apr 2026 16:45:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47705944</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47705944</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47705944</guid></item><item><title><![CDATA[New comment by bob1029 in "Small Engines"]]></title><description><![CDATA[
<p>> could we make a 2 inch diameter turbine engine reliably?<p><a href="https://en.wikipedia.org/wiki/Capstone_Green_Energy" rel="nofollow">https://en.wikipedia.org/wiki/Capstone_Green_Energy</a></p>
]]></description><pubDate>Thu, 09 Apr 2026 15:14:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47704842</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47704842</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47704842</guid></item><item><title><![CDATA[New comment by bob1029 in "Claude Managed Agents Overview"]]></title><description><![CDATA[
<p>Custom agents using the low-level completion APIs tend to outperform these generic tools, especially when you are working with complex problems.<p>It's hard to beat domain-specific code. I can avoid massive prompts and token bloat if my execution environment, tools, and error feedback provide effectively the same constraints.<p>If I had to pick only one tool for a generic agent to use, it would definitely be ExecuteSqlQuery (or a superset like ExecuteShell). If you gave me an agent framework and that were all it could do, I'd probably be OK for quite a while. SQL can absorb the domain-specific concerns quite well. Consider that tool definitions also consume tokens.</p>
]]></description><pubDate>Thu, 09 Apr 2026 07:13:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47700302</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47700302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47700302</guid></item><item><title><![CDATA[New comment by bob1029 in "US cities are axing Flock Safety surveillance technology"]]></title><description><![CDATA[
<p>> no choice but to accept its installation<p>You might be shocked to discover there are subdivisions so affluent they can afford physical armed security and access control structures with far more invasive identification and logging procedures.</p>
]]></description><pubDate>Wed, 08 Apr 2026 14:22:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47690647</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47690647</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47690647</guid></item><item><title><![CDATA[New comment by bob1029 in "Git commands I run before reading any code"]]></title><description><![CDATA[
<p>And in every codebase I've been in charge of, each PR has one or more issue #s linked that describe, in agonizing detail, everything behind that work.<p>I understand this isn't in line with traditional git SCM, but it's a very powerful workflow if you are OK with some hybridization.</p>
]]></description><pubDate>Wed, 08 Apr 2026 13:41:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47690115</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47690115</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47690115</guid></item><item><title><![CDATA[New comment by bob1029 in "SQLite in Production: Lessons from Running a Store on a Single File"]]></title><description><![CDATA[
<p>I agree these exist but I don't know about "most". Licensing costs are almost always a drop in the bucket compared to things like your salary.</p>
]]></description><pubDate>Wed, 08 Apr 2026 13:22:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47689886</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47689886</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47689886</guid></item><item><title><![CDATA[New comment by bob1029 in "Protect your shed"]]></title><description><![CDATA[
<p>I'm actually grateful (today) for the lightning strike that nuked my old pile of servers at home. It freed me from the whole thing in one step. I was completely disabused of any notion that I had control over anything at that point.<p>You might think you are protected with UPSes and whatnot, but nothing will stop the electromagnetic effects if it hits within a few feet. Every piece of copper is going to get lit up. No solution is 100% guaranteed here, but EC2 with snapshots is a hell of a lot more likely to survive a single event like that.</p>
]]></description><pubDate>Wed, 08 Apr 2026 09:39:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47687724</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47687724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47687724</guid></item><item><title><![CDATA[New comment by bob1029 in "SQLite in Production: Lessons from Running a Store on a Single File"]]></title><description><![CDATA[
<p>I think SQLite is fantastic, but it does start to fall apart at the edges sometimes.<p>What is more interesting to me is the fact that everyone seems to think of Postgres as the obvious alternative to SQLite. It is certainly <i>an</i> alternative. For me, the most opposite thing to SQLite is something like Oracle or MSSQL.<p>The complexity being relatively constant is the part I care about most here. Running a paid, COTS database engine on a blessed OS tends to be a little bit easier than an OSS solution that can run on toasters and drones, especially if you are using replication, high availability, etc.<p>The business liability coverage seems to track proportionally with how much money you spend on the solution. SQLite accordingly offers zero guarantees. You don't have a support contract or an account manager you can get upset with. Depending on the nature of the business, this could be preferable or adverse. It really depends.<p>For seriously regulated businesses with oppressive audit cycles, SQLite trends more toward liability than asset if it's being used as a system of record. That it merely works and performs well is often not sufficient for acceptance. I'm not saying that Postgres isn't capable of passing an intense audit, but I am saying that it might be easier to pass one if you used MSSQL. The cost of having your staff tied up with compliance should be considered when making technology choices in relevant businesses.</p>
]]></description><pubDate>Wed, 08 Apr 2026 08:58:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47687327</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47687327</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47687327</guid></item><item><title><![CDATA[New comment by bob1029 in "Breaking the console: a brief history of video game security"]]></title><description><![CDATA[
<p>The modern consoles are pretty close to perfect with how they use PKI and certificates. Even if you clone the cryptographic identity of a valid console, the vendor can quickly detect this impossible access scenario.</p>
]]></description><pubDate>Tue, 07 Apr 2026 14:25:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47675915</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47675915</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675915</guid></item><item><title><![CDATA[New comment by bob1029 in "Every GPU That Mattered"]]></title><description><![CDATA[
<p>The 8800 GT is easily the most impactful GPU in my mind. The combination of that video card with Valve's Orange Box was an insane value proposition at the time.<p>I'd put the 5700 XT at #2 for being the longest-lived GPU I've owned, by a very wide margin. It's still in use today.</p>
]]></description><pubDate>Tue, 07 Apr 2026 09:06:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47672497</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47672497</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47672497</guid></item><item><title><![CDATA[New comment by bob1029 in "People Love to Work Hard"]]></title><description><![CDATA[
<p>I struggle with how little room this leaves for the possibility that others exist who have different preferences and experiences.<p>I enjoy working hard on problems that most would label as miserable. Not just computer problems. The money is genuinely not that important to me. A Porsche would only get me into trouble faster. I don't desire that kind of status-game energy in my life anymore. I used to, but not anymore. I've been inside these fancy houses. It's not appealing to me at all. Infrasonic room modes are not my jam.<p>Solving problems in 2 days that an entire development team couldn't solve in 2 months is far more of an adrenaline rush for me. No amount of money can buy this. You have to bust your ass and live it every day to be able to do it. Disrupting an entire entrenched power hierarchy with one cheeky pull request is peak happiness for me. I enjoy getting management riled up regarding the apparent productivity disparity between their full-time W2 employment pool and the one cowboy 1099 who works barely 3 hours per week.</p>
]]></description><pubDate>Tue, 07 Apr 2026 07:02:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47671645</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47671645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47671645</guid></item><item><title><![CDATA[New comment by bob1029 in "Claude Code Down"]]></title><description><![CDATA[
<p>I tried the agnostic thing for a while, but there are enough quirks between the providers that I gave up trying to normalize it. GPT5.x wipes the floor with other models for my specific tool-calling scenarios. I am not going to waste time trying to bridge arbitrary and evolving gaps between providers.<p>I put my Amex details into OAI, I get tokens, it just works. I really don't understand what the hell is going on with Claude. The $200/mo thing is so confusing to me. I'd rather just go buy however many tokens I plan to use. $200 worth of OAI tokens would go a really long way for me (much longer than a month), but perhaps I am holding it wrong.</p>
]]></description><pubDate>Mon, 06 Apr 2026 16:11:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47662810</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47662810</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47662810</guid></item><item><title><![CDATA[New comment by bob1029 in "The end of password pain: building frictionless authentication at the Guardian"]]></title><description><![CDATA[
<p>Not specifically but it's the same idea. CIMD is perhaps one step too far for the cases I've worked with. We seem to prefer an out-of-band process for establishing trust. Two CTOs exchanging FQDNs at lunch is a fairly robust model.</p>
]]></description><pubDate>Mon, 06 Apr 2026 14:18:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47661248</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47661248</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47661248</guid></item><item><title><![CDATA[New comment by bob1029 in "What being ripped off taught me"]]></title><description><![CDATA[
<p>I've started operating in really granular units of work. Like less than $1000 per. Cash on delivery. This won't work with all clients and all jobs, but there are places where it does work very well. Advantages include being able to avoid paper contracts altogether. Verbal agreements and a 4-column xlsx that is reviewed monthly are all that seem to be required with some of my clients.<p>If I don't get paid for one day of work, I will probably get over it in a few hours. If I don't get paid for six months of work, we will have a serious problem. The tighter and more incremental we can make the delivery process, the less likely anyone gets screwed.<p>If a party is pushing hard for long-term contracts or large up-front sums of payment, I would walk away from that transaction unless there was a literal golden goose sitting in their lap.</p>
]]></description><pubDate>Mon, 06 Apr 2026 13:44:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47660854</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47660854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47660854</guid></item><item><title><![CDATA[New comment by bob1029 in "The end of password pain: building frictionless authentication at the Guardian"]]></title><description><![CDATA[
<p>I've been enjoying modern machine-to-machine flows. Trading trusted URLs for client ids is a really secure model, especially if you go the extra mile with role-based machine auth to cloud key stores. You can do the entire thing without a single secret string. I'd much rather prove I can control a URL than ensure a piece of information never leaks out.</p>
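One concrete shape of "no secret string" is the OAuth 2.0 client-credentials grant with a JWT client assertion (RFC 7523): the client proves itself with a signed JWT whose public key the IdP fetches from a URL the client controls, instead of presenting a shared secret. A sketch of the token request body, with the client id and JWT as placeholders:

```python
from urllib.parse import urlencode

def token_request_body(client_id: str, assertion_jwt: str) -> str:
    """Form body for the POST to the IdP's token endpoint. Note there is
    no client_secret field: trust comes from assertion_jwt, verified
    against the key material published at the client's registered URL."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        # Standard assertion type URN from RFC 7523.
        "client_assertion_type":
            "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": assertion_jwt,
    })

body = token_request_body("client-123", "placeholder.signed.jwt")
```

In managed-identity setups, the cloud platform mints `assertion_jwt` for the workload automatically, so no credential material ever sits in config at all.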
]]></description><pubDate>Mon, 06 Apr 2026 08:48:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47658384</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47658384</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47658384</guid></item><item><title><![CDATA[New comment by bob1029 in "Microsoft hasn't had a coherent GUI strategy since Petzold"]]></title><description><![CDATA[
<p>WinForms is still compelling to me. Now that we have WebView2, building complex hybrid apps is trivial. I know I could easily go pure web, but something about having proper native chrome feels better as a user. All of my clients/users are on Windows, so I stopped fighting this battle years ago. I've recently been using .NET 10 + WinForms + WebView2 to build custom AI assistants for others. I cannot imagine how much it would suck to iterate on the presentation of a conversation history in pure WinForms. The hybrid stuff is crazy productive.</p>
]]></description><pubDate>Mon, 06 Apr 2026 08:24:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47658251</link><dc:creator>bob1029</dc:creator><comments>https://news.ycombinator.com/item?id=47658251</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47658251</guid></item></channel></rss>