<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: eclark</title><link>https://news.ycombinator.com/user?id=eclark</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 08:56:24 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=eclark" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by eclark in "Refuse to let your doctor record you"]]></title><description><![CDATA[
<p>Be careful with initial impressions of metrics. We humans have a heavy tendency to anchor to our first judgments or impressions. We see a win and assume the win is long-term, has no downsides, and was caused by the new information/change.<p>Combine that with the Hawthorne effect, and new business or health initiatives can look great simply because participants notice the change and the increased attention. However, many human patterns regress to the mean.<p>Personally, I have seen this a lot with developer tools and DevOps. A new SEV/incident/disaster happens and everyone rushes to create or onboard a tool that would help. Around the office everyone raves about it and is sure that it will fix all the issues. The number of commits goes up, or the number of SEVs in an area decreases for a while; people were paying attention. After a while the tool starts to slow down or fall out of use. It has rough edges that weren't seen, or scenarios that were supposed to be supported never get fully integrated. Eventually the patterns regress, but with more tools and more complexity.<p>- <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC1936999/" rel="nofollow">https://pmc.ncbi.nlm.nih.gov/articles/PMC1936999/</a><p>- <a href="https://arxiv.org/abs/2102.12893" rel="nofollow">https://arxiv.org/abs/2102.12893</a></p>
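A toy simulation (all numbers invented) of the regression-to-the-mean effect described above: weekly incident counts are pure noise around a fixed mean, yet the week after any "crisis" week looks like a big improvement even though nothing about the process changed.

```python
import random

random.seed(7)

# Hypothetical team: weekly incident counts are pure noise around a
# fixed mean of 10 -- nothing about the process ever changes.
weeks = [random.gauss(10, 3) for _ in range(10_000)]

# "Crisis" weeks: the unusually bad weeks, exactly when a new tool
# would get adopted.
crisis = [i for i, w in enumerate(weeks[:-1]) if w > 15]

during = sum(weeks[i] for i in crisis) / len(crisis)
after = sum(weeks[i + 1] for i in crisis) / len(crisis)

print(f"mean during crisis weeks: {during:.1f}")
print(f"mean one week later:      {after:.1f}")  # falls back toward 10
```

Any tool rolled out during those crisis weeks would appear to cut incidents sharply, purely by selection on noise.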
]]></description><pubDate>Fri, 24 Apr 2026 16:37:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47892552</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=47892552</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47892552</guid></item><item><title><![CDATA[New comment by eclark in "Poker Tournament for LLMs"]]></title><description><![CDATA[
<p>Early-game bluffs are essentially lies that you keep telling through the rest of the streets. To keep your opponents from knowing when you have premium starting hands, you are required to sometimes play a range as if it were a different range. E.g., 10% of the time I will bluff and act like I have AK, KK, AA, QQ. On the next street I need to continue that; otherwise it becomes unprofitable (opponents only need to wait one bet to know if I am bluffing). I have to evolve the lie as well. If cards come out that make my story more or less likely/profitable/possible, then I need to adjust the lie, not revert to the truth or the opponent's truth.<p>As evidence that LLMs aren't capable of this, I present all of the prompt jailbreaks that rely on repeated admonitions. That makes sense if you think about the training data: there's not a lot of human writing that takes a fact and then confidently asserts the opposite as evidence mounts.<p>LLMs produce the most likely response from the input embeddings. Almost always, the easiest continuation is a next token that agrees with the other tokens in the sequence. The problem in poker is that a good number of the tokens in the sequence are masked and/or controlled by a villain who is actively trying to deceive.<p>Also, notice that I'm careful to say LLMs and not generalize to all attention-head + MLP models, as attention with softmax and dot product is a good universal function approximator. Instead, it's the large-language-model part that makes these models a poor fit for poker. Human text doesn't have a latent space that's written about thoroughly enough to have poker solved in there.</p>
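A bluffing frequency like the "10% of the time" above isn't arbitrary; it comes from standard indifference algebra: the fraction of a betting range that can be bluffs before the opponent's call becomes profitable. A minimal sketch (function name is my own):

```python
# Caller risks `bet` to win `pot + bet`. If fraction f of the betting
# range is bluffs, the call's EV is f*(pot + bet) - (1 - f)*bet.
# Setting that to zero gives the break-even bluffing frequency.

def breakeven_bluff_freq(bet, pot):
    """Largest bluff fraction that keeps a call at zero EV."""
    return bet / (pot + 2 * bet)

print(breakeven_bluff_freq(10, 10))  # pot-size bet: 1/3 of range can be bluffs
print(breakeven_bluff_freq(5, 10))   # half-pot bet: 1/4
```

The point of the comment stands: executing this requires committing to the lie across streets, not just knowing the number.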
]]></description><pubDate>Wed, 29 Oct 2025 19:29:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45751830</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=45751830</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45751830</guid></item><item><title><![CDATA[New comment by eclark in "Poker Tournament for LLMs"]]></title><description><![CDATA[
<p>No, the widths are not wide enough to explore. The number of possible game states can easily explode beyond the number of atoms in the universe, especially with deep stacks and small big blinds.<p>For example, consider computing the counterfactual tree for 9-way preflop. Each of the 9 players has up to 6 different times they can be asked to act (seat 0 bets, seat 1 min-raises, seat 2 calls, back to seat 0 who min-raises, seat 1 calls, seat 2 min-raises, etc.). Each of those actions can be check, fold, bet the min, raise the min (starting blinds of 100 are pretty high already), raise one more than the min, raise two more than the min, ... raise all-in (with up to a million chips).<p>That's on the order of (number of legal bet sizes) ^ (6 actions per player × 9 players), and that's just for preflop. Then there's the flop, turn, river, and showdown. Now imagine that we also have to simulate which cards players hold and the order the cards come out on the streets (which greatly changes the value of the pot).<p>As for LLMs being great at range stats, I would point you to the latest research from UChicago: text-trained LLMs are horrible at multiplication. Try getting any of them to multiply an arbitrary number by e or pi. <a href="https://computerscience.uchicago.edu/news/why-cant-powerful-llms-learn-multiplication/" rel="nofollow">https://computerscience.uchicago.edu/news/why-cant-powerful-...</a><p>Don't get me wrong, though. Masked attention and sequence-based context models are going to be critical to machines solving hidden-information problems like this. But large language models trained on the web crawl and The Stack, with text input, will not be those models.</p>
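A back-of-envelope version of the counting argument above. The numbers follow the comment's setup (million-chip stacks, 100-chip blinds, up to 6 actions per player, 9 players); the "atoms in the universe" figure is the usual ~10^80 estimate.

```python
# Rough lower bound on preflop action sequences in a 9-handed
# no-limit game. All numbers illustrative.

chips = 1_000_000
min_increment = 100                   # smallest legal raise step
bet_sizes = chips // min_increment    # ~10,000 distinct raise amounts
decision_points = 6 * 9               # 6 actions per player, 9 players

states = bet_sizes ** decision_points
atoms_in_universe = 10 ** 80

print(f"~10^{len(str(states)) - 1} preflop action sequences")
print(states > atoms_in_universe)  # True, by over 130 orders of magnitude
```

And that count ignores card deals and every postflop street, each of which multiplies it further.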
]]></description><pubDate>Tue, 28 Oct 2025 18:19:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45736702</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=45736702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45736702</guid></item><item><title><![CDATA[New comment by eclark in "Poker Tournament for LLMs"]]></title><description><![CDATA[
<p>> Why wouldn't something like an RL environment allow them to specialize in poker playing, gaining those skills as necessary to increase score in that environment?<p>I think an RL environment is needed to solve poker with an ML model. I also think that, as with chess, you need the model to do some approximate work. General-purpose LLMs trained on a text corpus are bad at math, bad at accuracy, and struggle to stay on task while exploring.<p>So a purpose-built model with a purpose-built exploration harness is likely needed. I've built the basis of an RL-like environment, and the basis of learning agents, in Rust for poker. Next steps to come.</p>
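A stripped-down sketch of what such an environment's surface might look like. The Gym-style reset/step names are the usual RL convention, not rs-poker's actual API, and the toy game is invented: the agent sees only its own card, never the opponent's, which is the hidden-information difficulty the comment describes.

```python
import random

class TinyShowdownEnv:
    """One-decision toy game with hidden information: fold (0) or
    call (1) holding one card, against an unseen opponent card."""

    def reset(self):
        self.my_card, self.opp_card = random.sample(range(10), 2)
        return self.my_card  # observation deliberately hides opp_card

    def step(self, action):
        if action == 0:
            return -1.0                 # fold: small fixed loss
        return 2.0 if self.my_card > self.opp_card else -2.0

random.seed(0)
env = TinyShowdownEnv()

# Placeholder policy (no learning yet): call with the top half.
total = 0.0
for _ in range(10_000):
    obs = env.reset()
    total += env.step(1 if obs >= 5 else 0)
print(f"avg reward: {total / 10_000:.3f}")
```

A learning agent would replace the threshold rule, and a real poker env would return observations and legal actions per street; the hidden `opp_card` is what makes plain search insufficient.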
]]></description><pubDate>Tue, 28 Oct 2025 15:23:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=45734062</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=45734062</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45734062</guid></item><item><title><![CDATA[New comment by eclark in "Poker Tournament for LLMs"]]></title><description><![CDATA[
<p>To play GTO currently, you need to play hand ranges. (For example, when looking at a hand I would think: I could have AKs-ATs, QQ-99, and he/she could have JT-98s, 99-44, so my next move will act like I have strength and they don't, because the board doesn't contain any low cards.) We have to do this because you can't always bet 4x pot when you have aces; otherwise the opponents will always know your hand strength directly.<p>LLMs aren't capable of this deception. They can't be told that they have one thing, pretend they have something else, and then revert to ground truth. Their eager nature with large context leads to them getting confused.<p>On top of that, there's a lot of precise math. In no-limit the bets are not capped, so you can bet 9.2 big blinds in a spot. That could be profitable because your opponents will call and lose (e.g., the players willing to pay that sometimes have hands that you can beat). However, betting 9.8 big blinds might be enough to scare off the good hands. So there's a lot of probability math with multiplication.<p>Deep math with multiplication and accuracy is not the forte of LLMs.</p>
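A toy EV calculation (all probabilities invented) showing how the 9.2bb vs. 9.8bb example above can play out: a slightly bigger bet is worth less when it folds out the hands that would have paid off the smaller one.

```python
def ev(bet_bb, pot_bb, p_call, p_win_when_called):
    """EV in big blinds of betting `bet_bb` into `pot_bb`.
    If called, we win pot+bet with prob p_win_when_called and lose
    the bet otherwise; if they fold, we take the pot."""
    called = p_win_when_called * (pot_bb + bet_bb) - (1 - p_win_when_called) * bet_bb
    folded = pot_bb
    return p_call * called + (1 - p_call) * folded

# At 9.2bb the worse hands still call; at 9.8bb most of them fold,
# and the hands that do call are stronger (so we win less often).
print(ev(9.2, 10, p_call=0.6, p_win_when_called=0.65))  # ~9.56 bb
print(ev(9.8, 10, p_call=0.3, p_win_when_called=0.45))  # ~8.06 bb
```

Finding the bet size that maximizes this, across a whole range and against an adapting opponent, is the precise multiplication-heavy math the comment is pointing at.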
]]></description><pubDate>Tue, 28 Oct 2025 15:18:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45733995</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=45733995</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45733995</guid></item><item><title><![CDATA[New comment by eclark in "Poker Tournament for LLMs"]]></title><description><![CDATA[
<p>No, it's far from trivial, for three reasons.<p>First is the hidden information: you don't know your opponents' holdings; that is to say, everyone in the game has a different information set.<p>Second, there's a variable number of players in the game at any time. Heads-up games are close to solved. Mid-size ring games have had some decent attempts made. Full ring with 9 players is hard, and academic papers on it are sparse.<p>Third is the number of potential actions. In no-limit games there are a lot of potential actions, as you can bet in small decimal increments of a big blind. Betting 4.4 big blinds could be correct and profitable while betting 4.9 big blinds could be losing, so there's a lot to explore.</p>
]]></description><pubDate>Tue, 28 Oct 2025 15:08:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45733840</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=45733840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45733840</guid></item><item><title><![CDATA[New comment by eclark in "Poker Tournament for LLMs"]]></title><description><![CDATA[
<p>Text-trained LLMs are likely not a good solution for optimal play: just as in chess, the position changes too much, there's too much exploration, and too much accuracy is needed.<p>CFR is still the best. However, as in chess, we need a network that can help evaluate the position. Unlike chess, the hard part isn't knowing a value; it's knowing what the current game position is. For that, we need something unique.<p>I'm pretty convinced that this is solvable. I've been working on rs-poker for quite a while. Right now we have a whole multi-handed arena implemented, and a multi-threaded counterfactual regret framework with no memory fragmentation and good cache coherency.<p>With BERT and some clever sequence encoding we can create a powerful agent. If anyone is interested, my email is: elliott.neil.clark@gmail.com</p>
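For the curious, the core update inside CFR is regret matching. A minimal sketch (not rs-poker code) on rock-paper-scissors against a fixed, exploitable opponent; real CFR runs this same update at every information set of the game tree.

```python
# Regret matching: mix actions in proportion to accumulated positive
# regret; the time-averaged strategy is what converges.

ROCK, PAPER, SCISSORS = 0, 1, 2
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[me][opp]

def strategy(regrets):
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / 3] * 3

opp = [0.4, 0.3, 0.3]            # opponent over-plays rock
regrets = [0.0, 0.0, 0.0]
strat_sum = [0.0, 0.0, 0.0]

for _ in range(10_000):
    strat = strategy(regrets)
    strat_sum = [s + p for s, p in zip(strat_sum, strat)]
    # Expected utility of each pure action, and of the current mix.
    util = [sum(PAYOFF[a][b] * opp[b] for b in range(3)) for a in range(3)]
    node_util = sum(strat[a] * util[a] for a in range(3))
    regrets = [r + u - node_util for r, u in zip(regrets, util)]

avg = [s / sum(strat_sum) for s in strat_sum]
print([round(p, 3) for p in avg])  # converges on PAPER, the best response
```

Against a fixed opponent this converges to the best response; in self-play on a zero-sum game, the average strategies approach a Nash equilibrium, which is the whole trick behind CFR.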
]]></description><pubDate>Tue, 28 Oct 2025 14:57:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45733698</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=45733698</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45733698</guid></item><item><title><![CDATA[New comment by eclark in "Poker Tournament for LLMs"]]></title><description><![CDATA[
<p>They would need to lie, which they can't currently do. Our current best approximation of optimal play involves ranges: thinking about your hand as being any one of a number of holdings, then imagining the combinations of those hands and deciding what you would do with each. That process of exploration by imagination doesn't work with an eager LLM using a huge encoded context.</p>
]]></description><pubDate>Tue, 28 Oct 2025 14:51:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45733632</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=45733632</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45733632</guid></item><item><title><![CDATA[New comment by eclark in "Poker Tournament for LLMs"]]></title><description><![CDATA[
<p>I am the author/maintainer of rs-poker ( <a href="https://github.com/elliottneilclark/rs-poker" rel="nofollow">https://github.com/elliottneilclark/rs-poker</a> ). I've been working on algorithmic poker for quite a while. This isn't the way to do it. LLMs would need to be able to do math, lie, and be random, none of which they can currently do.<p>We know how to compute the best moves in poker. It's computationally challenging, though: the more choices and players there are, the harder it gets, which is why most attempts only even try heads-up.<p>With all that said, I do think there's a way to use attention and BERT to solve poker (when trained on non-text sequences). We need a better corpus of games and some training time on unique models. If anyone is interested, my email is elliott.neil.clark @ gmail.com</p>
]]></description><pubDate>Tue, 28 Oct 2025 14:48:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45733585</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=45733585</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45733585</guid></item><item><title><![CDATA[New comment by eclark in "Flightcontrol: A PaaS that deploys to your AWS account"]]></title><description><![CDATA[
<p>I think 'Batteries Included' would interest you, then. Like this, it's installable on AWS. It's a whole platform (PaaS + AI + more) built on open source. Kubernetes is at the core, but with tons of automation and UI. Dev environments are Kubernetes in Docker (Kind-based).<p>- <a href="https://github.com/batteries-included/batteries-included/" rel="nofollow">https://github.com/batteries-included/batteries-included/</a>
- <a href="https://www.batteriesincl.com/" rel="nofollow">https://www.batteriesincl.com/</a></p>
]]></description><pubDate>Mon, 06 Oct 2025 14:07:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=45491622</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=45491622</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45491622</guid></item><item><title><![CDATA[New comment by eclark in "DJ With Apple Music launches to enable subscribers to mix their own sets"]]></title><description><![CDATA[
<p>> The feature is integrated with DJ software and hardware platforms AlphaTheta<p>They called out AlphaTheta, so here's hoping it is. That would make my decision to move off Spotify for personal streaming even easier.</p>
]]></description><pubDate>Thu, 27 Mar 2025 01:55:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=43489659</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=43489659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43489659</guid></item><item><title><![CDATA[New comment by eclark in "Meta’s Hyperscale Infrastructure: Overview and Insights"]]></title><description><![CDATA[
<p>Thanks! They have built an impressive business and tool.</p>
]]></description><pubDate>Tue, 11 Feb 2025 17:55:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=43015847</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=43015847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43015847</guid></item><item><title><![CDATA[New comment by eclark in "Meta’s Hyperscale Infrastructure: Overview and Insights"]]></title><description><![CDATA[
<p>While I was at FB (it wasn't Meta then), I saw what a superpower the infrastructure there is. Product engineers build things at scale in days. While I was there, I got to be tech lead for several different teams (2x distributed DBs, 1x Dev Efficiency, 1x Ads), some of which are called out by name here.<p>Shout out to the HBase and ZippyDB teams! This is the first public acknowledgment I've seen of the convergence on ZippyDB.<p>It's also super cool to see the Developer Efficiency pushes called out. 10,000 services pushed daily, or on every commit, is so impressive.<p>When I left FB, I couldn't find anything close. So I'm building the infra that I was missing, as a startup: Batteries Included. <a href="https://www.batteriesincl.com/" rel="nofollow">https://www.batteriesincl.com/</a>  <a href="https://github.com/batteries-included/batteries-included/">https://github.com/batteries-included/batteries-included/</a></p>
]]></description><pubDate>Tue, 11 Feb 2025 16:33:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=43014829</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=43014829</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43014829</guid></item><item><title><![CDATA[Contextual Information Makes Platforms More Stable]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.batteriesincl.com/posts/context-ui-for-stability">https://www.batteriesincl.com/posts/context-ui-for-stability</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43005918">https://news.ycombinator.com/item?id=43005918</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 10 Feb 2025 22:09:26 +0000</pubDate><link>https://www.batteriesincl.com/posts/context-ui-for-stability</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=43005918</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43005918</guid></item><item><title><![CDATA[New comment by eclark in "Ask HN: Has anyone tried alternative company models (like a co-op) for SaaS?"]]></title><description><![CDATA[
<p>I work on a startup where the entire self-hosted SaaS is permissively licensed.<p><a href="https://github.com/batteries-included/batteries-included">https://github.com/batteries-included/batteries-included</a>
<a href="https://www.batteriesincl.com/" rel="nofollow">https://www.batteriesincl.com/</a>
<a href="https://www.batteriesincl.com/LICENSE-1.0" rel="nofollow">https://www.batteriesincl.com/LICENSE-1.0</a><p>I started the company because I wanted to give smaller enterprises the infrastructure team that FAANG companies have. Most of the best infrastructure is open source but too complicated to use or maintain. So we've built a full platform that will run on any Kubernetes cluster, giving a company push-button infrastructure with everything built on open source. You get Heroku with single sign-on and no CLI needed, or a full RAG stack with model hosting on your EKS cluster.<p>Since most of the services and projects we're building on top of are open source, we wanted to give the code to the world while being sustainable as a team in the long term. I had also been a part of Cloudera, and I had seen the havoc that open core wreaked on the long-term success of Hadoop. So I wanted something different for licensing. We ended up with a license that somewhat resembles the FSL but fixes what is (in my opinion) its major problem: we don't use a competing-use clause, instead opting for a total-install-size requirement.<p>I'm happy to chat with anyone about this; my email is in my profile. Good luck, and I hope it works for you.</p>
]]></description><pubDate>Sat, 18 Jan 2025 17:15:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=42749758</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=42749758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42749758</guid></item><item><title><![CDATA[New comment by eclark in "The Red Beads Experiment (2019)"]]></title><description><![CDATA[
<p>I was at a conference where this was presented by John <a href="https://www.amazon.com/Journey-Profound-Knowledge-Altered-Industry/dp/1950508838" rel="nofollow">https://www.amazon.com/Journey-Profound-Knowledge-Altered-In...</a><p>It’s a fun little eye-opener that starts conversations. I wish more of those conversations ended up moving decision-makers.</p>
]]></description><pubDate>Tue, 17 Dec 2024 07:19:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42439112</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=42439112</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42439112</guid></item><item><title><![CDATA[New comment by eclark in "Open-Source Software Is in Crisis"]]></title><description><![CDATA[
<p>To put this in terms of why:<p>When MS rolls up, they say they are charging for your usage of the MS database, Office, Outlook, Microsoft Windows 11, and the security promises. They are explicit that developing with and on Microsoft gives you access to the ecosystem. So the total bill is high, but part of that bill is a gateway into everyone else using Office, Outlook, Excel, Visual Studio, or SharePoint. The world runs on Excel, and MS enterprise sales know that. They are negotiating a contract for one-of-a-kind software and access to the world of MS.<p>Red Hat rolls up saying they want to charge you. They don't get to say that if you don't pay, the company will lose access to the software or the ecosystem. They don't get to say they are gatekeepers to other Linux users. Red Hat can't claim to be providing the database or the development environment; everyone thinks those are free. If you stop paying Red Hat, you can probably find an almost package-for-package-compatible alternative in a rolling release (source: watched that happen multiple times with CentOS et al.). So instead Red Hat sells a contract for service, support, and indemnity. Those are great products, and Red Hat will continue for a long time. They will just have very different staying power when contracts are renewed, and very different revenue growth.<p>It's not how I want it to be, just how I see it.<p>Source: Worked at MS and have friends who are former Red Hat.</p>
]]></description><pubDate>Thu, 14 Nov 2024 19:43:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=42140323</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=42140323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42140323</guid></item><item><title><![CDATA[New comment by eclark in "Open-Source Software Is in Crisis"]]></title><description><![CDATA[
<p>Hey Cosmin, long time!<p>I agree the contract should be clear up front. Changing expectations later is a big problem. People want to give away the software for a while, using it as a loss leader to get attention, while not being honest about their later need for money to fund the going concern.<p>I tried to write a bit about that in my post here: <a href="https://www.batteriesincl.com/posts/fairsource" rel="nofollow">https://www.batteriesincl.com/posts/fairsource</a><p>I was starting Batteries Included and had been writing it in Elixir. I wanted to give back to the community, show how to use Phoenix/LiveView, and be transparent about what users are running, etc. However, I also knew that if this was going to work long-term, I could not give it away to everyone forever. So it's better to be honest about things as early as possible.<p>We paid a very smart lawyer to draft the best compromise we could, as early as possible.<p><a href="https://www.batteriesincl.com/LICENSE-1.0" rel="nofollow">https://www.batteriesincl.com/LICENSE-1.0</a><p>This means we can develop in the open here: <a href="https://github.com/batteries-included/batteries-included">https://github.com/batteries-included/batteries-included</a> while giving it away long-term and still being honest that this will require a long-term revenue stream. That revenue stream will come from the companies using it on larger installs.</p>
]]></description><pubDate>Thu, 14 Nov 2024 19:26:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=42140118</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=42140118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42140118</guid></item><item><title><![CDATA[New comment by eclark in "Open-Source Software Is in Crisis"]]></title><description><![CDATA[
<p>Linux powers just about every major datacenter in the world. Every ML model was trained on Linux. However, if you tried to build a company as powerful and successful as Microsoft on it, you would fail.<p>Red Hat is the only company that has really made a living off of Linux. Even then, their contracts are orders of magnitude smaller than what the exact same customers pay Microsoft.<p>Linux is successful and remarkable, and every company sees the value in having it around. So there's a shared mutual need. However, that doesn't mean anyone can turn it into more than barely scraping by as a going business.</p>
]]></description><pubDate>Thu, 14 Nov 2024 19:14:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=42139998</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=42139998</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42139998</guid></item><item><title><![CDATA[New comment by eclark in "Open-Source Software Is in Crisis"]]></title><description><![CDATA[
<p>Open source has a lot of issues:<p>- The anchoring principle. Once you have set the price of the software at free, humans expect it to be free forever, and everything related is judged against that initial price bucket. Humans will never want to pay for it later; it's judged worthless.<p>- Open-core and closed-source add-on and support models have misaligned incentives. The community wants things to be easy to use and opinionated, while the OSS company wants to include as many customers as possible, who need or want its help with their niche choices.<p>- Sustainability is awful. If you start an open source project, you're either going to burn yourself and the community out, or you'll require funding to do it as a day job. So, if you want the project not to stop early, you need money to pay developers to make the software better.<p>- Larger companies want something opinionated, but rarely what's good for most of the community. So eventually, when big tech/big industry is paying for developers to work on the project, there comes a point where the large company wants its cake and the community is held hostage. Do that enough times and the large company forks internally, and the community fractures or withers away.<p>Source: I was at Cloudera while the Big Data craze took off. Then I did open source for large tech.</p>
]]></description><pubDate>Thu, 14 Nov 2024 17:26:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=42138618</link><dc:creator>eclark</dc:creator><comments>https://news.ycombinator.com/item?id=42138618</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42138618</guid></item></channel></rss>