<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: JoelEinbinder</title><link>https://news.ycombinator.com/user?id=JoelEinbinder</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 24 Apr 2026 19:55:03 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=JoelEinbinder" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by JoelEinbinder in "GPT-5.5"]]></title><description><![CDATA[
<p>My understanding is that existing rail lines aren't flat/straight enough for high speed rail. There's no point to a bullet train if it has to constantly slow down for corners/hills.</p>
]]></description><pubDate>Fri, 24 Apr 2026 00:03:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47883841</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=47883841</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47883841</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Show HN: Lightpanda, an open-source headless browser in Zig"]]></title><description><![CDATA[
<p>The benchmark shows lower RAM usage on a very simple demo website. I expect that if the benchmark ran on a random set of real websites, RAM usage would not be meaningfully lower than Chrome's. Happy to be impressed and wrong if it remains lower.</p>
]]></description><pubDate>Fri, 24 Jan 2025 17:09:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=42815163</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=42815163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42815163</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Show HN: Lightpanda, an open-source headless browser in Zig"]]></title><description><![CDATA[
<p>When I've talked to people running this kind of AI scraping/agent workflow, the cost of the AI parts dwarfs that of the web browser parts, which makes the computational cost of the browser irrelevant. I'm curious what situation you got yourself into where optimizing the browser results in meaningful savings. I'd also like to be in that place!<p>I think your RAM usage benchmark is deceptive. I'd expect a minimal browser to have much lower peak memory usage than Chrome on a minimal website, but that should even out or get worse as the websites get richer. The nature of web scraping is that the worst sites take up the vast majority of your CPU cycles. I don't think lowering the RAM usage of the browser process will have much real-world impact.</p>
]]></description><pubDate>Fri, 24 Jan 2025 16:29:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=42814724</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=42814724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42814724</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "We're forking Flutter"]]></title><description><![CDATA[
<p>Google's monorepo of closed source code.</p>
]]></description><pubDate>Mon, 28 Oct 2024 20:36:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=41975984</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41975984</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41975984</guid></item><item><title><![CDATA[Show HN: I made an online Python REPL]]></title><description><![CDATA[
<p>I added Python support into my terminal, and then split it out into its own website as a REPL powered by Pyodide. It has autocomplete, expandable objects, syntax highlighting, inline matplotlib charts, and many other features.<p>While I mainly use it in my terminal to test things in my local environment, I think the online version is pretty neat.<p>Blog post: <a href="https://joel.tools/repl/" rel="nofollow">https://joel.tools/repl/</a>
REPL: <a href="https://python.joel.tools/" rel="nofollow">https://python.joel.tools/</a>
Pyodide: <a href="https://pyodide.org/en/stable/" rel="nofollow">https://pyodide.org/en/stable/</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41857983">https://news.ycombinator.com/item?id=41857983</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 16 Oct 2024 11:45:24 +0000</pubDate><link>https://python.joel.tools/</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41857983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41857983</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Waymo and Hyundai enter multi-year, strategic partnership"]]></title><description><![CDATA[
<p>It seems Hyundai owns 33% of Kia, rather than Kia just being a brand under the same company like Lexus/Toyota. They share some things and compete on others.</p>
]]></description><pubDate>Fri, 04 Oct 2024 14:36:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=41741886</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41741886</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41741886</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Using the Infinite Bookspace to Reason About Language Models"]]></title><description><![CDATA[
<p>I think Hacker News might appreciate some of the behind-the-scenes of this post.<p>Getting this page to load quickly was not trivial. The initial dataset of books' starting sentences was over 20 megabytes. By only sending the unique prefix of each book, I was able to make that much smaller. Using a custom format, sorting the prefixes, and gzipping got the size down to 114kb, about 3 bytes per book. The full first sentences are downloaded on demand as the books are filtered down.<p>Rendering the books requires 5 million triangles. I used WebGL 2's drawArraysInstanced method. This allows me to define the book geometry only once; each book is then just defined by its rotation/position/color. Then it's just a matter of keeping the fragment shader simple.<p>Going into this project, I wasn't sure if it was possible. But I came away really impressed with how capable the web is these days if you are willing to push a bit.</p>
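<p>The unique-prefix idea can be sketched in a few lines of Python. This is a minimal illustration of the technique, not the site's actual code; the custom binary format and the gzip step are omitted, and the function names are mine:</p>
<pre><code>
```python
def _common(a, b):
    """Length of the shared prefix of strings a and b."""
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def unique_prefixes(sentences):
    """For each sentence (in sorted order), keep only the shortest
    prefix that distinguishes it from both sorted neighbors. The sort
    order plus these prefixes is enough to filter/search client-side;
    full sentences can be fetched on demand."""
    s = sorted(sentences)
    out = []
    for i, cur in enumerate(s):
        prev = s[i - 1] if i > 0 else ""
        nxt = s[i + 1] if i < len(s) - 1 else ""
        # One character past the longest overlap with either neighbor.
        n = max(_common(cur, prev), _common(cur, nxt)) + 1
        out.append(cur[:n])
    return out
```
</code></pre>
<p>Sorting first is what makes this work: only adjacent entries can share long prefixes, so each entry needs to differ only from its immediate neighbors.</p>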
]]></description><pubDate>Sat, 24 Aug 2024 18:35:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=41340345</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41340345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41340345</guid></item><item><title><![CDATA[Using the Infinite Bookspace to Reason About Language Models]]></title><description><![CDATA[
<p>Article URL: <a href="https://joel.tools/bookspace/">https://joel.tools/bookspace/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41340316">https://news.ycombinator.com/item?id=41340316</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Sat, 24 Aug 2024 18:31:53 +0000</pubDate><link>https://joel.tools/bookspace/</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41340316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41340316</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Ask HN: Would competitive chess be better with one allowed take-back per turn?"]]></title><description><![CDATA[
<p>It would make competitive chess even more draw-ish. It is much easier to see when you accidentally get into a losing position than when you miss a winning idea. So the take back would be used defensively.</p>
]]></description><pubDate>Sat, 24 Aug 2024 16:40:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=41339444</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41339444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41339444</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Are you better than a language model at predicting the next word?"]]></title><description><![CDATA[
<p>On the full set of 1000 questions, the language models are getting 30-35% correct.  With patience, humans can do 40-50%.<p>The language models were prompted with the text + each candidate answer, and the one with the lowest perplexity was picked. I tried to avoid instruction tuned models wherever possible to avoid the "voice" problem.</p>
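<p>The lowest-perplexity selection described above amounts to something like the following toy sketch. Here <code>score_logprobs</code> stands in for a real model's per-token log-probabilities; the actual models and prompts used for the quiz are not shown:</p>
<pre><code>
```python
import math

def perplexity(logprobs):
    """Perplexity = exp(-mean per-token log-probability).
    Lower means the model finds the text less surprising."""
    return math.exp(-sum(logprobs) / len(logprobs))

def pick_answer(prompt, candidates, score_logprobs):
    """Score prompt + each candidate answer with the model and
    return the candidate whose full string has the lowest perplexity."""
    return min(
        candidates,
        key=lambda c: perplexity(score_logprobs(prompt + " " + c)),
    )
```
</code></pre>
<p>Scoring the whole prompt + answer string (rather than sampling) is what makes the comparison deterministic and temperature-independent.</p>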
]]></description><pubDate>Sun, 18 Aug 2024 05:31:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=41280314</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41280314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41280314</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Are you better than a language model at predicting the next word?"]]></title><description><![CDATA[
<p>What scores are you getting using this technique?</p>
]]></description><pubDate>Sun, 18 Aug 2024 04:03:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=41280041</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41280041</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41280041</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Are you better than a language model at predicting the next word?"]]></title><description><![CDATA[
<p>After the quiz, the source is linked along with the full comment.</p>
]]></description><pubDate>Sun, 18 Aug 2024 03:52:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=41280011</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41280011</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41280011</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Are you better than a language model at predicting the next word?"]]></title><description><![CDATA[
<p>The prompts you see in the quiz are from real Hacker News comments. Whatever word the commenter said next is the "correct" word.</p>
]]></description><pubDate>Sun, 18 Aug 2024 03:17:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=41279903</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41279903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41279903</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Are you better than a language model at predicting the next word?"]]></title><description><![CDATA[
<p>Temperature doesn't play a role here, because the LLM is not being sampled (other than to generate the candidate answers). Instead, the answer the LLM picks is decided by computing the perplexity of the full prompt + answer string.</p>
]]></description><pubDate>Sun, 18 Aug 2024 03:16:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=41279896</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41279896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41279896</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Are you better than a language model at predicting the next word?"]]></title><description><![CDATA[
<p>If I used old comments, it's likely that the models would have trained on them. I haven't tested whether that makes a difference though.</p>
]]></description><pubDate>Sat, 17 Aug 2024 21:23:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=41278090</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41278090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41278090</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Are you better than a language model at predicting the next word?"]]></title><description><![CDATA[
<p>The language model generating the candidate answers generates tokens until a full word is produced. The language models picking their answer choose the completion that results in the lowest perplexity independent of the tokenization.</p>
]]></description><pubDate>Sat, 17 Aug 2024 20:50:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=41277848</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41277848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41277848</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Are you better than a language model at predicting the next word?"]]></title><description><![CDATA[
<p>The LLM didn’t generate the next word. Hacker News commenters did. You can see the source of the comment on the results screen.</p>
]]></description><pubDate>Sat, 17 Aug 2024 20:24:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=41277668</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41277668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41277668</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Are you better than a language model at predicting the next word?"]]></title><description><![CDATA[
<p>If you want to practice it one question at a time, you can set the question count to 1.
<a href="https://joel.tools/smarter/?questions=1" rel="nofollow">https://joel.tools/smarter/?questions=1</a><p>When I tested it this way it resulted in less of an emotional reaction.</p>
]]></description><pubDate>Sat, 17 Aug 2024 19:59:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=41277466</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41277466</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41277466</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Are you better than a language model at predicting the next word?"]]></title><description><![CDATA[
<p>That isn't how it's supposed to work. Sometimes you get a super annoying prompt like ">", but if you guess the right answer it should give you the point. I just checked the two prompts like that, and they seem to work for me.</p>
]]></description><pubDate>Sat, 17 Aug 2024 19:47:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=41277369</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41277369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41277369</guid></item><item><title><![CDATA[New comment by JoelEinbinder in "Are you better than a language model at predicting the next word?"]]></title><description><![CDATA[
<p>I made a little game/quiz where you try to guess the next word in a bunch of Hacker News comments and compete against various language models. I used llama2 to generate three alternative completions for each comment, creating a multiple-choice question. For the local language models that you are competing against, I consider them to have picked the answer with the lowest total perplexity of prompt + answer. I am able to replicate this behavior with the OpenAI models by setting a logit_bias that limits the model to picking only one of the allowed answers. I tried just giving the full multiple-choice question as a prompt and having it pick an answer, but that led to really poor results. So I'm not able to compare with Claude or other hosted LLMs that don't expose logit_bias.<p>I wouldn't call the quiz fun exactly. After playing with it a lot, I think I've been able to consistently get above 50% of questions right. I have slowed down a lot answering each question, which I think LLMs have trouble doing.</p>
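<p>The logit_bias trick can be sketched like this. The OpenAI API accepts a <code>logit_bias</code> map from token id to a bias in [-100, 100], where +100 effectively forces those tokens; everything else here (the <code>encode</code> tokenizer callback and how the bias is wired into a request) is an assumption for illustration, not the quiz's actual code:</p>
<pre><code>
```python
def build_logit_bias(answers, encode, bias=100):
    """Map every token id appearing in any allowed answer to a strong
    positive bias, so a completion can only be built from those tokens.
    `encode` is assumed to be the model's tokenizer (e.g. tiktoken's
    encode), returning a list of token ids for a string."""
    token_ids = {tid for answer in answers for tid in encode(answer)}
    return {tid: bias for tid in token_ids}
```
</code></pre>
<p>The resulting dict would then be passed as the <code>logit_bias</code> request parameter, with <code>max_tokens</code> capped at the answer length, so the model's "choice" is whichever constrained completion it emits. Hosted models without such a parameter can't be scored this way, hence the Claude caveat above.</p>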
]]></description><pubDate>Sat, 17 Aug 2024 19:23:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=41277195</link><dc:creator>JoelEinbinder</dc:creator><comments>https://news.ycombinator.com/item?id=41277195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41277195</guid></item></channel></rss>