<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: john_horton</title><link>https://news.ycombinator.com/user?id=john_horton</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 10:08:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=john_horton" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Add, delete and move data points to create a particular regression line]]></title><description><![CDATA[
<p>Article URL: <a href="https://line-fitter--johnhorton.replit.app/">https://line-fitter--johnhorton.replit.app/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46195819">https://news.ycombinator.com/item?id=46195819</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 08 Dec 2025 18:26:48 +0000</pubDate><link>https://line-fitter--johnhorton.replit.app/</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=46195819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46195819</guid></item><item><title><![CDATA[The Coasean Singularity? Demand, Supply, and Market Design with AI Agents]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.nber.org/papers/w34468">https://www.nber.org/papers/w34468</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45948121">https://news.ycombinator.com/item?id=45948121</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 16 Nov 2025 20:28:09 +0000</pubDate><link>https://www.nber.org/papers/w34468</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=45948121</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45948121</guid></item><item><title><![CDATA[Superrational Reasoning in the Prisoner's Dilemma with LLMs]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.expectedparrot.com/content/clajelli/superrationality-game-theory">https://www.expectedparrot.com/content/clajelli/superrationality-game-theory</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45783363">https://news.ycombinator.com/item?id=45783363</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 01 Nov 2025 17:12:15 +0000</pubDate><link>https://www.expectedparrot.com/content/clajelli/superrationality-game-theory</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=45783363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45783363</guid></item><item><title><![CDATA[A Poor Man's User Study with a Vision Model and E[P]]]></title><description><![CDATA[
<p>Article URL: <a href="https://twitter.com/johnjhorton/status/1943473769219002766">https://twitter.com/johnjhorton/status/1943473769219002766</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44527391">https://news.ycombinator.com/item?id=44527391</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 11 Jul 2025 00:58:26 +0000</pubDate><link>https://twitter.com/johnjhorton/status/1943473769219002766</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=44527391</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44527391</guid></item><item><title><![CDATA[A little bit of human-provided structure gives better LLM answers]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.expectedparrot.com/p/whit-diffie-erasure">https://blog.expectedparrot.com/p/whit-diffie-erasure</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43882857">https://news.ycombinator.com/item?id=43882857</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 03 May 2025 22:23:18 +0000</pubDate><link>https://blog.expectedparrot.com/p/whit-diffie-erasure</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=43882857</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43882857</guid></item><item><title><![CDATA[Using vision capable LLMs to transcribe a handwritten letter]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.expectedparrot.com/content/johnjhorton/grant-letter">https://www.expectedparrot.com/content/johnjhorton/grant-letter</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43083309">https://news.ycombinator.com/item?id=43083309</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 17 Feb 2025 21:09:29 +0000</pubDate><link>https://www.expectedparrot.com/content/johnjhorton/grant-letter</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=43083309</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43083309</guid></item><item><title><![CDATA[New comment by john_horton in "Ask HN: What is the best method for turning a scanned book as a PDF into text?"]]></title><description><![CDATA[
<p>This might be of interest: <a href="https://www.expectedparrot.com/content/johnjhorton/grant-letter" rel="nofollow">https://www.expectedparrot.com/content/johnjhorton/grant-let...</a>
It uses a collection of models to extract the text from a handwritten letter by US Grant. Probably overkill for something nicely printed.</p>
]]></description><pubDate>Mon, 17 Feb 2025 19:09:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=43082261</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=43082261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43082261</guid></item><item><title><![CDATA[Little's Law]]></title><description><![CDATA[
<p>Article URL: <a href="https://en.wikipedia.org/wiki/Little%27s_law">https://en.wikipedia.org/wiki/Little%27s_law</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41725707">https://news.ycombinator.com/item?id=41725707</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 02 Oct 2024 23:00:40 +0000</pubDate><link>https://en.wikipedia.org/wiki/Little%27s_law</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=41725707</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41725707</guid></item><item><title><![CDATA[LaTeX writing checker in Python using LLMs]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.expectedparrot.com/content/e8012216-2f36-42c2-9008-9ee040ffcf36?from_page=my_content">https://www.expectedparrot.com/content/e8012216-2f36-42c2-9008-9ee040ffcf36?from_page=my_content</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41257302">https://news.ycombinator.com/item?id=41257302</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 15 Aug 2024 15:38:00 +0000</pubDate><link>https://www.expectedparrot.com/content/e8012216-2f36-42c2-9008-9ee040ffcf36?from_page=my_content</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=41257302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41257302</guid></item><item><title><![CDATA[New comment by john_horton in "Python Library for Structured Data Extraction via LLM"]]></title><description><![CDATA[
<p>Hey, thanks for noticing - here's the MIT-licensed library it's based on: <a href="https://github.com/expectedparrot/edsl">https://github.com/expectedparrot/edsl</a></p>
]]></description><pubDate>Wed, 14 Aug 2024 15:11:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=41247040</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=41247040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41247040</guid></item><item><title><![CDATA[New comment by john_horton in "Launch HN: Trellis (YC W24) – AI-powered workflows for unstructured data"]]></title><description><![CDATA[
<p>thanks! B/c it got some positive reaction here, I did a little thread on how you can turn this flow into an API: <a href="https://x.com/johnjhorton/status/1823672992624242895" rel="nofollow">https://x.com/johnjhorton/status/1823672992624242895</a></p>
]]></description><pubDate>Wed, 14 Aug 2024 11:15:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=41244806</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=41244806</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41244806</guid></item><item><title><![CDATA[New comment by john_horton in "Launch HN: Trellis (YC W24) – AI-powered workflows for unstructured data"]]></title><description><![CDATA[
<p>Very cool - I've been working on an open-source Python package that lets you do some similar things (<a href="https://github.com/expectedparrot/edsl">https://github.com/expectedparrot/edsl</a>).<p>Here's an example of the Enron email demo using the edsl syntax/package & a few different LLMs: 
<a href="https://www.expectedparrot.com/content/6607caa1-efc5-439f-8530-8fcc5625fdd7" rel="nofollow">https://www.expectedparrot.com/content/6607caa1-efc5-439f-85...</a></p>
]]></description><pubDate>Tue, 13 Aug 2024 23:57:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=41241137</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=41241137</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41241137</guid></item><item><title><![CDATA[New comment by john_horton in "Python package for administering surveys to LLMs"]]></title><description><![CDATA[
<p>Link to the package: <a href="https://github.com/expectedparrot/edsl">https://github.com/expectedparrot/edsl</a></p>
]]></description><pubDate>Thu, 18 Apr 2024 14:26:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=40076639</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=40076639</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40076639</guid></item><item><title><![CDATA[Python package for administering surveys to LLMs]]></title><description><![CDATA[
<p>Article URL: <a href="https://twitter.com/johnjhorton/status/1780937297967423707">https://twitter.com/johnjhorton/status/1780937297967423707</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=40076613">https://news.ycombinator.com/item?id=40076613</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 18 Apr 2024 14:23:39 +0000</pubDate><link>https://twitter.com/johnjhorton/status/1780937297967423707</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=40076613</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40076613</guid></item><item><title><![CDATA[Algorithmic Writing Assistance on Jobseekers’ Resumes Increases Hires]]></title><description><![CDATA[
<p>Article URL: <a href="https://twitter.com/emmavaninwegen/status/1618714903371722752">https://twitter.com/emmavaninwegen/status/1618714903371722752</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=34569313">https://news.ycombinator.com/item?id=34569313</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 29 Jan 2023 16:06:25 +0000</pubDate><link>https://twitter.com/emmavaninwegen/status/1618714903371722752</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=34569313</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34569313</guid></item><item><title><![CDATA[New comment by john_horton in "Large language models as simulated economic agents (2022) [pdf]"]]></title><description><![CDATA[
<p>Got it - so it is the "performativity critique" - the idea that the LLM "knows" economic theories and responds in accordance with those theories. I don't think that's very likely because (a) econ writing is presumably a tiny, tiny fraction of the corpus and (b) it would imply an amazing degree of transfer learning, e.g., it would know to apply "status quo bias" (because it read the papers) to new scenarios. But as the paper makes clear, you can't use it to "confirm" theories but rather use it like economists use other models - to explore behavior and generate testable predictions cheaply that you can then test with actual humans in realistic scenarios. The last experiment in the paper is from an experiment in a working paper of mine. There's no way the LLM <i>knows</i> this result, but if I had reversed the temporal order (create the scenario w/ the LLM, then run the experiment), it could have guided what to look at. That's likely what's scientifically useful. Anyway, thanks for engaging.</p>
]]></description><pubDate>Sun, 15 Jan 2023 14:56:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=34389844</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=34389844</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34389844</guid></item><item><title><![CDATA[New comment by john_horton in "Large language models as simulated economic agents (2022) [pdf]"]]></title><description><![CDATA[
<p>I'm sorry, I don't follow - is your claim that, say, when an AI agent exhibits status quo bias in responding to decision scenarios (e.g., a preference for options posed as the status quo relative to a neutral framing - Figure 3), the <i>reason</i> this happens, empirically, is that the LLM has been trained on text describing status quo bias? E.g., like if an apple fell to the ground in a game, it was because the physics engine had been programmed w/ the laws of gravity?</p>
]]></description><pubDate>Sun, 15 Jan 2023 03:30:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=34386484</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=34386484</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34386484</guid></item><item><title><![CDATA[New comment by john_horton in "Large language models as simulated economic agents (2022) [pdf]"]]></title><description><![CDATA[
<p>Are you referring to this: "What kinds of experiments are likely to work well? Given current capabilities, games with complex instructions are not presently likely to work well, but with more advanced LLMs on the horizon, this is likely to change. I should also note that research questions like what is “the effect of x on y” are likely to work much better than questions like “what is the level of x?” Consider that in my Kahneman et al. (1986) example, I can create AI “socialists” who are not too keen on the price system generally. If I polled them about who they want for president, there is no reason to think it would generalize to the population at large. But if my research question was “what is the effect of the size of the price increase on moral judgments” I might be able to make progress. That being said, it might be possible to create agents
with the correct “weights” to get not just qualitative results but also quantitatively accurate results. I did not try, but one could imagine choosing population shares for the Charness and Rabin (2002) “types” to match moments with reality, then using that population for other scenarios." --- To clarify, this is about what <i>research questions</i> are likely to work well here, not what questions posed to LLMs will work well.</p>
]]></description><pubDate>Sun, 15 Jan 2023 01:58:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=34386024</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=34386024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34386024</guid></item><item><title><![CDATA[New comment by john_horton in "Large language models as simulated economic agents (2022) [pdf]"]]></title><description><![CDATA[
<p>Is it so implausible that the training process that creates LLMs might learn features of human behavior that could then be uncovered via experimentation? I showed, empirically, that one can replicate several findings in behavioral economics with AI agents. Perhaps the model "knows" how to behave from these papers, but I think the more plausible interpretation is that it learned about human preferences (against price gouging, status quo bias, & so on) from its training. As such, it seems quite likely that there are other latent behaviors captured by LLMs and yet to be discovered.</p>
]]></description><pubDate>Sat, 14 Jan 2023 23:49:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=34385294</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=34385294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34385294</guid></item><item><title><![CDATA[New comment by john_horton in "Large language models as simulated economic agents (2022) [pdf]"]]></title><description><![CDATA[
<p>that's a great suggestion - thanks!</p>
]]></description><pubDate>Sat, 14 Jan 2023 20:33:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=34383899</link><dc:creator>john_horton</dc:creator><comments>https://news.ycombinator.com/item?id=34383899</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34383899</guid></item></channel></rss>