<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: paraschopra</title><link>https://news.ycombinator.com/user?id=paraschopra</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 11:34:30 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=paraschopra" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[AI Consciousness Requires Validated Models of Human Consciousness [pdf]]]></title><description><![CDATA[
<p>Article URL: <a href="https://lossfunk.com/papers/ai-consciousness.pdf">https://lossfunk.com/papers/ai-consciousness.pdf</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47833709">https://news.ycombinator.com/item?id=47833709</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 20 Apr 2026 13:04:11 +0000</pubDate><link>https://lossfunk.com/papers/ai-consciousness.pdf</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47833709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47833709</guid></item><item><title><![CDATA[Rosetta Code – Programming Chrestomathy]]></title><description><![CDATA[
<p>Article URL: <a href="https://rosettacode.org/wiki/Rosetta_Code">https://rosettacode.org/wiki/Rosetta_Code</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47776153">https://news.ycombinator.com/item?id=47776153</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 15 Apr 2026 08:20:28 +0000</pubDate><link>https://rosettacode.org/wiki/Rosetta_Code</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47776153</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47776153</guid></item><item><title><![CDATA[The Unbearable Automaticity of Being [pdf]]]></title><description><![CDATA[
<p>Article URL: <a href="https://acmelab.yale.edu/sites/default/files/1999_the_unbearable_automaticity_of_being.pdf">https://acmelab.yale.edu/sites/default/files/1999_the_unbearable_automaticity_of_being.pdf</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47647124">https://news.ycombinator.com/item?id=47647124</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 05 Apr 2026 07:46:19 +0000</pubDate><link>https://acmelab.yale.edu/sites/default/files/1999_the_unbearable_automaticity_of_being.pdf</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47647124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47647124</guid></item><item><title><![CDATA[New comment by paraschopra in "EsoLang-Bench: Evaluating Genuine Reasoning in LLMs via Esoteric Languages"]]></title><description><![CDATA[
<p>(Founder of Lossfunk, the lab behind this research.)<p>EsoLang-Bench went viral on X and a lot of discussion ensued, so I'm addressing a few of the common questions that came up. Hope it helps.<p>a) Why do it? Does it measure anything useful?<p>It was a curiosity-driven project. We're interested in how humans exhibit sample efficiency in learning and out-of-distribution (OOD) generalization. So we simply asked: if models can zero/few-shot correct answers for simple programming problems in Python, can they do the same in esoteric languages?<p>The benchmark is what it is. Different people can interpret its usefulness differently, and we encourage that.<p>b) But humans can't write esoteric languages well either. It's an unfair comparison.<p>Primarily, we're interested in measuring LLM capabilities. With all the talk of ASI, it is supposed that their capabilities will soon be superhuman. So our primary motivation wasn't to compare against humans but to check what models can do on this by-construction difficult benchmark.<p>However, we do believe that humans are able to teach themselves a new domain by transferring their old skills. This benchmark sets a starting point for exploring how AI systems can do the same (which is what we're exploring now).<p>c) But Claude Code crushes it. You limited the models artificially.<p>Yes, we tested models in zero- and few-shot settings, and in the agentic loop we describe in the paper we limit the number of iterations. As we wrote above, we wanted to understand their performance from a comparative point of view (say, against highly represented languages like Python), and that's why the benchmark is designed this way.<p>After the paper was finalized, we experimented with agentic systems where we gave models tools like bash and allowed unlimited iterations (but limited submission attempts).
They indeed perform much better.<p>The relevant question is what makes these models perform so well when you give them tools and iterations versus when you don't. Are they reasoning and learning like humans, or is it something else?<p>d) So, are LLMs hyped? Or is our study clickbait?<p>The paper, code and benchmark are all open source.<p>We encourage whoever is interested to read it and make up their own minds.<p>(We couldn't help noticing that the <i>same</i> set of results was interpreted wildly differently within the community. A debate between opposing camps on LLMs ensued. Perhaps that's a good thing?)</p>
]]></description><pubDate>Fri, 20 Mar 2026 03:07:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47449994</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47449994</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47449994</guid></item><item><title><![CDATA[Bayesian teaching enables probabilistic reasoning in large language models]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.nature.com/articles/s41467-025-67998-6">https://www.nature.com/articles/s41467-025-67998-6</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47257551">https://news.ycombinator.com/item?id=47257551</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 05 Mar 2026 04:24:37 +0000</pubDate><link>https://www.nature.com/articles/s41467-025-67998-6</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47257551</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47257551</guid></item><item><title><![CDATA[Empirical evidence for consciousness without access]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.sciencedirect.com/science/article/pii/S0010027723001634">https://www.sciencedirect.com/science/article/pii/S0010027723001634</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47215832">https://news.ycombinator.com/item?id=47215832</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 02 Mar 2026 09:54:26 +0000</pubDate><link>https://www.sciencedirect.com/science/article/pii/S0010027723001634</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47215832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47215832</guid></item><item><title><![CDATA[New comment by paraschopra in "Statement from Dario Amodei on our discussions with the Department of War"]]></title><description><![CDATA[
<p>I’m very happy that Anthropic chose not to cave in to the US Dept of War’s demands, but their statement has an ambiguity.<p>Does this mean they’d be OK with their models being used for mass surveillance & autonomous weapons against OTHER countries?<p>A clarification would help.</p>
]]></description><pubDate>Fri, 27 Feb 2026 03:32:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47176138</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47176138</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47176138</guid></item><item><title><![CDATA[New comment by paraschopra in "The First Fully General Computer Action Model"]]></title><description><![CDATA[
<p>Do you have more info on the video encoding process?<p>You write:<p>>We created a model without this tradeoff by training our video encoder on a masked compression objective<p>And I understand why this would give you more detail per token, but how are you reducing the total number of tokens?</p>
]]></description><pubDate>Thu, 26 Feb 2026 09:45:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47163947</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47163947</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47163947</guid></item><item><title><![CDATA[ConTraSt – database of empirical results on consciousness theories]]></title><description><![CDATA[
<p>Article URL: <a href="https://contrastdb.tau.ac.il/">https://contrastdb.tau.ac.il/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47163780">https://news.ycombinator.com/item?id=47163780</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 26 Feb 2026 09:18:58 +0000</pubDate><link>https://contrastdb.tau.ac.il/</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47163780</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47163780</guid></item><item><title><![CDATA[New comment by paraschopra in "The First Fully General Computer Action Model"]]></title><description><![CDATA[
<p>Curious - how much did this cost to train?</p>
]]></description><pubDate>Thu, 26 Feb 2026 09:17:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47163771</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47163771</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47163771</guid></item><item><title><![CDATA[Evaluating Prediction Markets]]></title><description><![CDATA[
<p>Article URL: <a href="https://sceneswithsimon.com/p/evaluating-prediction-markets">https://sceneswithsimon.com/p/evaluating-prediction-markets</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47135867">https://news.ycombinator.com/item?id=47135867</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 24 Feb 2026 11:39:38 +0000</pubDate><link>https://sceneswithsimon.com/p/evaluating-prediction-markets</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47135867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47135867</guid></item><item><title><![CDATA[Show HN: Murmuration – AI visualizes your state of mind]]></title><description><![CDATA[
<p>Hi HN,<p>Over the weekend, I built a Chrome extension that transforms your ChatGPT and Claude conversation topics into beautiful, animated black-and-white visualizations on every new tab.<p>How It Works<p>- Scrapes conversation titles from ChatGPT and Claude sidebars via content scripts
- Generates self-contained HTML/CSS/JS art pieces via OpenRouter LLM API
- Displays art in a sandboxed iframe on every new tab, biased towards recent pieces (exponential decay with factor 0.95). The refresh button picks uniformly at random from all stored art. Up to 100 artifacts are kept.<p>Hope you like it! I've been using it for three days, and I keep getting surprised by the generations I see :)<p>Note: you'll need an OpenRouter key, and it may cost ~$5/mo to generate 3 visuals/day with Sonnet 4.6.</p>
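<p>The selection logic above can be sketched roughly like this (a minimal Python sketch of the described behavior; the function names are mine, not from the extension, which is actually written in JS):</p>

```python
import random

DECAY = 0.95      # per-item decay factor from the description above
MAX_ART = 100     # cap on stored artifacts

def pick_for_new_tab(artifacts):
    """Pick an art piece for a new tab, biased towards recent ones.

    `artifacts` is ordered newest-first; item i gets weight DECAY**i,
    so each older piece is 5% less likely than the one after it.
    """
    weights = [DECAY ** i for i in range(len(artifacts))]
    return random.choices(artifacts, weights=weights, k=1)[0]

def pick_for_refresh(artifacts):
    """The refresh button ignores recency: uniform pick over all stored art."""
    return random.choice(artifacts)

def store(artifacts, new_piece):
    """Prepend the newest piece and trim to the cap."""
    return ([new_piece] + artifacts)[:MAX_ART]
```

<p>With a decay of 0.95, the newest piece is still only modestly favoured, so older generations keep resurfacing instead of disappearing.</p>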
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47135171">https://news.ycombinator.com/item?id=47135171</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 24 Feb 2026 10:09:40 +0000</pubDate><link>https://github.com/paraschopra/murmuration</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47135171</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47135171</guid></item><item><title><![CDATA[RL Debate Series]]></title><description><![CDATA[
<p>Article URL: <a href="https://sensorimotorai.github.io/debates/">https://sensorimotorai.github.io/debates/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47121377">https://news.ycombinator.com/item?id=47121377</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 23 Feb 2026 12:21:58 +0000</pubDate><link>https://sensorimotorai.github.io/debates/</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47121377</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47121377</guid></item><item><title><![CDATA[The Wolfram S Combinator Challenge]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.combinatorprize.org/">https://www.combinatorprize.org/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47119665">https://news.ycombinator.com/item?id=47119665</a></p>
<p>Points: 87</p>
<p># Comments: 22</p>
]]></description><pubDate>Mon, 23 Feb 2026 08:39:47 +0000</pubDate><link>https://www.combinatorprize.org/</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47119665</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47119665</guid></item><item><title><![CDATA[The Myth of the Bayesian Brain]]></title><description><![CDATA[
<p>Article URL: <a href="https://link.springer.com/article/10.1007/s00421-025-05855-6">https://link.springer.com/article/10.1007/s00421-025-05855-6</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47119395">https://news.ycombinator.com/item?id=47119395</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 23 Feb 2026 08:03:54 +0000</pubDate><link>https://link.springer.com/article/10.1007/s00421-025-05855-6</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47119395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47119395</guid></item><item><title><![CDATA[Map of All Theories of Consciousness]]></title><description><![CDATA[
<p>Article URL: <a href="https://loc.closertotruth.com/map">https://loc.closertotruth.com/map</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47084966">https://news.ycombinator.com/item?id=47084966</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 20 Feb 2026 07:48:07 +0000</pubDate><link>https://loc.closertotruth.com/map</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47084966</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47084966</guid></item><item><title><![CDATA[Can random experimental choice lead to better theories?]]></title><description><![CDATA[
<p>Article URL: <a href="https://journals.sagepub.com/doi/10.1177/26339137261421577">https://journals.sagepub.com/doi/10.1177/26339137261421577</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47074083">https://news.ycombinator.com/item?id=47074083</a></p>
<p>Points: 38</p>
<p># Comments: 29</p>
]]></description><pubDate>Thu, 19 Feb 2026 14:26:20 +0000</pubDate><link>https://journals.sagepub.com/doi/10.1177/26339137261421577</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47074083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47074083</guid></item><item><title><![CDATA[Language Models Entangle Language and Culture]]></title><description><![CDATA[
<p>Article URL: <a href="https://arxiv.org/abs/2601.15337">https://arxiv.org/abs/2601.15337</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47061078">https://news.ycombinator.com/item?id=47061078</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 18 Feb 2026 14:09:59 +0000</pubDate><link>https://arxiv.org/abs/2601.15337</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47061078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47061078</guid></item><item><title><![CDATA[New comment by paraschopra in "Show HN: Beautiful interactive explainers generated with Claude Code"]]></title><description><![CDATA[
<p>It generated this: <a href="https://paraschopra.github.io/explainers/optical-interferometry/" rel="nofollow">https://paraschopra.github.io/explainers/optical-interferome...</a><p>I haven't checked it, but I'm curious about your feedback.</p>
]]></description><pubDate>Wed, 18 Feb 2026 11:39:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47059978</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47059978</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47059978</guid></item><item><title><![CDATA[New comment by paraschopra in "Show HN: Beautiful interactive explainers generated with Claude Code"]]></title><description><![CDATA[
<p>Yep, I was pretty surprised by the audio widgets too.</p>
]]></description><pubDate>Wed, 18 Feb 2026 10:48:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47059638</link><dc:creator>paraschopra</dc:creator><comments>https://news.ycombinator.com/item?id=47059638</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47059638</guid></item></channel></rss>