<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: 100ideas</title><link>https://news.ycombinator.com/user?id=100ideas</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 16:53:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=100ideas" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[The Triangle of Everything]]></title><description><![CDATA[
<p>Article URL: <a href="https://mirror.xyz/avsa.eth/ZB9O324wEdVZT_GHEjae_pzJ9eiFzgKqOrr0XkwGm2Y">https://mirror.xyz/avsa.eth/ZB9O324wEdVZT_GHEjae_pzJ9eiFzgKqOrr0XkwGm2Y</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45381141">https://news.ycombinator.com/item?id=45381141</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 26 Sep 2025 00:33:59 +0000</pubDate><link>https://mirror.xyz/avsa.eth/ZB9O324wEdVZT_GHEjae_pzJ9eiFzgKqOrr0XkwGm2Y</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=45381141</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45381141</guid></item><item><title><![CDATA[New comment by 100ideas in "Dropbox discontinuing Vault, moving it to a normal folder"]]></title><description><![CDATA[
<p>What a non-answer answer!</p>
]]></description><pubDate>Thu, 30 Jan 2025 04:43:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=42874937</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=42874937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42874937</guid></item><item><title><![CDATA[New comment by 100ideas in "UnitedHealth overcharged cancer patients for drugs by over 1,000%"]]></title><description><![CDATA[
<p>You just answered my question:<p>Is it the case that UnitedHealth and Cigna each own (or control) one of the "big three" PBMs? If so, that is just crazy: they control insurance premium pricing, benefit decisions, AND the pricing of covered medications?<p>yadaebo wrote below "Medical Loss Ratio (MLR) is capped at 85% in the US which means 85% of revenue must go to patients". Does controlling a big PBM give an insurance company a loophole around that cap?</p>
]]></description><pubDate>Wed, 15 Jan 2025 21:56:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=42717685</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=42717685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42717685</guid></item><item><title><![CDATA[New comment by 100ideas in "Truly portable C applications"]]></title><description><![CDATA[
<p>Very interesting comments and moderation discussion on this article.</p>
]]></description><pubDate>Fri, 22 Nov 2024 11:18:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=42212882</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=42212882</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42212882</guid></item><item><title><![CDATA[New comment by 100ideas in "Solomonic learning: Large language models and the art of induction"]]></title><description><![CDATA[
<p>I think you both make valid points, but I also get the sense that the article is articulating insights gained from pure-math explorations into the theoretical limits of learning, which can sound like "turboencabulator-speak" when compressed into prose.<p>Maybe I should have just linked to the research paper:<p>[B'MOJO: Hybrid state space realizations of foundation models with eidetic and fading memory](<a href="https://www.arxiv.org/abs/2407.06324" rel="nofollow">https://www.arxiv.org/abs/2407.06324</a>)</p>
]]></description><pubDate>Fri, 22 Nov 2024 04:30:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=42211207</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=42211207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42211207</guid></item><item><title><![CDATA[New comment by 100ideas in "Solomonic learning: Large language models and the art of induction"]]></title><description><![CDATA[
<p>Oops, thanks. I changed it.</p>
]]></description><pubDate>Fri, 22 Nov 2024 04:23:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=42211177</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=42211177</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42211177</guid></item><item><title><![CDATA[New comment by 100ideas in "Solomonic learning: Large language models and the art of induction"]]></title><description><![CDATA[
<p>Basically, the author (Soatto, a VP at AWS) suggests that as LLMs scale up, they begin to resemble Solomonoff inference: a hypothetically optimal but computationally infinite approach that executes all possible programs to match the observed data. By definition this yields the best answer to any given question, yet it requires no learning, since the entire process can simply be repeated for every query (thanks to infinite computation).<p>The article develops a theoretical framework contrasting traditional inductive learning (which emphasizes generalization over memorization) with transductive inference (which embraces memorization and reasoning). Here's a quote:<p>"What matters is that LLMs are inductively trained transductive-inference engines and can therefore support both forms of inference.[2] They are capable of performing inference by inductive learning, like any trained classifier, akin to Daniel Kahneman’s “system 1” behavior — the fast thinking of his book title Thinking Fast and Slow. But LLMs are also capable of rudimentary forms of transduction, such as in-context-learning and chain of thought, which we may call system 2 — slow-thinking — behavior. The more sophisticated among us have even taught LLMs to do deduction — the ultimate test for their emergent abilities."<p>Sadly, the opening quote is never elucidated.</p>
]]></description><pubDate>Fri, 22 Nov 2024 04:22:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=42211168</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=42211168</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42211168</guid></item><item><title><![CDATA[New comment by 100ideas in ""One year of research in neural networks is sufficient to believe in God.""]]></title><description><![CDATA[
<p>Yes, I should have made that clear in my first comment. Thanks for doing so. I used the quote in my title because I found it a fascinating way to start a technical blog post, and it made me want to read the article to understand what the author was planning to write from such a beginning.</p>
]]></description><pubDate>Fri, 22 Nov 2024 04:06:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=42211094</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=42211094</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42211094</guid></item><item><title><![CDATA[New comment by 100ideas in ""One year of research in neural networks is sufficient to believe in God.""]]></title><description><![CDATA[
<p>I found the opening quote of this article to be intriguing, especially since it was from a 1992 research lab:<p>“One year of research in neural networks is sufficient to believe in God.” The writing on the wall of John Hopfield’s lab at Caltech made no sense to me in 1992. Three decades later, and after years of building large language models, I see its sense if one replaces sufficiency with necessity: understanding neural networks as we teach them today requires believing in an immanent entity.</p>
]]></description><pubDate>Fri, 22 Nov 2024 03:44:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=42210979</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=42210979</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42210979</guid></item><item><title><![CDATA[Solomonic learning: Large language models and the art of induction]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.amazon.science/blog/solomonic-learning-large-language-models-and-the-art-of-induction">https://www.amazon.science/blog/solomonic-learning-large-language-models-and-the-art-of-induction</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42210978">https://news.ycombinator.com/item?id=42210978</a></p>
<p>Points: 5</p>
<p># Comments: 9</p>
]]></description><pubDate>Fri, 22 Nov 2024 03:44:11 +0000</pubDate><link>https://www.amazon.science/blog/solomonic-learning-large-language-models-and-the-art-of-induction</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=42210978</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42210978</guid></item><item><title><![CDATA[New comment by 100ideas in "Senate Vote Tomorrow Could Give Helping Hand to Patent Trolls"]]></title><description><![CDATA[
<p>Who are the lobbyists pushing this?</p>
]]></description><pubDate>Thu, 19 Sep 2024 10:42:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=41590458</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=41590458</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41590458</guid></item><item><title><![CDATA[New comment by 100ideas in "Gilead shot prevents all HIV cases in trial"]]></title><description><![CDATA[
<p>So the control was the previous Gilead drug (presumably a weekly injection?) for the same condition (HIV+)?</p>
]]></description><pubDate>Sat, 22 Jun 2024 08:56:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=40757516</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=40757516</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40757516</guid></item><item><title><![CDATA[New comment by 100ideas in "The Geometry of Categorical and Hierarchical Concepts in Large Language Models"]]></title><description><![CDATA[
<p>Reminds me of Anthropic's recent work on identifying the neuron sets that correlate with various semantic concepts in Claude: <a href="https://news.ycombinator.com/item?id=40429540">https://news.ycombinator.com/item?id=40429540</a> "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet"</p>
]]></description><pubDate>Tue, 11 Jun 2024 07:37:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=40643600</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=40643600</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40643600</guid></item><item><title><![CDATA[New comment by 100ideas in "A Revolution in Biology?"]]></title><description><![CDATA[
<p>Ditto. The larger question is how the linear (primary) sequence of DNA ultimately drives the spatio-temporal developmental program that leads to mature differentiated cells working together at the organoid / organ / organism scale. How do cells know what to do in time and space as the organism grows? How is that logic encoded in the genome?<p>Eric Davidson did a bunch of pioneering work meticulously "debugging" this spatiotemporal genomic logic in the sea urchin. Pretty amazing. Eukaryotes like us have cis-acting control elements both directly upstream of our genes (proximal promoters) and hundreds of thousands of base pairs distant (distal enhancers). The region of DNA directly preceding a gene's open reading frame usually carries a series of DNA motifs that bind proteins which can increase or decrease expression of the gene. Davidson and others showed that the trans-acting transcription factors that bind these control motifs are themselves bound by additional proteins, literally a layer on top, and that this second layer of proteins recruits a tertiary layer that conditionally drives more or less gene expression, depending on their identities. You could say the secondary and tertiary layers are a form of "abstraction" in a literal sense, since they encode a hierarchy of logical operations.<p>Here's an open-access overview of Davidson's work which incidentally illuminates a lot of these concepts in more detail for a lay audience: "Eric Davidson: Steps to a gene regulatory network for development" by Ellen Rothenberg, 2016; doi: 10.1016/j.ydbio.2016.01.020 <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4828313/" rel="nofollow">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4828313/</a><p>To see the decoded logic in pseudocode and with a diagram, see "cis-Regulatory control circuits in development", Howard and Davidson, 2004, Developmental Biology, vol. 271, <a href="https://doi.org/10.1016/j.ydbio.2004.03.031" rel="nofollow">https://doi.org/10.1016/j.ydbio.2004.03.031</a> (open access)</p>
]]></description><pubDate>Tue, 11 Jun 2024 06:19:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=40643083</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=40643083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40643083</guid></item><item><title><![CDATA[New comment by 100ideas in "A Canticle for Leibowitz"]]></title><description><![CDATA[
<p>One of my favorite books since high school. I remember mentioning it to my HS English teacher before class one day (20 years ago), and she amazed and surprised me by replying "oh, that's one of my favorites!" It seemed like an esoteric book to me at the time, but her generation knew of it too. Maybe the OP can share what they find interesting about it...</p>
]]></description><pubDate>Sat, 02 Mar 2024 06:40:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=39570415</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=39570415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39570415</guid></item><item><title><![CDATA[New comment by 100ideas in "OpenAI's board has fired Sam Altman"]]></title><description><![CDATA[
<p>Dear god, how is that an advantage? Are we all here just rooting for techno-dictator supremacy?</p>
]]></description><pubDate>Sat, 18 Nov 2023 05:40:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=38315799</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=38315799</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38315799</guid></item><item><title><![CDATA[New comment by 100ideas in "OpenAI's board has fired Sam Altman"]]></title><description><![CDATA[
<p>This comment is tone-deaf to the unique (and effective? TBD) arrangement between the uncompensated board of the OpenAI 501(c)(3) and the company it regulates. Your comment strikes me as not appreciating the unusually civic-minded arrangement, at least superficially, that is enabling the current power play. Maybe read the board's letter more carefully and give your reaction. You castigate them as "non-techies," meaning... what?</p>
]]></description><pubDate>Sat, 18 Nov 2023 05:38:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=38315788</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=38315788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38315788</guid></item><item><title><![CDATA[New comment by 100ideas in "Understand (1991)"]]></title><description><![CDATA[
<p>What's the deal with the host URL?</p>
]]></description><pubDate>Fri, 07 Jul 2023 05:52:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=36627731</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=36627731</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36627731</guid></item><item><title><![CDATA[New comment by 100ideas in "FDA approves first gene therapy treatment of Duchenne muscular dystrophy"]]></title><description><![CDATA[
<p>In principle, these costs could be 1000x less, much like the minicomputer -> Dell price shift.</p>
]]></description><pubDate>Fri, 23 Jun 2023 07:00:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=36443716</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=36443716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36443716</guid></item><item><title><![CDATA[New comment by 100ideas in "Very Short Introductions"]]></title><description><![CDATA[
<p>VSI Thermodynamics is my favorite so far; it covers the statistical mechanics of gases, the Boltzmann distribution, and the definition of temperature. If you don't know what these concepts are, I am confident you can get a good intro from the VSI Thermodynamics book.</p>
]]></description><pubDate>Tue, 03 May 2022 06:44:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=31245420</link><dc:creator>100ideas</dc:creator><comments>https://news.ycombinator.com/item?id=31245420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31245420</guid></item></channel></rss>