<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ComplexSystems</title><link>https://news.ycombinator.com/user?id=ComplexSystems</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 13:59:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ComplexSystems" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ComplexSystems in "Maybe you shouldn't install new software for a bit"]]></title><description><![CDATA[
<p>While I am sure FreeBSD is more secure than your average Linux distro, I sure hope they are using these new AI models to harden everything.</p>
]]></description><pubDate>Fri, 08 May 2026 05:09:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=48058816</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=48058816</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48058816</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Softmax, can you derive the Jacobian? And should you care?"]]></title><description><![CDATA[
<p>Good article, but<p>"We take the exponential of each input and normalize by the sum of all exponentials. This transforms a vector of arbitrary real numbers into values between 0 and 1 that sum to 1; technically this is a pseudo-probability distribution (the values aren't derived from a probability space), but it's close enough to a probability distribution that for practical purposes it works just fine."<p>Why is this a "pseudo-probability distribution?"</p>
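<p>The quoted definition is easy to check numerically. A minimal sketch in plain Python (nothing assumed beyond the definition itself; subtracting the max is just a standard numerical-stability trick):</p>

```python
import math

def softmax(xs):
    """Exponentiate each input and normalize by the sum of exponentials.

    Subtracting the max first avoids overflow for large inputs;
    it does not change the result.
    """
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
# Each output lies in (0, 1), the outputs sum to 1, and order is preserved.
```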
]]></description><pubDate>Fri, 01 May 2026 11:00:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47973304</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=47973304</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47973304</guid></item><item><title><![CDATA[New comment by ComplexSystems in "GPT-5.5"]]></title><description><![CDATA[
<p>It seems like some kind of technique is needed that maximizes information transfer between huge LLM-generated codebases and a human trying to make sense of them. Something beyond just deep-diving into the codebase with no documentation.</p>
]]></description><pubDate>Thu, 23 Apr 2026 22:08:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47882777</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=47882777</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47882777</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Not all elementary functions can be expressed with exp-minus-log"]]></title><description><![CDATA[
<p>Yes it does; you can build the absolute value as sqrt(x²), and sqrt(x) and x² are both constructible using eml.</p>
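<p>The identity is easy to verify numerically; a quick sketch (the function name is just illustrative):</p>

```python
import math

def abs_via_sqrt(x):
    # |x| = sqrt(x^2): squaring discards the sign, and the principal
    # square root recovers the magnitude.
    return math.sqrt(x * x)

for x in (-3.5, 0.0, 2.0):
    assert abs_via_sqrt(x) == abs(x)
```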
]]></description><pubDate>Thu, 16 Apr 2026 17:30:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47796737</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=47796737</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47796737</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>This line of reasoning makes no sense when the AI can just be given access to a fuzzer. I would guess it did have access to one when putting together some of these vulnerabilities.</p>
]]></description><pubDate>Tue, 07 Apr 2026 22:38:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682256</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=47682256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682256</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Embracing Bayesian methods in clinical trials"]]></title><description><![CDATA[
<p>> So the data cannot possibly tell you anything about how likely is the observed outcome, because the observed outcome is the only outcome that you observe.<p>This could also be viewed as supporting the Bayesian perspective, where the observed data are <i>not</i> viewed as random variables - they are fixed. This is because, as you say, the observed outcome is the only outcome that you observe. It is the classical setting, in comparison, where we instead do our analysis by treating the sample as a random variable, placing the counterfactual on other non-observed values ("what if I had drawn a different sample?"), even though we didn't. Bayesian methods treat the data as gospel truth, and place the counterfactual on the different parameters ("what if the population were different?"), even though it isn't.<p>The other criticism you have is<p>> The problem with this approach is that we can only observe ONE level of treatment effectiveness, i.e., the level of treatment effectiveness that the treatment actually possesses. All other possible levels of effectiveness are entirely hypothetical.<p>This is true of both Bayesian and classical methods. We build models that would explain how different hypothetical levels of effectiveness would affect what data we should expect to see - that is the whole point. Classical methods also involve exploring scenarios in which purely hypothetical values of the parameter may be potentially true, and characterizing counterfactual samples that could have been drawn from them, even though in real life they couldn't have been.</p>
]]></description><pubDate>Sat, 28 Mar 2026 16:28:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47556064</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=47556064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47556064</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Embracing Bayesian methods in clinical trials"]]></title><description><![CDATA[
<p>I found it surprising that this article persistently did not capitalize the word "Bayesian." Is this a new trend or something?</p>
]]></description><pubDate>Sat, 28 Mar 2026 16:06:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47555848</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=47555848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47555848</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Woxi: Wolfram Mathematica Reimplementation in Rust"]]></title><description><![CDATA[
<p>I certainly don't. If a software developer has found a way to use these tools that works well for them and produces good results, that's a good thing.</p>
]]></description><pubDate>Sat, 28 Feb 2026 16:56:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47197568</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=47197568</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47197568</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Anthropic gives Opus 3 exit interview, "retirement" blog"]]></title><description><![CDATA[
<p>They are mathematical models of what human beings would say. That's it.</p>
]]></description><pubDate>Fri, 27 Feb 2026 01:25:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47175065</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=47175065</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47175065</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Ggml.ai joins Hugging Face to ensure the long-term progress of Local AI"]]></title><description><![CDATA[
<p>> the code is not public, so we can't know.<p>I feel like you're making this statement in bad faith rather than honestly believing the developers of the forum software here have built in a feature to pin simonw's comments to the top.</p>
]]></description><pubDate>Fri, 20 Feb 2026 20:19:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47093339</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=47093339</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47093339</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Claude Opus 4.6"]]></title><description><![CDATA[
<p>Do you ever replace ChatGPT models with cheaper, distilled, quantized, etc ones to save cost?</p>
]]></description><pubDate>Thu, 05 Feb 2026 21:22:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46905541</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=46905541</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46905541</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Claude Code's new hidden feature: Swarms"]]></title><description><![CDATA[
<p>How much does this setup cost? I don't think a regular Claude Max subscription makes this possible.</p>
]]></description><pubDate>Sun, 25 Jan 2026 09:53:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46752459</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=46752459</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46752459</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Erdos 281 solved with ChatGPT 5.2 Pro"]]></title><description><![CDATA[
<p>The model doesn't know what its training data is, nor does it know what sequences of tokens appeared verbatim in there, so this kind of thing doesn't work.</p>
]]></description><pubDate>Sun, 18 Jan 2026 16:37:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46669229</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=46669229</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46669229</guid></item><item><title><![CDATA[New comment by ComplexSystems in "AI generated music barred from Bandcamp"]]></title><description><![CDATA[
<p>It's really only about the flooding-the-marketplace part, not about the extracting-value-without-their-consent part. The current set of GenAI music models may involve training a black-box model on a huge data set of scraped music, but would the net effect on artists' economic situations be any different if an alternate method led to the same result? Suppose some huge AI corporation hired a bunch of musicians, music-theory Ph.D.s, Grammy-winning engineers, signal-processing gurus, whatever, and hand-built a totally explainable model, from first principles, that required no external training data. So now they can crowd artists out of the marketplace that way instead. I don't think it would be much better.</p>
]]></description><pubDate>Wed, 14 Jan 2026 04:40:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46612419</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=46612419</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46612419</guid></item><item><title><![CDATA[New comment by ComplexSystems in "“Erdos problem #728 was solved more or less autonomously by AI”"]]></title><description><![CDATA[
<p>If this isn't AGI, what is? It seems unavoidable that an AI which can prove complex mathematical theorems would lead to something like AGI very quickly.</p>
]]></description><pubDate>Sat, 10 Jan 2026 00:02:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46561136</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=46561136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46561136</guid></item><item><title><![CDATA[New comment by ComplexSystems in "AI coding assistants are getting worse?"]]></title><description><![CDATA[
<p>Sometimes you will tell agents (or real devs) to do things they can't actually do because of some mistake on your end. Having it silently change things and cover the problem up is probably not the best way to handle that situation.</p>
]]></description><pubDate>Fri, 09 Jan 2026 08:53:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46551541</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=46551541</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46551541</guid></item><item><title><![CDATA[New comment by ComplexSystems in "AI coding assistants are getting worse?"]]></title><description><![CDATA[
<p>I don't think this is odd at all. This situation will arise literally hundreds of times when coding some project. You absolutely want the agent - or any dev, whether real or AI - to recognize these situations and let you know when interfaces or data formats aren't what you expect them to be. You don't want them to just silently make something up without explaining somewhere that there's an issue with the file they are trying to parse.</p>
]]></description><pubDate>Fri, 09 Jan 2026 04:56:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46550201</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=46550201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46550201</guid></item><item><title><![CDATA[New comment by ComplexSystems in "The unreasonable effectiveness of the Fourier transform"]]></title><description><![CDATA[
<p>Signals can be approximately frequency- and time-bandlimited, though, meaning the set of points where the magnitude exceeds any given epsilon is bounded in both domains. A Gaussian function is one example.</p>
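<p>A concrete sketch of the Gaussian case, assuming the standard identity that exp(-&#960;t&#178;) is its own Fourier transform, so decay in time mirrors decay in frequency exactly:</p>

```python
import math

# g(t) = exp(-pi * t**2) equals its own Fourier transform (a standard
# identity), so the same epsilon-bound holds in both domains.
def g(t):
    return math.exp(-math.pi * t * t)

eps = 1e-6
# Solve exp(-pi * t**2) = eps for the point where |g| drops below eps.
bound = math.sqrt(-math.log(eps) / math.pi)
# Outside [-bound, bound] the magnitude is below eps in BOTH time and
# frequency: g is approximately time- and band-limited simultaneously.
assert g(bound + 0.01) < eps
assert g(bound - 0.01) > eps
```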
]]></description><pubDate>Fri, 09 Jan 2026 04:50:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46550176</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=46550176</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46550176</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Eat Real Food"]]></title><description><![CDATA[
<p>"Isn't that basically what we've been doing with dietary guidelines since the 80s?"<p>If by this you mean to ask whether the new guidelines are the same as the previous ones from the 80s, then no. The new pyramid is different: it makes different recommendations (more meat, for instance, and less wheat and grains). The website linked to explicitly shows how it differs from the previous "food pyramid" guidelines.</p>
]]></description><pubDate>Wed, 07 Jan 2026 21:27:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46533137</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=46533137</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46533137</guid></item><item><title><![CDATA[New comment by ComplexSystems in "Mamdani Will Be Sworn in at Abandoned Subway Station Beneath City Hall"]]></title><description><![CDATA[
<p><a href="https://archive.is/20251230100610/https://www.nytimes.com/2025/12/29/nyregion/mamdani-subway-sworn-in-mayor.html" rel="nofollow">https://archive.is/20251230100610/https://www.nytimes.com/20...</a></p>
]]></description><pubDate>Wed, 31 Dec 2025 01:08:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46440178</link><dc:creator>ComplexSystems</dc:creator><comments>https://news.ycombinator.com/item?id=46440178</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46440178</guid></item></channel></rss>