<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: codexon</title><link>https://news.ycombinator.com/user?id=codexon</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 09 May 2026 03:08:29 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=codexon" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by codexon in "Meta’s renewed commitment to jemalloc"]]></title><description><![CDATA[
<p>This is similar to what I experienced when I tested mimalloc many years ago. If it was faster at all, it wasn't by much, and it had pretty bad worst cases.</p>
]]></description><pubDate>Mon, 16 Mar 2026 21:20:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47405045</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47405045</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47405045</guid></item><item><title><![CDATA[New comment by codexon in "Meta’s renewed commitment to jemalloc"]]></title><description><![CDATA[
<p>Mostly agreed. Going from the standard library to something like jemalloc or tcmalloc will give you around 5-10% wins, which can be significant, but the difference between those generic allocators seems small. I recently made a slab allocator for a custom data type and got a 100% speedup over malloc.</p>
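(For context, a minimal sketch of the slab-allocator idea referenced above: a single slab of fixed-size objects with an intrusive free list. The object size, slab size, and all names here are illustrative assumptions, not details from the comment; it omits multi-slab growth and thread safety.)

```c
#include <stddef.h>
#include <stdlib.h>

/* Minimal slab allocator sketch: one slab of fixed-size objects,
 * with free objects chained through an intrusive free list.
 * OBJ_SIZE and SLAB_OBJS are illustrative values. */
#define OBJ_SIZE  64    /* size of the custom data type (assumed) */
#define SLAB_OBJS 1024  /* objects per slab */

typedef struct slab {
    void *free_list;      /* head of intrusive free list */
    unsigned char *mem;   /* backing storage for all objects */
} slab_t;

static int slab_init(slab_t *s) {
    s->mem = malloc((size_t)OBJ_SIZE * SLAB_OBJS);
    if (!s->mem) return -1;
    s->free_list = NULL;
    /* Thread every object onto the free list; the first word of each
     * free object stores the pointer to the next free object. */
    for (size_t i = 0; i < SLAB_OBJS; i++) {
        void *obj = s->mem + i * OBJ_SIZE;
        *(void **)obj = s->free_list;
        s->free_list = obj;
    }
    return 0;
}

static void *slab_alloc(slab_t *s) {
    void *obj = s->free_list;
    if (obj) s->free_list = *(void **)obj;  /* pop head */
    return obj;                              /* NULL when slab exhausted */
}

static void slab_free(slab_t *s, void *obj) {
    *(void **)obj = s->free_list;            /* push back onto list */
    s->free_list = obj;
}
```

Alloc and free are both a couple of pointer moves with no size-class lookup or metadata search, which is where the speedup over a general-purpose malloc comes from for a single fixed-size type.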
]]></description><pubDate>Mon, 16 Mar 2026 21:17:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47404999</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47404999</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47404999</guid></item><item><title><![CDATA[New comment by codexon in "Meta’s renewed commitment to jemalloc"]]></title><description><![CDATA[
<p>> It was mainly pushed as easy to adopt, easy to use, easy to statically link, etc.<p>That is true of basically every single malloc replacement out there; it is not a uniquely defining feature.</p>
]]></description><pubDate>Mon, 16 Mar 2026 21:14:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47404963</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47404963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47404963</guid></item><item><title><![CDATA[New comment by codexon in "Meta’s renewed commitment to jemalloc"]]></title><description><![CDATA[
<p>I've benchmarked them every few years; they never seem to differ by more than a few percent, and jemalloc seems to fragment and leak the least for processes running for months.<p>Mimalloc claimed to be the fastest/best when it was released, and that didn't hold up to real-world testing, so I am not inclined to trust it now.</p>
]]></description><pubDate>Mon, 16 Mar 2026 19:51:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47403943</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47403943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47403943</guid></item><item><title><![CDATA[New comment by codexon in "Meta’s renewed commitment to jemalloc"]]></title><description><![CDATA[
<p>It still beat mimalloc when I checked 4-5 years ago.</p>
]]></description><pubDate>Mon, 16 Mar 2026 19:19:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47403491</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47403491</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47403491</guid></item><item><title><![CDATA[New comment by codexon in "Meta’s renewed commitment to jemalloc"]]></title><description><![CDATA[
<p>I've been using jemalloc for over 10 years and don't really see a need for it to be updated. It always holds up in benchmarks against whatever new flavor-of-the-month malloc comes out.<p>The last time I checked mimalloc, admittedly a while ago (probably 5 years), it was noticeably worse, and I saw a lot of people on its GitHub issues agreeing with me, so I just never looked at it again.</p>
]]></description><pubDate>Mon, 16 Mar 2026 19:01:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47403260</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47403260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47403260</guid></item><item><title><![CDATA[New comment by codexon in "AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says"]]></title><description><![CDATA[
<p>I'm not arguing about growth. I was addressing this statement, which seems to presume that AI has no effect if the job can't be removed.<p>> If AI can't do 100% of a job then you can't remove the job.</p>
]]></description><pubDate>Tue, 24 Feb 2026 01:19:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47131601</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47131601</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47131601</guid></item><item><title><![CDATA[New comment by codexon in "AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says"]]></title><description><![CDATA[
<p>Claude Code will prompt you and explain what practice fits a situation. It might not do it perfectly, but the foundations are there.</p>
]]></description><pubDate>Tue, 24 Feb 2026 00:58:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47131412</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47131412</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47131412</guid></item><item><title><![CDATA[New comment by codexon in "AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says"]]></title><description><![CDATA[
<p>AI is already aware of the best practices. It does not just blindly do what you ask of it in the simplest way.</p>
]]></description><pubDate>Tue, 24 Feb 2026 00:42:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47131271</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47131271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47131271</guid></item><item><title><![CDATA[New comment by codexon in "AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says"]]></title><description><![CDATA[
<p>I'm not sure that's entirely true. For most things, checking whether a solution is correct is much easier than implementing it (the page looks wrong, you can't log in, etc.).</p>
]]></description><pubDate>Tue, 24 Feb 2026 00:23:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47131072</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47131072</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47131072</guid></item><item><title><![CDATA[ChatGPT finds an error in Terence Tao's math research]]></title><description><![CDATA[
<p>https://www.erdosproblems.com/forum/thread/783<p>> Ah, GPT is right, there is a fatal sign error in the way I tried to handle small primes. There were no obvious fixes, so I ended up going back to Hildebrand's paper to see how he handled small primes, and it turned out that he could do it using a neat inequality ρ(u1)ρ(u2)≥ρ(u1u2) for the Dickman function (a consequence of the log-concavity of this function). Using this, and implementing the previous simplifications, I now have a repaired argument.
TerenceTao</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47131047">https://news.ycombinator.com/item?id=47131047</a></p>
<p>Points: 42</p>
<p># Comments: 8</p>
]]></description><pubDate>Tue, 24 Feb 2026 00:19:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47131047</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47131047</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47131047</guid></item><item><title><![CDATA[New comment by codexon in "AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says"]]></title><description><![CDATA[
<p>You can replace it with a much lower paid employee though.</p>
]]></description><pubDate>Tue, 24 Feb 2026 00:13:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47131001</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=47131001</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47131001</guid></item><item><title><![CDATA[New comment by codexon in "OpenClaw is changing my life"]]></title><description><![CDATA[
<p>I think some of it might be genuine. For people who don't code (like management), going from zero to being able to create a landing page that looks like it came from a big corporation is a miracle.<p>They are not able to comprehend that, for anything more complicated than that, the code might compile, but the logical errors and failures to implement the spec start piling up.</p>
]]></description><pubDate>Sun, 08 Feb 2026 21:45:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46938834</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=46938834</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46938834</guid></item><item><title><![CDATA[New comment by codexon in "Top AI models fail at >96% of tasks"]]></title><description><![CDATA[
<p>Check the link to the study. It has been updated for Opus 4.5.</p>
]]></description><pubDate>Sun, 08 Feb 2026 20:24:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46938158</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=46938158</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46938158</guid></item><item><title><![CDATA[New comment by codexon in "Top AI models fail at >96% of tasks"]]></title><description><![CDATA[
<p>This paper introduces a new benchmark composed of real remote-work tasks sourced from the freelancing site Upwork. The best commercial LLMs, like Opus, GPT, Gemini, and Grok, were tested.<p>Models released a few days ago, Opus 4.6 and GPT 5.3, haven't been tested yet, but given their performance on other micro-benchmarks, they will probably not fare much differently on this one.</p>
]]></description><pubDate>Sat, 07 Feb 2026 21:20:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46928173</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=46928173</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46928173</guid></item><item><title><![CDATA[Top AI models fail at >96% of tasks]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/">https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46928172">https://news.ycombinator.com/item?id=46928172</a></p>
<p>Points: 24</p>
<p># Comments: 10</p>
]]></description><pubDate>Sat, 07 Feb 2026 21:20:06 +0000</pubDate><link>https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=46928172</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46928172</guid></item><item><title><![CDATA[New comment by codexon in "Claude Opus 4.6"]]></title><description><![CDATA[
<p><a href="https://arxiv.org/abs/2510.26787" rel="nofollow">https://arxiv.org/abs/2510.26787</a><p>Testing the top LLMs on Upwork tasks, the highest-performing one succeeded at a rate of only 2.5%.<p>Can you imagine not being fired when you can only do 2.5% of all tasks?<p>The study is dated October 30th, so it's very recent.</p>
]]></description><pubDate>Sat, 07 Feb 2026 04:54:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46921425</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=46921425</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46921425</guid></item><item><title><![CDATA[New comment by codexon in "Claude Opus 4.6"]]></title><description><![CDATA[
<p>You don't need to be a genius or rocket scientist to write code, but LLMs don't even reach that bar for anything but the simplest things. Take a look at the video I posted earlier for an example.<p>And specialised models for programming HAVE plateaued.<p><a href="https://livebench.ai/#/?sort=Agentic+Coding+Average" rel="nofollow">https://livebench.ai/#/?sort=Agentic+Coding+Average</a><p>The jump from Claude 4.1 to 4.5 was only an 18% gain, and from 4.5 to 4.6 it even DECLINED. Codex 5.1 to 5.2 also shows a decline.</p>
]]></description><pubDate>Sat, 07 Feb 2026 01:34:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46920413</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=46920413</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46920413</guid></item><item><title><![CDATA[New comment by codexon in "Claude Opus 4.6"]]></title><description><![CDATA[
<p>Top AI researchers like Yann LeCun have said that LLMs are a dead end.<p>It seems to me that LLM performance is plateauing and no longer improving exponentially. The recent hubbub about rewriting a worse GCC for $20,000 is another example of overhype and regurgitated training data.<p>You don't know for sure whether it is going to "snow" (AI reaching general intelligence). Snow happens frequently; AI reaching general intelligence has never happened. If it ever happens, 99% of jobs are gone, and there is really nothing you can do to prepare for it other than maybe buy guns and ammo, and even that might not do anything against robotic soldiers.<p>People were worried about AI taking their jobs 60 years ago when perceptrons came out, and anyone who avoided a tech career because of that back then would have lost out majorly.</p>
]]></description><pubDate>Fri, 06 Feb 2026 19:42:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46917209</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=46917209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46917209</guid></item><item><title><![CDATA[New comment by codexon in "Claude Opus 4.6"]]></title><description><![CDATA[
<p>Even for coding, it seems to still make A LOT of mistakes.<p><a href="https://youtu.be/8brENzmq1pE?t=1544" rel="nofollow">https://youtu.be/8brENzmq1pE?t=1544</a><p>I feel like everyone here is counting chickens before they hatch, with all the doomsday predictions and extrapolation of LLM capability to infinity.<p>The people overhyping this seem to either be non-technical or just making landing pages.</p>
]]></description><pubDate>Fri, 06 Feb 2026 01:37:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46907914</link><dc:creator>codexon</dc:creator><comments>https://news.ycombinator.com/item?id=46907914</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46907914</guid></item></channel></rss>