<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: moolimon</title><link>https://news.ycombinator.com/user?id=moolimon</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 13:43:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=moolimon" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Chatgcc – A comedy C compiler which asks ChatGPT to generate assembly]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/Sawyer-Powell/chatgcc">https://github.com/Sawyer-Powell/chatgcc</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42988734">https://news.ycombinator.com/item?id=42988734</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
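<p>A minimal sketch of the idea in the title (not chatgcc's actual implementation, which may differ): ask a chat model to emit assembly for a C file, then let gcc assemble and link whatever comes back. The model name and prompt wording below are assumptions for illustration; it assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.</p>
<pre><code>
# Hypothetical sketch of an LLM-as-compiler pipeline; not the chatgcc source.
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def llm_compile(c_source_path: str, output_path: str = "a.out") -> None:
    c_code = open(c_source_path).read()
    resp = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[
            {"role": "system",
             "content": "Translate the following C program into x86-64 AT&amp;T assembly. "
                        "Reply with only the assembly, no commentary."},
            {"role": "user", "content": c_code},
        ],
    )
    asm = resp.choices[0].message.content
    with open("out.s", "w") as f:
        f.write(asm)
    # gcc assembles and links the model's output; nothing guarantees it is correct.
    subprocess.run(["gcc", "out.s", "-o", output_path], check=True)
</code></pre>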
]]></description><pubDate>Sun, 09 Feb 2025 05:37:46 +0000</pubDate><link>https://github.com/Sawyer-Powell/chatgcc</link><dc:creator>moolimon</dc:creator><comments>https://news.ycombinator.com/item?id=42988734</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42988734</guid></item><item><title><![CDATA[New comment by moolimon in "Efficient Reasoning with Hidden Thinking"]]></title><description><![CDATA[
<p>I feel like this is the obvious next step for chain of thought reasoning. I'm excited to see work on models that try to translate the intermediate thinking-space tokens back into language, letting us still see what's happening inside the "mind" of the LLM, if that process can even be mapped to language anymore. I also wonder what the implications of this research are for chain of thought reasoning with reinforcement learning, since from my understanding many of the reward mechanisms set up during reinforcement learning are built around the structure of the thought process.</p>
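<p>A toy "logit lens"-style sketch of what translating latent thinking states back to language could look like (my illustration under that assumption, not the paper's method): project each hidden state through the LM head and read off the nearest vocabulary token.</p>
<pre><code>
# Rough, lossy decoding of latent states into tokens; illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # any causal LM works for the illustration
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, output_hidden_states=True)

prompt = "Question: what is 17 * 24? Let's think step by step."
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# hidden_states[-1] has shape (batch, seq, hidden); lm_head maps hidden -> vocab logits.
latent = out.hidden_states[-1]
logits = model.lm_head(latent)
nearest = logits.argmax(dim=-1)[0]
print(tok.decode(nearest))  # a coarse "translation" of the latent states into words
</code></pre>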
]]></description><pubDate>Mon, 03 Feb 2025 16:22:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=42919823</link><dc:creator>moolimon</dc:creator><comments>https://news.ycombinator.com/item?id=42919823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42919823</guid></item><item><title><![CDATA[New comment by moolimon in "Recent results show that LLMs struggle with compositional tasks"]]></title><description><![CDATA[
<p>The main thesis here seems to be that LLMs behave like almost all other machine learning models, in that they are doing pattern matching on their input data and short-circuiting to a statistically likely result. Chain of thought reasoning is still bound by this basic property of reflexive pattern matching, except that the LLM is forced to go through a process of iteratively refining the domain it does matching on.<p>Chain of thought is interesting because you can combine it with reinforcement learning to get models to solve (seemingly) arbitrarily hard problems. This comes with the caveat that all RL needs some reward model: you need a clear definition of success, and some way of rewarding getting closer to success, to actually solve those problems.<p>Framing transformer-based models as pattern matchers makes all the sense in the world. Pattern matching is obviously vital to human problem-solving skills too. It's interesting to think about what structures human intelligence has that these models don't. For one, humans can integrate absolutely gargantuan amounts of information extremely efficiently.</p>
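<p>A minimal sketch of the kind of verifiable reward the comment alludes to, assuming a hypothetical answer format where completions end with "#### ANSWER": the chain of thought itself is not scored, only whether the final answer matches a known solution.</p>
<pre><code>
# Toy outcome-based reward for RL on chain of thought; format is an assumption.
import re

def reward(completion: str, ground_truth: str) -> float:
    """Return 1.0 if the model's final answer matches the ground truth, else 0.0."""
    match = re.search(r"####\s*(.+)", completion)  # assumes answers end with '#### ANSWER'
    if match is None:
        return 0.0
    return 1.0 if match.group(1).strip() == ground_truth.strip() else 0.0

# A correct final answer earns full reward regardless of the reasoning text.
print(reward("17 * 24 = 408\n#### 408", "408"))  # 1.0
print(reward("I think the answer is 400\n#### 400", "408"))  # 0.0
</code></pre>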
]]></description><pubDate>Sun, 02 Feb 2025 06:10:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=42906389</link><dc:creator>moolimon</dc:creator><comments>https://news.ycombinator.com/item?id=42906389</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42906389</guid></item></channel></rss>