<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: MakeAJiraTicket</title><link>https://news.ycombinator.com/user?id=MakeAJiraTicket</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 16:00:52 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=MakeAJiraTicket" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by MakeAJiraTicket in "The 'Hidden' Costs of Great Abstractions"]]></title><description><![CDATA[
<p>I'm not in your situation, but I've hit the bottom of despair and found the inner "fuck it, we ball" within me. I don't know what's an option for you, but I'm learning bartending, stocking shelves, and having irresponsible sex with the young women I work with in retail.<p>I enjoy software development and hopefully one day I will return to it, but I am but one tiny kernel of corn in such a mighty ocean of shit that I might as well ride the waves instead of fighting them. Maybe your calling is scamming Indians, or scamming Americans, or scamming Indian scammers. You aren't alone, but the attitude you have will never stop mattering. See if you want to go back to school, or start a tutoring program for kids. Motivation is for morons; do something.</p>
]]></description><pubDate>Mon, 04 May 2026 04:28:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=48004632</link><dc:creator>MakeAJiraTicket</dc:creator><comments>https://news.ycombinator.com/item?id=48004632</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48004632</guid></item><item><title><![CDATA[New comment by MakeAJiraTicket in "Can LLMs reason about math? The Subtraction Trick Test"]]></title><description><![CDATA[
<p>This is the expected result. "Do you see the connection?" is where it failed to actually make the connection. I don't know if pro mode is relevant, but these models require prodding from someone who already knows the result before they can reach it themselves.<p>They capture the gestalt of reasoning, and they can reason in patterns that we encoded with language, but they can't do genuine reasoning.</p>
]]></description><pubDate>Fri, 27 Feb 2026 19:05:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47184252</link><dc:creator>MakeAJiraTicket</dc:creator><comments>https://news.ycombinator.com/item?id=47184252</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47184252</guid></item><item><title><![CDATA[New comment by MakeAJiraTicket in "Can LLMs reason about math? The Subtraction Trick Test"]]></title><description><![CDATA[
<p>Thank you! Gemini has consistently been the best performer that I've tried, but it always requires the connection to be made explicit. The point of the test is that it is very low complexity and very targeted at what can be considered reasoning, and these models can't produce the connection without prodding.<p>In the ideal case of reasoning, you would simply present the methods and they'd bridge the gap independently when both are brought to the forefront of their context together, but that doesn't happen.</p>
]]></description><pubDate>Fri, 27 Feb 2026 04:50:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47176654</link><dc:creator>MakeAJiraTicket</dc:creator><comments>https://news.ycombinator.com/item?id=47176654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47176654</guid></item><item><title><![CDATA[Can LLMs reason about math? The Subtraction Trick Test]]></title><description><![CDATA[
<p>Article URL: <a href="https://haversine.substack.com/p/can-llms-reason-about-math-the-subtraction">https://haversine.substack.com/p/can-llms-reason-about-math-the-subtraction</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47176334">https://news.ycombinator.com/item?id=47176334</a></p>
<p>Points: 5</p>
<p># Comments: 5</p>
]]></description><pubDate>Fri, 27 Feb 2026 04:01:03 +0000</pubDate><link>https://haversine.substack.com/p/can-llms-reason-about-math-the-subtraction</link><dc:creator>MakeAJiraTicket</dc:creator><comments>https://news.ycombinator.com/item?id=47176334</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47176334</guid></item><item><title><![CDATA[New comment by MakeAJiraTicket in "Why LLMs cannot reach GenAI, but why it looked like they could"]]></title><description><![CDATA[
<p>The author explicitly states that there are other types of thought; this is about structured reasoning.</p>
]]></description><pubDate>Sat, 11 Oct 2025 19:36:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=45552022</link><dc:creator>MakeAJiraTicket</dc:creator><comments>https://news.ycombinator.com/item?id=45552022</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45552022</guid></item><item><title><![CDATA[New comment by MakeAJiraTicket in "LLMs are mortally terrified of exceptions"]]></title><description><![CDATA[
<p>LLM actions are divorced from that reward function; it's not something they consult or consider. A reward function doesn't make sense in that context.</p>
]]></description><pubDate>Fri, 10 Oct 2025 23:38:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=45545028</link><dc:creator>MakeAJiraTicket</dc:creator><comments>https://news.ycombinator.com/item?id=45545028</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45545028</guid></item><item><title><![CDATA[Why LLMs cannot reach GenAI, but why it looked like they could]]></title><description><![CDATA[
<p>Article URL: <a href="https://haversine.substack.com/p/fools-gold">https://haversine.substack.com/p/fools-gold</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45544414">https://news.ycombinator.com/item?id=45544414</a></p>
<p>Points: 1</p>
<p># Comments: 2</p>
]]></description><pubDate>Fri, 10 Oct 2025 22:12:41 +0000</pubDate><link>https://haversine.substack.com/p/fools-gold</link><dc:creator>MakeAJiraTicket</dc:creator><comments>https://news.ycombinator.com/item?id=45544414</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45544414</guid></item><item><title><![CDATA[New comment by MakeAJiraTicket in "Figure 03, our 3rd generation humanoid robot"]]></title><description><![CDATA[
<p>The Telexistence demo isn't so bad, but I have no idea why we're trying to make humanoid robots generally. The human shape sucks at most things, and we already have people treating Roombas and GPT like their boyfriends or pets...</p>
]]></description><pubDate>Fri, 10 Oct 2025 04:04:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=45535266</link><dc:creator>MakeAJiraTicket</dc:creator><comments>https://news.ycombinator.com/item?id=45535266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45535266</guid></item><item><title><![CDATA[New comment by MakeAJiraTicket in "LLMs are mortally terrified of exceptions"]]></title><description><![CDATA[
<p>I have a function that compares letters to numbers for the Major System. It's about 40 lines of code, and Copilot keeps trying to add "guard rails" for "future proofing", as if we're adding more numbers or letters in the future.<p>It's so annoying.</p>
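<p>For context, a minimal hypothetical sketch of what such a function looks like (this is illustrative, not the actual code; the letter groupings follow the common Major System scheme). The point is that the alphabet and the digits 0-9 are fixed, so there is nothing to "future proof":</p>

```python
# Illustrative Major System lookup: each consonant letter maps to a
# fixed digit. The mapping is static by definition, which is why
# "guard rails" for hypothetical new letters or digits are pointless.
MAJOR_MAP = {
    0: "szc", 1: "td", 2: "n", 3: "m", 4: "r",
    5: "l", 6: "j", 7: "kgq", 8: "fv", 9: "pb",
}

def letter_to_digit(letter):
    """Return the Major System digit for a letter, or None if unmapped."""
    for digit, letters in MAJOR_MAP.items():
        if letter.lower() in letters:
            return digit
    return None
```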
]]></description><pubDate>Fri, 10 Oct 2025 03:52:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45535233</link><dc:creator>MakeAJiraTicket</dc:creator><comments>https://news.ycombinator.com/item?id=45535233</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45535233</guid></item><item><title><![CDATA[New comment by MakeAJiraTicket in "LLMs are mortally terrified of exceptions"]]></title><description><![CDATA[
<p>Defensive programming is considered "correct" by the people doing the reinforcing, and it's a huge part of the corpus that LLMs are trained on. For example, most Python code doesn't do manual index management, so when a model sees manual index management it is much more likely to freak out and hallucinate a bug. It will randomly promote "silent failure" even when a "silent failure" results in things like infinite loops, because it was trained on a lot of tutorial Python code and "industry standard" gets more reinforcement during training.<p>These aren't operating on reward functions, because there's no internal model to reward. It's word prediction; there's no intelligence.</p>
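<p>To illustrate the manual-index point with a made-up example: the loop below is perfectly correct, but because it uses an explicit index instead of the idiomatic <i>for x in seq</i> pattern that dominates tutorial Python, it is exactly the kind of code LLM reviewers tend to flag as buggy:</p>

```python
# A correct loop with manual index management (hypothetical example).
# Comparing values[i] to values[i - 1] requires an explicit index, so
# the non-idiomatic style here is deliberate, not a bug.
def dedupe_sorted(values):
    """Collapse adjacent duplicates in a sorted list into one copy each."""
    if not values:
        return []
    out = [values[0]]
    i = 1
    while i < len(values):
        if values[i] != values[i - 1]:
            out.append(values[i])
        i += 1
    return out
```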
]]></description><pubDate>Fri, 10 Oct 2025 03:49:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45535220</link><dc:creator>MakeAJiraTicket</dc:creator><comments>https://news.ycombinator.com/item?id=45535220</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45535220</guid></item></channel></rss>