<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: resident423</title><link>https://news.ycombinator.com/user?id=resident423</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 26 Apr 2026 11:53:13 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=resident423" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by resident423 in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>No, but I'm interested to know what it is?</p>
]]></description><pubDate>Sun, 26 Apr 2026 04:54:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47907484</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47907484</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47907484</guid></item><item><title><![CDATA[New comment by resident423 in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>I haven't used stable diffusion enough to have a strong opinion on it. But my thinking is LLMs have only recently started contributing novel solutions to problems, so maybe there is some threshold above which there's less sloppy remixing of training data and more ability to form novel insights, and image generators haven't crossed this line yet.</p>
]]></description><pubDate>Sun, 26 Apr 2026 04:23:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47907323</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47907323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47907323</guid></item><item><title><![CDATA[New comment by resident423 in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>Remember when people thought solving Erdos problems required intelligence? Is there anything an LLM could ever do that would count as intelligence? Surely the trend has to break at some point; if so, what would be the thing that crosses the line into real intelligence?</p>
]]></description><pubDate>Sun, 26 Apr 2026 04:10:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47907260</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47907260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47907260</guid></item><item><title><![CDATA[New comment by resident423 in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>Solving open math problems is strong evidence of intelligence so there's not really any need for rationalization? I don't understand why intelligence would require intent or motive? Isn't intent just the behaviour of making a specific thing happen rather than other things?</p>
]]></description><pubDate>Sun, 26 Apr 2026 04:05:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47907237</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47907237</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47907237</guid></item><item><title><![CDATA[New comment by resident423 in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>There's probably a limit to how intelligent something can be with no long term memory, but solving Erdos problems in 80 minutes is clearly not above it, and I think the true limit is probably much higher than that.</p>
]]></description><pubDate>Sun, 26 Apr 2026 03:48:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47907160</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47907160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47907160</guid></item><item><title><![CDATA[New comment by resident423 in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>What you're describing sounds more like the model lacking awareness than lacking intelligence? Why does it need to know it solved the problem to be intelligent?</p>
]]></description><pubDate>Sun, 26 Apr 2026 03:10:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47906932</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47906932</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47906932</guid></item><item><title><![CDATA[New comment by resident423 in "Amateur armed with ChatGPT solves an Erdős problem"]]></title><description><![CDATA[
<p>I wonder if the rationalizations people come up with for why this isn't real intelligence will be as creative as ChatGPT's solution.</p>
]]></description><pubDate>Sun, 26 Apr 2026 02:50:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47906812</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47906812</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47906812</guid></item><item><title><![CDATA[New comment by resident423 in "Palantir employees are starting to wonder if they're the bad guys"]]></title><description><![CDATA[
<p>Can both not be true?</p>
]]></description><pubDate>Fri, 24 Apr 2026 03:43:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47885249</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47885249</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47885249</guid></item><item><title><![CDATA[New comment by resident423 in "GPT-5.5"]]></title><description><![CDATA[
<p>Why hopefully?</p>
]]></description><pubDate>Fri, 24 Apr 2026 03:16:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47885065</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47885065</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47885065</guid></item><item><title><![CDATA[New comment by resident423 in "Meta to start capturing employee mouse movements, keystrokes for AI training"]]></title><description><![CDATA[
<p>There are also large organizations at Meta focussed on the optimal distribution of scam ads to the elderly.<p><a href="https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/" rel="nofollow">https://www.reuters.com/investigations/meta-is-earning-fortu...</a></p>
]]></description><pubDate>Tue, 21 Apr 2026 23:45:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47856461</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47856461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47856461</guid></item><item><title><![CDATA[New comment by resident423 in "Meta to start capturing employee mouse movements, keystrokes for AI training"]]></title><description><![CDATA[
<p>Meta employees are not typically known for their deep concerns about privacy.</p>
]]></description><pubDate>Tue, 21 Apr 2026 23:42:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47856416</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47856416</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47856416</guid></item><item><title><![CDATA[New comment by resident423 in "Notion leaks email addresses of all editors of any public page"]]></title><description><![CDATA[
<p>Companies will only care if they have a reason to. People need to start caring about their own privacy and security, and be willing to change products if they have to. We can blame companies and insist they start caring, but that makes no difference to them: people complain for a while, then move on, and the earnings remain unchanged.</p>
]]></description><pubDate>Sun, 19 Apr 2026 22:27:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47828200</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47828200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47828200</guid></item><item><title><![CDATA[New comment by resident423 in "College instructor turns to typewriters to curb AI-written work"]]></title><description><![CDATA[
<p>I think AI writing becoming better means it will appear more human, rather than like better AI writing. I think the difference in feeling is similar to early attempts to generate faces with AI, which also seemed weirdly wrong in ways that were hard to describe, but now it's very hard to tell them apart.</p>
]]></description><pubDate>Sun, 19 Apr 2026 02:32:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47821345</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47821345</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47821345</guid></item><item><title><![CDATA[New comment by resident423 in "College instructor turns to typewriters to curb AI-written work"]]></title><description><![CDATA[
<p>Is there really much point though? I think AI will keep improving, and there will be more and more incentive to use an AI which costs $20/month instead of a human writer who costs $30/hour. If someone wants an article written, and people like the AI article as much as the human one, what stops everyone from using AI?<p>The only answer I can think of is that people must believe AI writing will stay below human level for many years, but if so, why?</p>
]]></description><pubDate>Sun, 19 Apr 2026 02:00:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47821210</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47821210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47821210</guid></item><item><title><![CDATA[New comment by resident423 in "The future of everything is lies, I guess: Where do we go from here?"]]></title><description><![CDATA[
<p>A year ago it couldn't do tasks like this at all; what makes you believe it can progress only this far but no further?<p>Random number generators can't solve open math problems, but it looks like AI agents can? [1]<p>[1] <a href="https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf" rel="nofollow">https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc...</a></p>
]]></description><pubDate>Thu, 16 Apr 2026 22:52:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47800532</link><dc:creator>resident423</dc:creator><comments>https://news.ycombinator.com/item?id=47800532</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47800532</guid></item></channel></rss>