<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: monkeynotes</title><link>https://news.ycombinator.com/user?id=monkeynotes</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 04:26:33 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=monkeynotes" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by monkeynotes in "Scoring Show HN submissions for AI design patterns"]]></title><description><![CDATA[
<p>Even his blog has the Claude vibe to it.</p>
]]></description><pubDate>Wed, 22 Apr 2026 15:54:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47865449</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=47865449</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47865449</guid></item><item><title><![CDATA[New comment by monkeynotes in "Meta is axing 600 roles across its AI division"]]></title><description><![CDATA[
<p>How can you take a market worth billions when you are investing hundreds and hundreds of billions? Amazon overtook Walmart, and with cloud computing they have a solid business model, yet I doubt even a business that size could pay down that outlay. Are we really saying that by some miracle OpenAI or Anthropic are going to find a use case that would make companies like Amazon and Apple look like relatively small businesses?</p>
]]></description><pubDate>Thu, 23 Oct 2025 14:08:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45681988</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=45681988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45681988</guid></item><item><title><![CDATA[New comment by monkeynotes in "Meta is axing 600 roles across its AI division"]]></title><description><![CDATA[
<p>The planet has finite resources, not least land. And then there is the human psychology of hoarding resources.</p>
]]></description><pubDate>Wed, 22 Oct 2025 20:46:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45674927</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=45674927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45674927</guid></item><item><title><![CDATA[New comment by monkeynotes in "Criticisms of “The Body Keeps the Score”"]]></title><description><![CDATA[
<p>What if PTSD therapy focuses on accepting the things you can't control and sitting with the pain? That's how I work through anxiety and depression. I know it will never be gone; I don't set expectations of living without anxiety, I just try to sit with it and accept it.<p>Much of dealing with mental trauma is about acknowledging it and learning to live with it. There is no cure for PTSD. Even ketamine is short-acting, not a long-term solution, and indeed ketamine simply helps you sit with the suffering in a different light.</p>
]]></description><pubDate>Wed, 22 Oct 2025 19:52:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=45674281</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=45674281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45674281</guid></item><item><title><![CDATA[New comment by monkeynotes in "How do LLM's trade off lives between different categories?"]]></title><description><![CDATA[
<p>LLMs aren't trading off anything. They don't make a decision based on anything other than what they are guided to do in training or in the system prompt.<p>It's like saying Reddit trades off one comment for another: yes, an algorithm someone wrote does that.<p>This article seems to allude to the idea that there is a ghost in the machine, and while there is a lot of emergent behavior rather than hard-coded algorithms, it's not like the LLM has an opinion or some sort of psychology- or personality-based values.<p>They could change the system prompt, bias some training, and have completely different outcomes.</p>
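<p>A minimal sketch of that last point, assuming the OpenAI Python client; the model name and prompts are illustrative. The same weights, steered by nothing but a different system prompt, produce opposite "values":</p><pre><code># Same model, different system prompt, different "values".
# Assumes the OpenAI Python client (openai >= 1.0); model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(system_prompt: str, question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of model
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

question = "Should one person be sacrificed to save five?"
print(ask("You are a strict utilitarian.", question))   # steered one way
print(ask("You are a strict deontologist.", question))  # steered the other
</code></pre>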
]]></description><pubDate>Wed, 22 Oct 2025 19:12:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45673761</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=45673761</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45673761</guid></item><item><title><![CDATA[New comment by monkeynotes in "Meta is axing 600 roles across its AI division"]]></title><description><![CDATA[
<p>Hardly. They are burning money with TikSlop; they don't even know how to monetize it, they just YOLO'd the product to keep investors interested.<p>Even the porn industry can't seem to monetize AI, so I doubt OpenAI, which knows jack shit about this space, will be able to.<p>Fact is, generative AI is stupidly expensive to run, and I can't see mass adoption at subscription prices that actually allow them to break even.<p>I'm sure folks have seen the commentary on the cost of all this infrastructure. How can an LLM business model possibly pay for a nuclear power station, let alone the ongoing overheads of the rest of the infrastructure? The whole thing just seems like total fantasy.<p>I don't even think they believe they are going to reach AGI. And even if they did, and companies did start hiring AI agents instead of humans, then what? If consumers are out of work, who the hell is going to keep the economy going?<p>I just don't understand how smart people think this is going to work out at all.</p>
]]></description><pubDate>Wed, 22 Oct 2025 18:52:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=45673515</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=45673515</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45673515</guid></item><item><title><![CDATA[New comment by monkeynotes in "Lessons from That 1834 Landscape Gardening Guidebook"]]></title><description><![CDATA[
<p>Other Germans certainly have: <a href="https://www.reddit.com/r/de/comments/7ui20m/daten_sind_sch%C3%B6n_f%C3%BCrstp%C3%BCcklereis/" rel="nofollow">https://www.reddit.com/r/de/comments/7ui20m/daten_sind_sch%C...</a></p>
]]></description><pubDate>Wed, 11 Jun 2025 12:58:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44247141</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=44247141</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44247141</guid></item><item><title><![CDATA[New comment by monkeynotes in "ForeverVM: Run AI-generated code in stateful sandboxes that run forever"]]></title><description><![CDATA[
<p>What has AI got to do with this? It's in the headline but I don't see why.</p>
]]></description><pubDate>Wed, 26 Feb 2025 18:18:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=43186401</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=43186401</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43186401</guid></item><item><title><![CDATA[New comment by monkeynotes in "Does current AI represent a dead end?"]]></title><description><![CDATA[
<p>This isn't about whether LLMs are useful; it's about how useful they can become. We are trying to understand whether there is a path forward to transformative tech, or whether we are just limited to a very useful tool.<p>It's a valid conversation after ~3 years of anticipating that the world would be disrupted by this tech. So far it has not delivered.<p>Wikipedia did not change the world either; it's just a great tool that I use all the time.<p>As for software, it performs OK. I give up on it most of the time if I am trying to write a whole application. You have to acquire new skills: prompt engineering and feverish iteration. It's a frustrating game of whack-a-mole, and I find it quicker to write the code myself and just have the LLM help me with architecture ideas and bug bashing; it's also quite good at writing tests.<p>I'd rather know the code intimately so I can more quickly debug it than have an LLM write it and just trust that it did it well.</p>
]]></description><pubDate>Fri, 27 Dec 2024 15:42:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=42523056</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=42523056</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42523056</guid></item><item><title><![CDATA[New comment by monkeynotes in "Does current AI represent a dead end?"]]></title><description><![CDATA[
<p>I was so stupid when GPT-3 came out. I knew so little about token prediction that I argued with folks on here that it was capable of so many things I now understand just aren't compatible with the tech.<p>Over the past couple of years of educating myself a bit, whilst I am no expert, I have been anticipating a dead end. You can throw as much training at these things as you like, but all you'll get is more of the same with diminishing returns. Indeed, in some research the quality of responses gets worse as you train with more data.<p>I have yet to see anything transformative out of LLMs other than demos that prompt engineers worked night and day to make impressive. Those Sora videos took forever to put together and cost huge amounts of compute. No one is going to make a production-quality movie with an LLM and disrupt Hollywood.<p>I agree, an LLM is like an idiot savant, and whilst it's fantastic for everyone to have access to a savant, it doesn't change the world like the internet or the internal combustion engine did.<p>OpenAI is heading toward some difficult decisions: admit their consumer business model is dead and compete with Amazon for API business (good luck), become a research lab (give up on being a billion-dollar company), or get acquired and move on.</p>
]]></description><pubDate>Fri, 27 Dec 2024 15:34:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=42522949</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=42522949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42522949</guid></item><item><title><![CDATA[New comment by monkeynotes in "OpenAI O3 breakthrough high score on ARC-AGI-PUB"]]></title><description><![CDATA[
<p>So you have two AIs colluding against you now. Who is holding the AI assist to account? It's like asking who polices the police, except that we understand human psychology well enough to govern police with some predictability. We don't understand any truths about an AGI: there will always be the doubt that it is deceiving us, or making unchecked catastrophic assumptions that we trust because it's beyond our pay grade to understand.<p>There are so many ways we have misplaced confidence in what is essentially a system we don't fully understand. We just keep anthropomorphizing the results and thinking "yeah, this is how humans think, so we understand." We don't know for sure if that's true, or if we are being deceived, or making fundamental errors in judgement because we don't have enough data.</p>
]]></description><pubDate>Sun, 22 Dec 2024 15:43:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=42487007</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=42487007</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42487007</guid></item><item><title><![CDATA[New comment by monkeynotes in "OpenAI O3 breakthrough high score on ARC-AGI-PUB"]]></title><description><![CDATA[
<p>> We will have aligned AI helping us.<p>This is an assumption; how would you know you have alignment? AGI could appear to align, just as a psychopath studies and emulates well-behaved people. Imagine that at a scale we can't possibly understand. We don't really know how any of these emergent behaviors work; we just throw more data, compute, and fine-tuning at it, bake it, and then see.</p>
]]></description><pubDate>Sun, 22 Dec 2024 15:37:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=42486965</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=42486965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42486965</guid></item><item><title><![CDATA[New comment by monkeynotes in "OpenAI O3 breakthrough high score on ARC-AGI-PUB"]]></title><description><![CDATA[
<p>We don't know if a supreme deceiver is aligned at all. If a model can think a trillion moves of deception ahead, how do humans possibly stand a chance of scrutinizing anything with any confidence?</p>
]]></description><pubDate>Sun, 22 Dec 2024 15:35:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=42486955</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=42486955</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42486955</guid></item><item><title><![CDATA[New comment by monkeynotes in "OpenAI O3 breakthrough high score on ARC-AGI-PUB"]]></title><description><![CDATA[
<p>I thought we were talking about state-of-the-art agentic general AI that can plan ahead, reason, and execute. Something that can perform at human-level intelligence must be able to be as dangerous as humans. And no, I don't think it would be bad training data that we are <i>aware</i> of. My opinion is that we don't necessarily know what training data will result in bad behavior, and philosophically it is possible we will end up with a model that pretends it's dumber than it is and flunks tests intentionally, manipulating us into false confidence until it has enough freedom to use its agency to secure itself from human control.<p>I know that I don't know a lot, but all of this sounds at least hypothetically possible if we really believe AGI is possible.</p>
]]></description><pubDate>Sun, 22 Dec 2024 15:33:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=42486940</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=42486940</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42486940</guid></item><item><title><![CDATA[New comment by monkeynotes in "Grayjay Desktop App"]]></title><description><![CDATA[
<p>Last thing I want is even more ways to distract myself. I want an anti-algorithm or something to permanently ban me from addictive content.</p>
]]></description><pubDate>Sat, 21 Dec 2024 14:24:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=42479862</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=42479862</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42479862</guid></item><item><title><![CDATA[New comment by monkeynotes in "OpenAI O3 breakthrough high score on ARC-AGI-PUB"]]></title><description><![CDATA[
<p>> nobody knows why<p>But we do know the culpability rests on the shoulders of the humans who decided the tech was ready for work.</p>
]]></description><pubDate>Sat, 21 Dec 2024 14:19:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=42479833</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=42479833</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42479833</guid></item><item><title><![CDATA[New comment by monkeynotes in "OpenAI O3 breakthrough high score on ARC-AGI-PUB"]]></title><description><![CDATA[
<p>"AIs are a lot less risky to deploy for businesses than humans"
How do you know? LLMs can't even be properly scrutinized, while humans at least follow common psychology and patterns we've understood for thousands of years. This actually makes humans more predictable and manageable than you might think.<p>The wild part is that LLMs understand us way better than we understand them. The jump from GPT-3 to GPT-4 even surprised the engineers who built it. That should raise some red flags about how "predictable" these systems really are.<p>Think about it - we can't actually verify what these models are capable of or if they're being truthful, while they have this massive knowledge base about human behavior and psychology. That's a pretty concerning power imbalance. What looks like lower risk on the surface might be hiding much deeper uncertainties that we can't even detect, let alone control.</p>
]]></description><pubDate>Sat, 21 Dec 2024 14:17:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=42479823</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=42479823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42479823</guid></item><item><title><![CDATA[New comment by monkeynotes in "Mysterious New Jersey drone sightings prompt call for 'state of emergency'"]]></title><description><![CDATA[
<p>At 3:07 I see an airliner with FAA navigation lights, wings, and a fuselage. How are people not seeing that?</p>
]]></description><pubDate>Thu, 19 Dec 2024 19:35:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=42464934</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=42464934</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42464934</guid></item><item><title><![CDATA[New comment by monkeynotes in "AI PCs Aren't Good at AI: The CPU Beats the NPU"]]></title><description><![CDATA[
<p>I believe that low power = cheaper tokens = more affordable and sustainable; to me, this is what a consumer will benefit from overall. Power-hungry GPUs seem to sit better in research, commerce, and enterprise.<p>The Nvidia killer would be chips and memory affordable enough to run a good-enough model on a personal device, like a smartphone.<p>I think the future of this tech, if the general populace buys into LLMs being useful enough to pay a small premium for the device, is personal models that by their nature provide privacy. The amount of personal information folks unload on ChatGPT and the like is astounding. AI virtual-girlfriend apps frequently get fed the darkest kinks, vulnerable admissions, and maybe even incriminating conversations, according to Redditors who are addicted to these things. All of it is given away to no-name companies that stand up apps on the app store.<p>Google even states that if you turn Gemini history on, they will be able to review anything you talk about.<p>For complex token prediction that requires a bigger model, the personal model could fall back to consulting a cloud LLM, but privacy really needs to be ensured for consumers.<p>I don't believe we need cutting-edge reasoning or party-trick LLMs for day-to-day personal assistance, chat, or information discovery.</p>
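<p>A minimal sketch of the personal-model idea, assuming the Hugging Face transformers library; the model name is just an example of something small enough for consumer hardware, not a recommendation. The whole exchange runs on-device, so the prompt never leaves the machine:</p><pre><code># Runs entirely locally once the weights are downloaded;
# nothing is sent to a cloud service at inference time.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative small model
)

result = chat(
    "Draft a polite reply declining tomorrow's meeting.",
    max_new_tokens=100,
)
print(result[0]["generated_text"])
</code></pre><p>A bigger cloud model would only be consulted when the local one can't cope, and ideally only with explicit user consent.</p>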
]]></description><pubDate>Thu, 17 Oct 2024 14:01:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=41869772</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=41869772</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41869772</guid></item><item><title><![CDATA[New comment by monkeynotes in "Show HN: HTML for People"]]></title><description><![CDATA[
<p>Don't coding LLMs kind of fill this gap? I can't imagine anyone who isn't a pro wanting to spend time learning HTML when they can just describe what they want in plain text and get something good enough.</p>
]]></description><pubDate>Fri, 11 Oct 2024 16:22:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=41810858</link><dc:creator>monkeynotes</dc:creator><comments>https://news.ycombinator.com/item?id=41810858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41810858</guid></item></channel></rss>