<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: robbrown451</title><link>https://news.ycombinator.com/user?id=robbrown451</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 22:43:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=robbrown451" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by robbrown451 in "Claude March 2026 usage promotion"]]></title><description><![CDATA[
<p>I'm in their time zone, and was just planning to stop with my bad habit of staying up working till 4 am and waking up at noon.<p>So much for that plan.</p>
]]></description><pubDate>Sun, 15 Mar 2026 01:38:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47383354</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=47383354</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47383354</guid></item><item><title><![CDATA[New comment by robbrown451 in "Preparing for AI's economic impact: exploring policy responses"]]></title><description><![CDATA[
<p>For most things they don't need to be "human equivalent." I'd be willing to bet the current crop of robots we're seeing could do most tasks like vacuuming, cooking, picking up clutter, folding laundry and putting it away, making beds, touch-up painting, gardening, etc. It seems to be getting better very fast. And if mechanical tendons break, you replace them. Big deal. You don't even need a person to do the repair.</p>
]]></description><pubDate>Wed, 15 Oct 2025 03:15:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45587731</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=45587731</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45587731</guid></item><item><title><![CDATA[New comment by robbrown451 in "Preparing for AI's economic impact: exploring policy responses"]]></title><description><![CDATA[
<p>I'm having trouble understanding what they want to "upskill" those people to do.<p>What skills won't be replaced? The only ones I can think of either have a large physical component, or are only doable by a tiny fraction of the current workforce.<p>As for the ones with a physical component (plumbers being the most cited), the cognitive parts of the job (the "skilled" part of skilled labor) can be replaced by having the person just follow directions demonstrated onscreen for them. And of course, the robots aren't far behind, since the main hard part of making a capable robot is the AI part.</p>
]]></description><pubDate>Wed, 15 Oct 2025 01:10:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45586960</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=45586960</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45586960</guid></item><item><title><![CDATA[New comment by robbrown451 in "Developers does not care about poisoning attacks in LLM code assistants"]]></title><description><![CDATA[
<p>It was cited from a Newsweek article, and Cliff said this about it later: "Of my many mistakes, flubs, and howlers, few have been as public as my 1995 howler ... Now, whenever I think I know what's happening, I temper my thoughts: Might be wrong, Cliff ..."<p>You may be right about humans biasing toward the easiest-to-obtain information, but that doesn't say "don't use AI assistance," it says "use care when using AI assistance."<p>Also, Cliff wasn't saying the information was easier to use, since in his case it was actually harder to use than just looking it up in a printed encyclopedia or the like. But none of the problems he mentioned were inherent problems with the internet; they arose because it was a brand new medium still working out its kinks. AI may well be harder to use for coding right now, at least for many use cases. However, a look at the bigger picture strongly suggests it is the future, just as a look at the bigger picture in 1995 would have suggested that the internet was the future, at least for answering questions like "when was the Battle of Trafalgar?"<p>This is consistent with my horse/car analogy: the car wasn't the problem, the problem was people who assumed cars were going to keep themselves on the road like a horse would naturally do. You can get a huge gain, but you have to be smart about how you use it.</p>
]]></description><pubDate>Wed, 22 May 2024 19:36:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=40445333</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=40445333</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40445333</guid></item><item><title><![CDATA[New comment by robbrown451 in "Developers does not care about poisoning attacks in LLM code assistants"]]></title><description><![CDATA[
<p>I'd suggest enjoying that vindication while it lasts.<p>From my perspective, your perspective is like a horse-and-buggy driver feeling vindicated when a "horseless carriage" driver accidentally drives one into a tree. The cars will get easier to drive and safer in crashes, and the drivers will learn to pay attention in certain ways they previously didn't have to.<p>Will there still be occasional problems? Sure, but that doesn't mean that tying your career to horses would have been a wise move. Same here.<p>(Also, this article is about "poisoned ChatGPT-like tools," which says very little about the tools that most developers are actually using.)<p>I'm always reminded of this:
"Logged onto the World Wide Web, I hunt for the date of the Battle of Trafalgar. Hundreds of files show up, and it takes 15 minutes to unravel them—one's a biography written by an eighth grader, the second is a computer game that doesn't work and the third is an image of a London monument. None answers my question, and my search is periodically interrupted by messages like, "Too many connections, try again later.""  -- Cliff Stoll, 1995</p>
]]></description><pubDate>Wed, 22 May 2024 18:58:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=40444770</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=40444770</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40444770</guid></item><item><title><![CDATA[New comment by robbrown451 in "Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]"]]></title><description><![CDATA[
<p>Imagine if a regular for-profit startup did that. It gets $60 million in initial funding, and later its valuation goes up to $100 billion. Of course they can't just give the $60 million back.<p>This is different and has a lot of complications that are basically things we've never seen before, but still, just giving the $60 million back doesn't make any sense at all. They would never have achieved what they've achieved without his $60 million.</p>
]]></description><pubDate>Sat, 02 Mar 2024 01:40:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=39569018</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=39569018</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39569018</guid></item><item><title><![CDATA[New comment by robbrown451 in "Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]"]]></title><description><![CDATA[
<p>I don't see how opening it makes it safer. It's very different from security matters, where some "white hat" can find a security hole, and they can then fix it so instances don't get hacked. Sure, a bad person could run the software without fixing the bug, but that isn't going to harm anyone but themselves.<p>That isn't the case here. If some well-meaning person discovers a way that you can create a pandemic-causing superbug, they can't just "fix" the AI to make that impossible. Not if it is open source. Very different thing.</p>
]]></description><pubDate>Sat, 02 Mar 2024 01:36:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=39568995</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=39568995</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39568995</guid></item><item><title><![CDATA[New comment by robbrown451 in "Stable pseudonyms create a more civil environment than real names: study (2021)"]]></title><description><![CDATA[
<p>There are differences, but there are also similarities, and I think the similarities are more important. Both when you're driving and when you're interacting online, you have conflicting agendas, which could be as simple as: when driving, you're trying to get there as soon as possible, and on an online message board, you're either trying to get your point accepted or trying to make yourself look good and smart.<p>The point, though, is that if you're not going to have to interact with these people in the future, and there are otherwise no repercussions for being nasty, you're more likely to be nasty.</p>
]]></description><pubDate>Thu, 15 Feb 2024 21:50:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=39389514</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=39389514</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39389514</guid></item><item><title><![CDATA[New comment by robbrown451 in "Show HN: A platform for remote piano lessons based on the Web MIDI API"]]></title><description><![CDATA[
<p>Any digital piano functions as a MIDI controller, and many of them have weighted keys. And there are a few "pure" MIDI controllers that have 88 weighted keys, such as the M-Audio Hammer 88 or the StudioLogic SL88.</p>
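<p>Since any keyboard that speaks MIDI can feed a browser-based lesson platform, here is a minimal sketch of how its note events might be decoded via the Web MIDI API. The helper name and the decoding are my own illustration, not code from the platform in question.</p>

```javascript
// Decode a Web MIDI "midimessage" data array from a keyboard controller.
// MIDI channel voice messages: status 0x90-0x9F = note on, 0x80-0x8F = note off.
const NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

function describeMidiMessage(data) {
  const [status, note, velocity] = data;
  const kind = status & 0xf0; // high nibble is the message type
  // Note 60 is middle C ("C4"); octave = floor(note / 12) - 1.
  const name = NOTE_NAMES[note % 12] + (Math.floor(note / 12) - 1);
  if (kind === 0x90 && velocity > 0) return `note on ${name} velocity ${velocity}`;
  // A note-on with velocity 0 is conventionally treated as a note off.
  if (kind === 0x80 || (kind === 0x90 && velocity === 0)) return `note off ${name}`;
  return "other message";
}

// In a browser, this would be wired up roughly like:
// navigator.requestMIDIAccess().then((access) => {
//   for (const input of access.inputs.values()) {
//     input.onmidimessage = (e) => console.log(describeMidiMessage(e.data));
//   }
// });
```

<p>The decoding itself is pure and runs anywhere; only the <code>requestMIDIAccess</code> wiring requires a browser.</p>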
]]></description><pubDate>Sun, 11 Feb 2024 03:28:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=39332429</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=39332429</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39332429</guid></item><item><title><![CDATA[New comment by robbrown451 in "I disagree with Geoff Hinton regarding "glorified autocomplete""]]></title><description><![CDATA[
<p>"This is not a representation of the real world, of facts, but simply a product of its training."<p>Tell me how that doesn't apply to the human brain as well.</p>
]]></description><pubDate>Sun, 19 Nov 2023 22:23:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=38339014</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=38339014</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38339014</guid></item><item><title><![CDATA[New comment by robbrown451 in "I disagree with Geoff Hinton regarding "glorified autocomplete""]]></title><description><![CDATA[
<p>"LLMs do not directly model the world; they train on and model what people write about the world"<p>This is true. But human brains don't directly model the world either; they form an internal model based on what comes in through their senses. Humans have the advantage of being more "multi-modal," but that doesn't mean they get more information or better information.<p>Much of my "modeling of the world" comes from the fact that I've read a lot of text. But of course I haven't read even a tiny fraction of what GPT-4 has.<p>That said, LLMs can already train on images, as GPT-4V does. And the image generators do this as well; it's just a matter of time before the two are fully integrated. Later we'll see a lot more training on video and sound, all integrated into a single model.</p>
]]></description><pubDate>Sun, 19 Nov 2023 06:11:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=38329648</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=38329648</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38329648</guid></item><item><title><![CDATA[New comment by robbrown451 in "I disagree with Geoff Hinton regarding "glorified autocomplete""]]></title><description><![CDATA[
<p>It certainly doesn't "look up" text data it has seen before. That shows a fundamental misunderstanding of how this stuff works. That's exactly why I use the example above of Alpha Zero and how it learns to play Go, since that demonstrates very clearly that it's not just looking things up.<p>And I have no idea what you mean by saying that it has no concept of true or false. Even the simplest computer programs have a concept of true or false, that's kind of the simplest data type, a boolean. Large language models have a much more sophisticated concept of true and false that has a lot more nuance. That's really a pretty ridiculous thing to say.</p>
]]></description><pubDate>Sun, 19 Nov 2023 05:41:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=38329483</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=38329483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38329483</guid></item><item><title><![CDATA[New comment by robbrown451 in "I disagree with Geoff Hinton regarding "glorified autocomplete""]]></title><description><![CDATA[
<p>I agree with Hinton, although a lot hinges on your definition of "understand."<p>I think to best wrap your head around this stuff, you should look to the commonalities of LLMs, image generators, and even things like AlphaZero and how it learned to play Go.<p>AlphaZero is kind of the extreme in terms of not imitating anything humans have done. It learns to play the game simply by playing itself -- and what they found is that there isn't really a limit to how good it can get. There may be some theoretical limit of a "perfect" Go player, or maybe not, but it will continue to converge toward perfection the more it trains, and it can go far beyond what the best human Go player can ever do, even though very smart humans have spent their lifetimes deeply studying the game and AlphaZero had to learn everything from scratch.<p>One other thing to take into consideration is that to play the game of Go you can't just think of the next move. You have to think far forward in the game -- even though technically all it's doing is picking the next move, it is doing so using a model that has obviously looked forward more than just one move. And that model is obviously very sophisticated; if you are going to say that it doesn't understand the game of Go, I would argue that you have a very oddly restricted definition of the word "understand," and one that isn't particularly useful.<p>Likewise, with large language models, while on the surface they may be just predicting the next word one after another, to do so effectively they have to be planning ahead. As Hinton says, there is no real limit to how sophisticated they can get. When training, a model is never going to be 100% accurate in predicting text it hasn't trained on, but it can continue to get closer and closer to 100% the more it trains, and the closer it gets, the more sophisticated a model it needs. In the sense that AlphaZero needs to "understand" the game of Go to play effectively, a large language model needs to understand "the world" to get better at predicting.</p>
]]></description><pubDate>Sat, 18 Nov 2023 18:15:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=38322566</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=38322566</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38322566</guid></item><item><title><![CDATA[New comment by robbrown451 in "OpenAI's board has fired Sam Altman"]]></title><description><![CDATA[
<p>Well, fully sentient doesn't mean it is superintelligent.</p>
]]></description><pubDate>Fri, 17 Nov 2023 22:47:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=38311660</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=38311660</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38311660</guid></item><item><title><![CDATA[New comment by robbrown451 in "How deep is the brain? The shallow brain hypothesis"]]></title><description><![CDATA[
<p>Exactly what are you doing here then?<p>But hey I guess I can do this too. How's this? Using cringe as an adjective is cringe.</p>
]]></description><pubDate>Mon, 30 Oct 2023 09:37:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=38067332</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=38067332</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38067332</guid></item><item><title><![CDATA[New comment by robbrown451 in "How deep is the brain? The shallow brain hypothesis"]]></title><description><![CDATA[
<p>You're saying the study has no grounding in how brains work? I'd think a more reasonable conclusion would be that the neuroscientists involved have no grounding in how artificial neural networks work.<p>It seems the whole point is to bring in additional details of how brains work that they think may be relevant to artificial NNs.</p>
]]></description><pubDate>Mon, 30 Oct 2023 03:40:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=38065426</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=38065426</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38065426</guid></item><item><title><![CDATA[New comment by robbrown451 in "How deep is the brain? The shallow brain hypothesis"]]></title><description><![CDATA[
<p>I dunno. My comment complained about the parent comment not adding positively to the discussion. And gave at least a bit of support for that complaint.<p>Would you have preferred I emulate your style, and complain while providing no support for my complaint?<p>Ok.</p>
]]></description><pubDate>Mon, 30 Oct 2023 03:06:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=38065249</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=38065249</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38065249</guid></item><item><title><![CDATA[New comment by robbrown451 in "How deep is the brain? The shallow brain hypothesis"]]></title><description><![CDATA[
<p>I can't agree with the dismissiveness of this comment, and frankly I find its tone out of line and not in the spirit of Hacker News.<p>There are insights that can come from studying the brain that do indeed apply. Some researchers may not glean anything from such studies, and some may. I have no doubt that as neural networks get more and more powerful, we will continue to find more ways they are similar to the brain, and apply things we've learned about the brain to them.<p>I certainly prefer to see people making comparisons of neural networks to the brain than the old "it's just a glorified autocomplete" and the like.<p>Relax.</p>
]]></description><pubDate>Mon, 30 Oct 2023 01:57:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=38064876</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=38064876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38064876</guid></item><item><title><![CDATA[New comment by robbrown451 in "Improving image generation with better captions [pdf]"]]></title><description><![CDATA[
<p>Personally I think DALL-E does better quality, especially at photorealistic stuff.<p>Here are a few of mine, many photorealistic.<p><a href="https://www.karmatics.com/stuff/dalle.html" rel="nofollow noreferrer">https://www.karmatics.com/stuff/dalle.html</a></p>
]]></description><pubDate>Fri, 20 Oct 2023 03:34:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=37952122</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=37952122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37952122</guid></item><item><title><![CDATA[New comment by robbrown451 in "DALL·E 3 is now available in ChatGPT Plus and Enterprise"]]></title><description><![CDATA[
<p>"i entered your text - looks great!"<p>Yours does too. Very different feel than mine, but beautiful.</p>
]]></description><pubDate>Thu, 19 Oct 2023 18:50:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=37946960</link><dc:creator>robbrown451</dc:creator><comments>https://news.ycombinator.com/item?id=37946960</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37946960</guid></item></channel></rss>