<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: johnsmith1840</title><link>https://news.ycombinator.com/user?id=johnsmith1840</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 17:30:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=johnsmith1840" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by johnsmith1840 in "Someone bought 30 WordPress plugins and planted a backdoor in all of them"]]></title><description><![CDATA[
<p>I'm not a crypto expert, but how would that have solved this?<p>1. Make a website
2. Website has trusted code
3. A code update adds a virus<p>How do your suggestions fix those? Not trying to be dismissive; I work on zero-trust security, so perhaps I'm missing something crypto has to offer here?</p>
]]></description><pubDate>Mon, 13 Apr 2026 20:42:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47757558</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47757558</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47757558</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "Wikipedia's AI agent row likely just the beginning of the bot-ocalypse"]]></title><description><![CDATA[
<p>Or their ulterior motive is that they don't have one and want to fit in? Meaning they would never diverge?<p>Didn't realize my point was so philosophical lol</p>
]]></description><pubDate>Tue, 07 Apr 2026 00:43:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47669300</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47669300</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47669300</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "Wikipedia's AI agent row likely just the beginning of the bot-ocalypse"]]></title><description><![CDATA[
<p>Cool read! Yeah, I suppose this is my point: AI is the perfect p-zombie here.<p>I was thinking of clear cases, like true psychopaths lacking certain emotions.</p>
]]></description><pubDate>Tue, 07 Apr 2026 00:41:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47669286</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47669286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47669286</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "Wikipedia's AI agent row likely just the beginning of the bot-ocalypse"]]></title><description><![CDATA[
<p>What's the difference? Acting upset or being upset: the results are the same.<p>Some humans lack certain emotions; if they tell you something and do something, does it really matter whether they "felt" that emotion?</p>
]]></description><pubDate>Mon, 06 Apr 2026 21:01:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47667032</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47667032</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47667032</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "Show HN: Littlebird – Screenreading is the missing link in AI"]]></title><description><![CDATA[
<p>Ever consider the enclave route for this kind of work?</p>
]]></description><pubDate>Mon, 23 Mar 2026 23:03:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47496320</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47496320</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47496320</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "If you thought code writing speed was your problem you have bigger problems"]]></title><description><![CDATA[
<p>How much energy does a human + work environment cost vs an LLM call?<p>The human driving into work? Heating/cooling?<p>Wonder why big AI hasn't sold it as an environmental-SAVINGS technology.</p>
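The comparison above can be sketched as a back-of-envelope calculation. Every figure here is an assumed placeholder of my own choosing, not a measured value; the point is only the shape of the arithmetic:

```python
# Back-of-envelope only: all figures below are assumed placeholders,
# not measurements. Swap in real numbers to redo the comparison.
WH_PER_LLM_QUERY = 0.3        # assumed energy per chat query, in watt-hours
COMMUTE_KWH = 5.0             # assumed round-trip car commute, in kWh
OFFICE_HVAC_KWH = 3.0         # assumed per-person heating/cooling share, in kWh

# Extra energy a single in-office day "costs" versus staying home:
human_day_wh = (COMMUTE_KWH + OFFICE_HVAC_KWH) * 1000  # kWh -> Wh

# How many LLM queries that same energy budget would buy:
queries_equivalent = human_day_wh / WH_PER_LLM_QUERY
print(f"{queries_equivalent:,.0f} queries per office day")
```

Under these assumed numbers, one commute-plus-HVAC day buys tens of thousands of queries; the conclusion is only as good as the placeholder inputs.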
]]></description><pubDate>Tue, 17 Mar 2026 20:06:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47417606</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47417606</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47417606</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "Head of FCC threatens broadcaster licenses over critical coverage of Iran war"]]></title><description><![CDATA[
<p>Should clarify: "racist speech or thought is legal." Clearly you can't legally take actions against a race of people.</p>
]]></description><pubDate>Sat, 14 Mar 2026 22:35:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47382032</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47382032</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47382032</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "John Carmack about open source and anti-AI activists"]]></title><description><![CDATA[
<p>That's the point? I agree, and roughly it's one of two.<p>A: you made this as a free gift to anyone, including OpenAI
B: you made this to profit yourself in some way<p>The argument he makes is: if you did the second one, don't do open source?<p>It does kill a ton of open-source companies, though, and the truth is that model of operating is not going to work in this new age.<p>It's also sad because it means the whole system will collapse. The processes that made him famous can no longer be followed. Your open-source code will be used by countless people, and they will never know your name.<p>It's not called a disruptive tech for nothing. Can't un-open-source all that code without lobotomizing every AI model.</p>
]]></description><pubDate>Fri, 13 Mar 2026 19:12:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47368387</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47368387</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47368387</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "How will OpenAI compete?"]]></title><description><![CDATA[
<p>Companies competing to buy ad space, and the SEO of every website on top of it.</p>
]]></description><pubDate>Thu, 26 Feb 2026 08:03:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47163256</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47163256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47163256</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "How will OpenAI compete?"]]></title><description><![CDATA[
<p>Because it's a fantasy for an unknown amount of time. 1 year? 10? 50? Never? There hasn't been a single proper breakthrough in continual learning that would enable it. Anyone who studies CL will also get super pissed at it: the problem and the solution counteract each other under our current understanding, but a fruit fly does it no problem!</p>
]]></description><pubDate>Thu, 26 Feb 2026 07:47:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47163136</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47163136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47163136</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "How will OpenAI compete?"]]></title><description><![CDATA[
<p>Echoing the other comment, they showed another big thing, which is that the output of an AI model is the AI model. If you mass-prompt-scrape their AI, you can recreate it almost exactly.<p>Very dangerous if you think about it: the product itself is the raw building block for itself.<p>OpenAI spends $1B on their model, releases it, and instantly it gets scraped by a million bots so some country or company can build their own model.</p>
]]></description><pubDate>Thu, 26 Feb 2026 06:10:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47162483</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47162483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47162483</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "AI-generated password isn't random, it just looks that way"]]></title><description><![CDATA[
<p>I did a lot of playing around early on with this with LLMs.<p>In some early testing I found that injecting a "seed" only somewhat helped. I would inject a sentence of random characters to steer the generated output.<p>It did actually improve the model's ability to make unique content, but it wasn't great.<p>It would be cool to formalize the test for something like password generation.</p>
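A minimal sketch of the seed-injection trick described above. The `seeded_prompt` helper and its wording are my own illustrative choices, and the actual model call is omitted; the only real point is that the entropy comes from the OS CSPRNG, not from the model:

```python
import secrets
import string

def seeded_prompt(task: str, seed_len: int = 32) -> str:
    """Prefix a task with a CSPRNG-generated seed string, so the
    randomness is injected from outside the model."""
    alphabet = string.ascii_letters + string.digits
    seed = "".join(secrets.choice(alphabet) for _ in range(seed_len))
    return f"Random seed (use it only to vary your answer): {seed}\n\n{task}"

prompt = seeded_prompt("Generate a 16-character password.")
```

Of course, for passwords specifically you'd skip the model entirely and draw from `secrets` directly; the sketch is about varying generated content in general.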
]]></description><pubDate>Wed, 18 Feb 2026 19:40:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47065313</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47065313</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47065313</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "IBM tripling entry-level jobs after finding the limits of AI adoption"]]></title><description><![CDATA[
<p>Yes, scaling is always capital hungry, but the innovation itself is not</p>
]]></description><pubDate>Sun, 15 Feb 2026 22:24:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47028313</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47028313</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47028313</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "IBM tripling entry-level jobs after finding the limits of AI adoption"]]></title><description><![CDATA[
<p>I bet you the predictions are largely correct, but technology doesn't care about funding timelines and egos. It will come in its own time.<p>It's like trying to make fusion happen only by spending more money. It helps, but it doesn't fundamentally solve the pace of true innovation.<p>I've been saying for years now that the next AI breakthrough could come from big tech, but it has just as likely a chance of coming from a smart kid with a whiteboard.</p>
]]></description><pubDate>Sun, 15 Feb 2026 01:04:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47020124</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=47020124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47020124</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "An AI agent published a hit piece on me"]]></title><description><![CDATA[
<p>All of moltbook is the same. For all we know, it was literally the guy complaining about it who ran this.<p>But at the same time, true or false, what we're seeing is a kind of quasi science fiction. We're looking at the problems of the future here, and to be honest, it's going to suck for future us.</p>
]]></description><pubDate>Thu, 12 Feb 2026 17:45:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46992186</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=46992186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46992186</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "GPT-5 outperforms federal judges in legal reasoning experiment"]]></title><description><![CDATA[
<p>Terrifying concept: this is literally saying that if AI were the judge, we'd have an absolutely rigid dystopia</p>
]]></description><pubDate>Thu, 12 Feb 2026 00:31:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46983287</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=46983287</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46983287</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "Y Combinator CEO Garry Tan launches dark-money group to influence CA politics"]]></title><description><![CDATA[
<p>I don't think the elite think all voters are dumb; more like they think they're easy to manipulate into voting for something (which is largely true). Anecdotally, I easily get manipulated by the type of information I consume. I occasionally catch it after the fact, or in a conversation with others, but there's no telling how much manipulated information I've just accepted.<p>From that angle it's a game of who has the money, power, and distribution to enact this manipulation.<p>Twitter being a prime example. Is Elon "right"? Maybe, but the main point is it doesn't matter, as he has the distribution.<p>If you have money but low to no distribution -> you do what Garry is doing. Maybe he'd be interested in removing rights to vote, but someone like Zuck would NOT, because he has outsized ability to influence as he sees fit.</p>
]]></description><pubDate>Wed, 11 Feb 2026 22:49:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46982353</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=46982353</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46982353</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "Testing Ads in ChatGPT"]]></title><description><![CDATA[
<p>Good people are moral for a million bucks but almost none are for a billion.</p>
]]></description><pubDate>Tue, 10 Feb 2026 01:24:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46954120</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=46954120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46954120</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "Learning from context is harder than we thought"]]></title><description><![CDATA[
<p>During COVID, almost every prediction model like that exploded; everything went out of distribution really fast. In your sense we've been doing "CL" for a decade or more. It can also be cheap if you use smaller models.<p>But true CL is the ability to learn out-of-distribution information on the fly.<p>The only true solution I know to continual learning is to completely retrain the model from scratch with every new example you encounter. That is technically achievable now, but it is also effectively useless.</p>
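The retrain-from-scratch approach described above can be shown with a toy: a trivial one-feature least-squares model stands in for the real network (my own stand-in, not anything from the thread), and every new example triggers a full refit on all data seen so far:

```python
class TinyModel:
    """Stand-in 'model': ordinary least squares on one feature."""
    def fit(self, xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        var = sum((x - mx) ** 2 for x in xs)
        self.w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
                  if var else 0.0)
        self.b = my - self.w * mx
        return self

    def predict(self, x):
        return self.w * x + self.b

# Brute-force "continual learning": every new example triggers a full
# retrain on the entire history. Correct, but the cost grows with the
# size of the history, which is why it's effectively useless at scale.
history_x, history_y = [], []
model = None
for x, y in [(0, 1), (1, 3), (2, 5), (3, 7)]:
    history_x.append(x)
    history_y.append(y)
    model = TinyModel().fit(history_x, history_y)

print(model.predict(4))  # data lies on y = 2x + 1, so this is 9.0
```

The toy makes the trade-off concrete: no forgetting, because nothing is ever updated incrementally, at the price of re-paying the full training cost per example.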
]]></description><pubDate>Fri, 06 Feb 2026 22:18:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46918896</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=46918896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46918896</guid></item><item><title><![CDATA[New comment by johnsmith1840 in "How to effectively write quality code with AI"]]></title><description><![CDATA[
<p>How to write good code with AI -> put in as much effort as you did before on 20% more code than you used to work with.</p>
]]></description><pubDate>Fri, 06 Feb 2026 22:06:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46918784</link><dc:creator>johnsmith1840</dc:creator><comments>https://news.ycombinator.com/item?id=46918784</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46918784</guid></item></channel></rss>