<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: djwide</title><link>https://news.ycombinator.com/user?id=djwide</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 15:37:43 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=djwide" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by djwide in "OracleGPT: Thought Experiment on an AI Powered Executive"]]></title><description><![CDATA[
<p>I'm saying there's something structurally different from autonomous systems generally and from an LLM corpus, which has all of the information in one place and is, at least in theory, extractable by one user.</p>
]]></description><pubDate>Mon, 26 Jan 2026 19:24:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46770287</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46770287</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46770287</guid></item><item><title><![CDATA[New comment by djwide in "OracleGPT: Thought Experiment on an AI Powered Executive"]]></title><description><![CDATA[
<p>I point that out a little bit when I refer to agencies being discouraged from sharing information. The CIA may be worried about losing HUMINT data to the NSA, for example. You may also be referring to agencies compartmentalizing information away from the president, which, you're right, happens to some extent now but shouldn't 'in theory'. Maybe it's a 'don't ask, don't tell' arrangement. I think Cheney blew the cover of an intel asset, though.</p>
]]></description><pubDate>Mon, 26 Jan 2026 19:22:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46770256</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46770256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46770256</guid></item><item><title><![CDATA[New comment by djwide in "OracleGPT: Thought Experiment on an AI Powered Executive"]]></title><description><![CDATA[
<p>Can anyone tell me why this comment gets downvoted? The article is past the character count, so I have to link.</p>
]]></description><pubDate>Mon, 26 Jan 2026 16:22:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46767603</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46767603</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46767603</guid></item><item><title><![CDATA[New comment by djwide in "OracleGPT: Thought Experiment on an AI Powered Executive"]]></title><description><![CDATA[
<p>Thanks for the comment. Interesting to think about, but I am also skeptical about who will be doing the "collecting" and "synthesizing". Both tasks are potentially loaded with political bias. Perhaps it's better than our current system, though.</p>
]]></description><pubDate>Mon, 26 Jan 2026 16:21:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46767594</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46767594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46767594</guid></item><item><title><![CDATA[New comment by djwide in "OracleGPT: Thought Experiment on an AI Powered Executive"]]></title><description><![CDATA[
<p>Admittedly, there's no direct tie to what I'm trying to sell. I just thought it was a worthwhile topic of discussion; it doesn't need to be politically divisive, and I might as well post it on my company site.<p>I don't think there are easy answers to the questions I am posing, and any engineering solution would fall short. Thanks for reading.</p>
]]></description><pubDate>Mon, 26 Jan 2026 16:18:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46767535</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46767535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46767535</guid></item><item><title><![CDATA[OracleGPT: Thought Experiment on an AI Powered Executive]]></title><description><![CDATA[
<p>Article URL: <a href="https://senteguard.com/blog/#post-7fYcaQrAcfsldmSb7zVM">https://senteguard.com/blog/#post-7fYcaQrAcfsldmSb7zVM</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46766507">https://news.ycombinator.com/item?id=46766507</a></p>
<p>Points: 60</p>
<p># Comments: 51</p>
]]></description><pubDate>Mon, 26 Jan 2026 15:06:09 +0000</pubDate><link>https://senteguard.com/blog/#post-7fYcaQrAcfsldmSb7zVM</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46766507</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46766507</guid></item><item><title><![CDATA[New comment by djwide in "TimeCapsuleLLM: LLM trained only on data from 1800-1875"]]></title><description><![CDATA[
<p>AlphaGo was trained on old games and then presented with a game it had never seen before. It came up with a move that a human would not have played.</p>
]]></description><pubDate>Wed, 21 Jan 2026 08:01:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46702514</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46702514</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46702514</guid></item><item><title><![CDATA[New comment by djwide in "American Closed Source vs. Chinese Open Source: A False Dichotomy"]]></title><description><![CDATA[
<p>Either way, I'm not sure protectionism and siphoning money to frontier model owners will help us.<p>But by that argument, they would have beaten us to frontier model tech as well. Their education system appeared better than ours 20 years ago. We could have a bigger and broader conversation comparing the two systems, and China's has a lot of flaws.</p>
]]></description><pubDate>Tue, 20 Jan 2026 00:26:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46686446</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46686446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46686446</guid></item><item><title><![CDATA[New comment by djwide in "American Closed Source vs. Chinese Open Source: A False Dichotomy"]]></title><description><![CDATA[
<p>It’s a call to patriotism. China versus America. “Who will you back?” This has become a common plea from the Silicon Valley elite over the last six months. I heard the move up close at the Harvard Kennedy School, where a visiting Eric Schmidt warned that AI may soon cross into autonomous self-improvement, argued that someone will need to “raise their hand” and impose limits, and then pivoted into the geopolitical register, contrasting American and Chinese trajectories and urging policy and funding choices aligned with “American values.” Others have made versions of this argument in different forums. Tarun Chhabra, head of national security policy at Anthropic, has made a similar case, urging an “American stack” and treating model governance as a geopolitical contest. Putting aside the awkwardness of nationalist messaging coming from the Bay Area’s long-time borderless “global citizens,” the incentives are not hard to see. If you can frame the open-vs-closed-models debate as a national security referendum, you can cast restrictive rules as patriotism and “responsible control” as synonymous with dominance by a small circle of incumbent providers.<p>The posture makes sense once you consider two facts. One: industries which may live and die on capricious regulatory rule-making must make their case to those with their hands on the levers of power. In 2026 America, those hands are professed patriotic Republicans. Two: Big Frontier LLM is losing the tech battle, or at least losing the easy assumption that America’s lead is automatic and permanent. They are on their back foot, so they must frame the open-vs-closed-model debate wrongly as a fight between America and China. America cannot afford to lose a battle to China, and by extension Anthropic, OpenAI, and Alphabet cannot afford to lose to their competition.<p>Yet there is nothing inherently Chinese about open models and nothing inherently American about closed models.
If anything, it is the opposite. Open models are decentralized, inspectable, forkable, and difficult to monopolize. That aligns with an American instinct to diffuse power, prefer competition over permission, and distrust single points of control. Closed models concentrate capability behind a small number of gatekeepers, wrapped in secrecy, and sustained by privileged access to regulators. That logic is far closer to centralized control than to open competition. The real fault line is not America versus China. It is democratic diffusion versus unnatural scarcity, and good tech versus bad tech.<p>Full article linked.</p>
]]></description><pubDate>Mon, 19 Jan 2026 23:46:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46686174</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46686174</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46686174</guid></item><item><title><![CDATA[American Closed Source vs. Chinese Open Source: A False Dichotomy]]></title><description><![CDATA[
<p>Article URL: <a href="https://senteguard.com/blog/#post-h2V9GtUh5Xts9NTzH4zu">https://senteguard.com/blog/#post-h2V9GtUh5Xts9NTzH4zu</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46686173">https://news.ycombinator.com/item?id=46686173</a></p>
<p>Points: 14</p>
<p># Comments: 4</p>
]]></description><pubDate>Mon, 19 Jan 2026 23:46:11 +0000</pubDate><link>https://senteguard.com/blog/#post-h2V9GtUh5Xts9NTzH4zu</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46686173</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46686173</guid></item><item><title><![CDATA[New comment by djwide in "Nailing Jell-O to the Wall, Again. Why China Will Struggle to Contain LLMs"]]></title><description><![CDATA[
<p>In 2000, President Bill Clinton famously looked at Beijing’s early internet controls and quipped: “Good luck. That’s sort of like trying to nail Jell-O to the wall.”<p>So far he’s been proven wrong. The CCP didn’t just contain the internet; it has effectively used the internet as a tool to entrench its control, building a system that fuses chokepoints, platform governance, and punitive enforcement into something like a sovereign information utility. That said, the jury is still out, and Clinton may still be vindicated.<p>On the one hand, LLMs can be understood as a natural outgrowth of Clinton’s (and Gore’s) internet, but they can also be seen as its next evolution. LLMs present significant opportunities for economic growth, but in pursuing that growth they will also amplify individual agency and autonomy. The Party therefore faces a quandary: pursue a strategy of economic growth and risk an erosion of Party authority, or crack down and risk being left behind in the technology of the future.<p>Full article linked.</p>
]]></description><pubDate>Tue, 13 Jan 2026 06:50:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46597963</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46597963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46597963</guid></item><item><title><![CDATA[Nailing Jell-O to the Wall, Again. Why China Will Struggle to Contain LLMs]]></title><description><![CDATA[
<p>Article URL: <a href="https://senteguard.com/blog/#post-jjip31e6y1iTyGKpzso4">https://senteguard.com/blog/#post-jjip31e6y1iTyGKpzso4</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46597962">https://news.ycombinator.com/item?id=46597962</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 13 Jan 2026 06:50:31 +0000</pubDate><link>https://senteguard.com/blog/#post-jjip31e6y1iTyGKpzso4</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46597962</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46597962</guid></item><item><title><![CDATA[New comment by djwide in "TimeCapsuleLLM: LLM trained only on data from 1800-1875"]]></title><description><![CDATA[
<p>What do they (or you) have to say about move 78 in the Lee Sedol vs. AlphaGo match? It seems like that was "new knowledge." Are games just iterable and the real-world idea space not? I am playing with these ideas a little.</p>
]]></description><pubDate>Tue, 13 Jan 2026 01:31:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46596383</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46596383</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46596383</guid></item><item><title><![CDATA[New comment by djwide in "TimeCapsuleLLM: LLM trained only on data from 1800-1875"]]></title><description><![CDATA[
<p>With LLMs, the synthesis cycles could happen at a much higher frequency. Decades condensed to weeks or days?<p>I imagine possible buffers on that conjecture-synthesis being experimentation and acceptance by the scientific community. AIs can come up with new ideas every day, but Nature won't publish those ideas for years.</p>
]]></description><pubDate>Tue, 13 Jan 2026 01:27:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46596361</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46596361</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46596361</guid></item><item><title><![CDATA[Living with LLMs Everywhere – How Ambient LLMs Negate Security Policy]]></title><description><![CDATA[
<p>Full article linked below.<p>Unchecking the “Improve Model for Everyone” box isn’t a privacy policy. Neither is relying on an enterprise agreement.<p>LLMs aren’t just tools we visit; they’re becoming ambient infrastructure: inside email, meeting notes, browsers, and even operating systems. You can uncheck a box or sign an enterprise agreement, but that only covers a narrow risk.<p>The bigger issue is custody. Even without training, text can be retained, logged, routed through vendors, and then cascade into other systems over time. As high-quality public data runs out, the pressure to find new training sources grows.<p>Organizations also face a weakest-link reality: under time pressure, employees will route around rules on corporate machines. One paste can be enough.<p>My article argues we need to shift from “trust” to boundaries: educate people to recognize sensitive ideas, and pair that with real-time controls that prevent leakage at the moment it tries to leave.<p>If LLMs are becoming ambient, security has to become ambient too.<p>https://senteguard.com/blog/#post-cTdX0IaIRz8STpBU9VYk</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46596317">https://news.ycombinator.com/item?id=46596317</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 13 Jan 2026 01:21:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46596317</link><dc:creator>djwide</dc:creator><comments>https://news.ycombinator.com/item?id=46596317</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46596317</guid></item></channel></rss>