<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: hamburga</title><link>https://news.ycombinator.com/user?id=hamburga</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 10:06:07 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=hamburga" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by hamburga in "Disrupting the first reported AI-orchestrated cyber espionage campaign"]]></title><description><![CDATA[
<p>>> This raises an important question: if AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is<p>> Money.<p>For those who didn’t read, the actual response in the text was:<p>“The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial in cyber defense.”<p>Hideous AI-slop-weasel-worded passive-voice way of saying that the reason to develop Claude is to protect us from Claude.</p>
]]></description><pubDate>Fri, 14 Nov 2025 13:14:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45926412</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=45926412</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45926412</guid></item><item><title><![CDATA[New comment by hamburga in "MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline"]]></title><description><![CDATA[
<p>Socrates famously complained in Phaedrus that literacy was making us stupider.<p>I believe that complaint still carries a large grain of truth.<p>These things can make us simultaneously dumber and smarter, depending on usage.</p>
]]></description><pubDate>Wed, 03 Sep 2025 13:52:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45115813</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=45115813</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45115813</guid></item><item><title><![CDATA[New comment by hamburga in "Covers as a way of learning music and code"]]></title><description><![CDATA[
<p>Yup, though he was also intentional about not copying word-for-word, but rather trying to predict the next token (or phrase).<p><a href="https://muldoon.cloud/2025/05/17/frankin-llm.html" rel="nofollow">https://muldoon.cloud/2025/05/17/frankin-llm.html</a></p>
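<p>To make the next-token analogy concrete, here is a minimal sketch of a bigram predictor (a toy illustration only; the corpus and function names are hypothetical, not from Franklin or the linked post):</p><pre><code># Toy bigram model: guess the next word from the previous one,
# loosely analogous to Franklin reconstructing prose from memory.
from collections import Counter, defaultdict

corpus = "the quick brown fox jumps over the lazy dog".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "quick" (ties broken by first occurrence)
</code></pre>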
]]></description><pubDate>Thu, 24 Jul 2025 15:16:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44671746</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44671746</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44671746</guid></item><item><title><![CDATA[New comment by hamburga in "LLM Inevitabilism"]]></title><description><![CDATA[
<p>But assuming no new models are trained, this competitive effect drives down the profit margin on the current SOTA models to zero.</p>
]]></description><pubDate>Tue, 15 Jul 2025 14:07:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=44571326</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44571326</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44571326</guid></item><item><title><![CDATA[New comment by hamburga in "Problems the AI industry is not addressing adequately"]]></title><description><![CDATA[
<p>> This reminds me of a paradox: The AI industry is concerned with the alignment problem (how to make a super smart AI adhere to human values and goals) while failing to align between and within organizations and with the broader world. The bar they’ve set for themselves is simply too high for the performance they’re putting out.<p>My argument is that it’s <i>our</i> job as consumers to align the AIs to <i>our values</i> (which are not all the same) via selection pressure: <a href="https://muldoon.cloud/2025/05/22/alignment.html" rel="nofollow">https://muldoon.cloud/2025/05/22/alignment.html</a></p>
]]></description><pubDate>Sat, 05 Jul 2025 15:41:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=44473502</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44473502</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44473502</guid></item><item><title><![CDATA[New comment by hamburga in "Beyond the Black Box: Interpretability of LLMs in Finance"]]></title><description><![CDATA[
<p>I’m still waiting for somebody to explain to me how a model with a million+ parameters can <i>ever</i> be interpretable in a useful way. You can’t actually understand the model state, so you’re just making very coarse statistical associations between some parameters and some kinds of responses. Or relying on another AI (itself not interpretable) to do your interpretation for you. What am I missing?</p>
]]></description><pubDate>Mon, 02 Jun 2025 14:15:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44159094</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44159094</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44159094</guid></item><item><title><![CDATA[New comment by hamburga in "Trump Taps Palantir to Compile Data on Americans"]]></title><description><![CDATA[
<p>There are a lot of people here who reflexively flag anything remotely close to US politics.</p>
]]></description><pubDate>Fri, 30 May 2025 18:14:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44138679</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44138679</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44138679</guid></item><item><title><![CDATA[New comment by hamburga in "Trump Taps Palantir to Compile Data on Americans"]]></title><description><![CDATA[
<p>How long until Palantir gets a contract to do the same work for Russia?</p>
]]></description><pubDate>Fri, 30 May 2025 15:12:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44137019</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44137019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44137019</guid></item><item><title><![CDATA[New comment by hamburga in "Problems in AI alignment: A scale model"]]></title><description><![CDATA[
<p>AI is definitely a significant force multiplier in many areas. Still, an individual in total isolation has limited agency.</p>
]]></description><pubDate>Fri, 23 May 2025 17:49:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44074914</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44074914</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44074914</guid></item><item><title><![CDATA[New comment by hamburga in "Problems in AI alignment: A scale model"]]></title><description><![CDATA[
<p>> The argument that persuaded many of us is that people have a lot of desires, i.e., the algorithmic complexity of human desires is at least dozens or hundreds of bits of information<p>I would really try to disentangle this.<p>1. I don't know what my desires are.
2. "Desire" itself is a vague word that can't be measured or quantified; where does my desire for "feeling at peace" get encoded in any hypothetical artificial mind?
3. People have different and opposing desires.<p>Therefore, Coherent Extrapolated Volition is not coherent to me.<p>This is kind of where I go when I say that any centralized, top-down "grand plan" for AI safety is a folly. On the other hand, we all contribute to Selection.</p>
]]></description><pubDate>Fri, 23 May 2025 17:43:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=44074848</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44074848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44074848</guid></item><item><title><![CDATA[New comment by hamburga in "Problems in AI alignment: A scale model"]]></title><description><![CDATA[
<p>All it takes for somebody to nuke Atlanta is an atom bomb and an airplane and somebody willing to fly the plane.<p>I’m being facetious, but there ARE ways for us, as a society and as subgroups within society, to decide and act to disallow, punish, and select out qualities of AIs that we think are unethical.</p>
]]></description><pubDate>Fri, 23 May 2025 15:20:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=44073667</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44073667</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44073667</guid></item><item><title><![CDATA[New comment by hamburga in "Problems in AI alignment: A scale model"]]></title><description><![CDATA[
<p>A single person acting in isolation (no friends, no colleagues, no customers) has very little agency. While theoretically a single person could release smallpox back into civilization, we have collectively selected it out very effectively.</p>
]]></description><pubDate>Fri, 23 May 2025 15:15:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44073627</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44073627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44073627</guid></item><item><title><![CDATA[New comment by hamburga in "Problems in AI alignment: A scale model"]]></title><description><![CDATA[
<p>Thank you for this. It gets exactly to the heart of the issue and what I sense is being missed in the AI alignment conversation. “What does ‘aligned’ mean” is an ethical/political question; and when people skip over that, it’s often to (1) smuggle in their own ethics and present them as universal, or (2) run away from the messy political questions and towards the safe but much narrower domain of technical research.</p>
]]></description><pubDate>Fri, 23 May 2025 15:03:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44073525</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44073525</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44073525</guid></item><item><title><![CDATA[New comment by hamburga in "Problems in AI alignment: A scale model"]]></title><description><![CDATA[
<p>> AI Alignment is different because it is trying to align something which is completely human made.<p>Not sure what you’re getting at here; pharmaceuticals are also human made. The point in the blog post was that we should also want drugs (for example) to be aligned to our values.<p>> What I think is absolutely important to understand is that throughout human history "alignment" has never happened.<p>Agree with that. This is a journey, not a destination. It’s a practice, not a mathematical problem to be solved. With no end in sight. In the same way that “perfect ethics” will never be achieved.</p>
]]></description><pubDate>Fri, 23 May 2025 14:56:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44073479</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44073479</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44073479</guid></item><item><title><![CDATA[New comment by hamburga in "Problems in AI alignment: A scale model"]]></title><description><![CDATA[
<p>Agreed - and this was definitely my intent with the blog post. If you only do Selection passively, you’re abdicating your ethical responsibilities to contribute to AI Alignment.</p>
]]></description><pubDate>Fri, 23 May 2025 14:48:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44073416</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44073416</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44073416</guid></item><item><title><![CDATA[New comment by hamburga in "Problems in AI alignment: A scale model"]]></title><description><![CDATA[
<p>Why can’t we infuse Selection with Goodness? We’re the ones doing the selecting. We’ve selected out things like chattel slavery, for example.<p>(Disclaimer: fell asleep after 10 minutes of reading the SSC post last night. I know it’s part of the HN Canon and perhaps I’m missing something)</p>
]]></description><pubDate>Fri, 23 May 2025 14:46:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44073390</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44073390</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44073390</guid></item><item><title><![CDATA[New comment by hamburga in "Problems in AI alignment: A scale model"]]></title><description><![CDATA[
<p>Indeed. Tolstoy explores this deeply in War and Peace, using the example of Napoleon.<p>Of course, this gets to the heart of the free will debate (to be settled in a future post ;)). Both are true at the same time - organized people, dictators, and other factors simultaneously wrestle for influence in complex ways, in which causation is impossible to measure.<p>My own two cents, though, is that the Categorical Imperative is a tremendously important and underappreciated tool for raising the self-consciousness of groups.<p>A practical implementation of it is linked at the bottom of the blog post.</p>
]]></description><pubDate>Fri, 23 May 2025 14:43:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=44073367</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44073367</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44073367</guid></item><item><title><![CDATA[New comment by hamburga in "Management = Bullshit (LLM Edition)"]]></title><description><![CDATA[
<p>Also see Commandment 7 - <a href="https://muldoon.cloud/2023/10/29/ai-commandments.html" rel="nofollow">https://muldoon.cloud/2023/10/29/ai-commandments.html</a></p>
]]></description><pubDate>Fri, 23 May 2025 03:02:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44069473</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44069473</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44069473</guid></item><item><title><![CDATA[New comment by hamburga in "Management = Bullshit (LLM Edition)"]]></title><description><![CDATA[
<p>This is how society collapses and OpenAI wins.</p>
]]></description><pubDate>Fri, 23 May 2025 02:58:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44069457</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44069457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44069457</guid></item><item><title><![CDATA[New comment by hamburga in "Problems in AI alignment: A scale model"]]></title><description><![CDATA[
<p>Not totally following your last point, though I do agree that there has been a historical drift from “AI alignment” referring to existential risk to today, where any AI personality you don’t like is “unaligned.”<p>Still, “AI existential risk” is practically a different beast from “AI alignment,” and I’m trying to argue that the latter is not just for experts, but that it’s mostly a sociopolitical question of selection.</p>
]]></description><pubDate>Fri, 23 May 2025 02:46:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44069402</link><dc:creator>hamburga</dc:creator><comments>https://news.ycombinator.com/item?id=44069402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44069402</guid></item></channel></rss>