<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: zozbot234</title><link>https://news.ycombinator.com/user?id=zozbot234</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 10:23:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=zozbot234" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by zozbot234 in "A new spam policy for “back button hijacking”"]]></title><description><![CDATA[
<p>> The fix is not to implement anti-user patterns.<p>That's not a fix the user can implement themselves. Holding down the back button is comparatively trivial.</p>
]]></description><pubDate>Tue, 14 Apr 2026 08:04:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762699</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47762699</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762699</guid></item><item><title><![CDATA[New comment by zozbot234 in "A new spam policy for “back button hijacking”"]]></title><description><![CDATA[
<p>You can usually address this by going back as far as possible, then holding the button again so more of the history shows up.  And IME, it's only really broken sites that have this problem in the first place.</p>
]]></description><pubDate>Tue, 14 Apr 2026 07:58:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762649</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47762649</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762649</guid></item><item><title><![CDATA[New comment by zozbot234 in "The Future of Everything Is Lies, I Guess: Safety"]]></title><description><![CDATA[
<p>It's the history of ideas. What Graeber says is ultimately aligned with this, as I pointed out in a sibling thread.</p>
]]></description><pubDate>Tue, 14 Apr 2026 07:45:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762546</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47762546</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762546</guid></item><item><title><![CDATA[New comment by zozbot234 in "A new spam policy for “back button hijacking”"]]></title><description><![CDATA[
<p>It's a fix because it completely solves the issue on any site, without requiring changes from LinkedIn or any other actor.</p>
]]></description><pubDate>Tue, 14 Apr 2026 07:42:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762523</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47762523</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762523</guid></item><item><title><![CDATA[New comment by zozbot234 in "A new spam policy for “back button hijacking”"]]></title><description><![CDATA[
<p>The fix is to hold down the back button so the local history shows up, and pick the right page to go back to. Unfortunately, some versions of Chrome and/or Android seem to break this but that's a completely self-inflicted problem.</p>
]]></description><pubDate>Tue, 14 Apr 2026 07:27:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762406</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47762406</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762406</guid></item><item><title><![CDATA[New comment by zozbot234 in "Rust Threads on the GPU"]]></title><description><![CDATA[
<p>It looks like they're trying to map the entire "normal GPU programming model" to Rust code, potentially including GPU "threads" (mapped to SIMD lanes plus masked/predicated execution to account for divergence) and the execution model where a single GPU shader is launched in many instances with varying x, y and z indices.  In this context, it makes sense to map the GPU "warp" to a Rust thread, since GPU lanes, even with partially independent program counters, still execute in lockstep, much like CPU SIMD/SPMD or vector code.</p>
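<p>A minimal Rust sketch of that lane-masking idea (illustrative only; the lane count, data, and helper names here are assumptions, not the actual API of the project under discussion). A branch that diverges per lane is not executed as real control flow; instead, both sides run over all lanes in lockstep, gated by a predicate mask:</p>

```rust
// One "warp" modeled as one Rust thread of execution: LANES SIMD
// lanes carry per-lane data, and a divergent branch becomes two
// masked passes rather than genuine per-lane control flow.
const LANES: usize = 4;

fn main() {
    // Per-lane inputs, as if each lane were a GPU "thread".
    let x: [i32; LANES] = [1, -2, 3, -4];
    let mut out = [0i32; LANES];

    // Predicate mask for the branch `if x > 0 { x * 2 } else { -x }`.
    let mask: [bool; LANES] = core::array::from_fn(|i| x[i] > 0);

    // "Then" side: executed for every lane, active only where mask is true.
    for i in 0..LANES {
        if mask[i] {
            out[i] = x[i] * 2;
        }
    }
    // "Else" side: executed for every lane, active only where mask is false.
    for i in 0..LANES {
        if !mask[i] {
            out[i] = -x[i];
        }
    }

    println!("{:?}", out); // [2, 2, 6, 4]
}
```

<p>Note that both loop bodies run regardless of the data, which is exactly why divergent branches cost the sum of both paths on real GPU hardware.</p>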
]]></description><pubDate>Tue, 14 Apr 2026 07:19:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47762330</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47762330</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47762330</guid></item><item><title><![CDATA[New comment by zozbot234 in "The Future of Everything Is Lies, I Guess: Safety"]]></title><description><![CDATA[
<p>> the whole discussion of markets is a terrible starting place for deriving results in ethics/psychology.<p>Historically, we did essentially the opposite. We figured out many aspects of human ethics and psychology first, and deduced from them how and why markets work as they do.<p>> ...  If people weren't broadly aligned on basic stuff, then autocrats, theocrats, kleptocrats and so on would simply not be interested in dismantling democracies. They make that effort because they must.<p>This implies that people are only <i>weakly</i> aligned in the first place; otherwise no such attempt at dismantling could ever succeed.  That's not a very interesting claim; it does not refute the usefulness of some external mechanism to more directly foster aligned action.  Markets do this with a maximum of decentralized power and a minimum of institutional mechanism.</p>
]]></description><pubDate>Mon, 13 Apr 2026 22:02:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758461</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47758461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758461</guid></item><item><title><![CDATA[New comment by zozbot234 in "The Future of Everything Is Lies, I Guess: Safety"]]></title><description><![CDATA[
<p>These social ties are real (they are a kind of wealth, or social capital, for the persons involved) but they're also limited to very small social groups, the equivalent of a modern small village neighborhood or HOA.  The point of the market is that it scales well beyond those.</p>
]]></description><pubDate>Mon, 13 Apr 2026 21:19:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47757955</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47757955</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47757955</guid></item><item><title><![CDATA[New comment by zozbot234 in "The Future of Everything Is Lies, I Guess: Safety"]]></title><description><![CDATA[
<p>Unless it's some sort of complete post-scarcity, it has to be understandable in market terms. What happens if people try to free-ride on the whole "communist" system? If they get excluded from its benefits, that's equivalent to enforcing some bundle of property rights.</p>
]]></description><pubDate>Mon, 13 Apr 2026 20:57:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47757720</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47757720</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47757720</guid></item><item><title><![CDATA[New comment by zozbot234 in "The Future of Everything Is Lies, I Guess: Safety"]]></title><description><![CDATA[
<p>AIUI David Graeber famously pointed out that people in small groups can form the equivalent of a "market" simply by exchanging favours ("I'll scratch your back if you scratch mine") in an informal gift economy, without any money-like token or external unit of account. That's quite in line with what I said.</p>
]]></description><pubDate>Mon, 13 Apr 2026 20:48:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47757619</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47757619</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47757619</guid></item><item><title><![CDATA[New comment by zozbot234 in "The Future of Everything Is Lies, I Guess: Safety"]]></title><description><![CDATA[
<p>Frontier AI models get evaluated for safety precisely to avert the "AI robot uprising causes an existential disaster" scenario.  At the moment we are light years away from anything like that ever happening, and that's after we literally tried our best to LARP that very scenario into existence with things like moltbook and OpenClaw.</p>
]]></description><pubDate>Mon, 13 Apr 2026 20:28:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47757402</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47757402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47757402</guid></item><item><title><![CDATA[New comment by zozbot234 in "The Future of Everything Is Lies, I Guess: Safety"]]></title><description><![CDATA[
<p>> each party's profit is necessarily limited by the other party's<p>Profit is obtained by maximizing traded benefits and minimizing costs.  None of this requires taking anything away from any other party.</p>
]]></description><pubDate>Mon, 13 Apr 2026 20:09:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47757189</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47757189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47757189</guid></item><item><title><![CDATA[New comment by zozbot234 in "The Future of Everything Is Lies, I Guess: Safety"]]></title><description><![CDATA[
<p>If you've built an agent that can act even vaguely close to a paperclip maximizer, you've already solved 99.999% or more of the alignment problem.  The hard part of alignment so far is getting the AI to <i>do</i> something useful in pursuit of the right goal, and not just waste energy.  We still have no idea how to do this with any effectiveness: even modern "RL from verified feedback" systems are effectively toys, the equivalent of playing video games, not really of doing something useful in the real world.</p>
]]></description><pubDate>Mon, 13 Apr 2026 20:03:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47757137</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47757137</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47757137</guid></item><item><title><![CDATA[New comment by zozbot234 in "The Future of Everything Is Lies, I Guess: Safety"]]></title><description><![CDATA[
<p>> You can tell that broad alignment between people is natural<p>It really isn't. The whole point of the market system is to collectively align people's actions towards a shared target of "Pareto-optimized total welfare".  And even then the alignment is approximate and heavily constrained due to a combination of transaction costs (which also account for e.g. externalities) and information asymmetries.  But transaction costs and information asymmetries apply to <i>any</i> system of alignment, including non-market ones. The market (augmented with some pre-determined legal assignment of property rights, potentially including quite complex bundles of rules and regulations) is still your best bet.</p>
]]></description><pubDate>Mon, 13 Apr 2026 19:54:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47757035</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47757035</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47757035</guid></item><item><title><![CDATA[New comment by zozbot234 in "AI could be the end of the digital wave, not the next big thing"]]></title><description><![CDATA[
<p>Chat bots can run on your local hardware these days, even mobile phone hardware. That's effectively free.</p>
]]></description><pubDate>Mon, 13 Apr 2026 13:51:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47751983</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47751983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47751983</guid></item><item><title><![CDATA[New comment by zozbot234 in "The Closing of the Frontier"]]></title><description><![CDATA[
<p>We've had such models before. GPT Pro, Gemini DeepThink.  Mostly targeting science advancements as opposed to security research, but still, in a way Mythos is just more of the same.</p>
]]></description><pubDate>Sun, 12 Apr 2026 20:41:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47744239</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47744239</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47744239</guid></item><item><title><![CDATA[New comment by zozbot234 in "The Closing of the Frontier"]]></title><description><![CDATA[
<p>It depends. If Mythos is AGI then OpenAI's charter says they <i>have</i> to merge with the winner.</p>
]]></description><pubDate>Sun, 12 Apr 2026 20:37:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47744203</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47744203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47744203</guid></item><item><title><![CDATA[New comment by zozbot234 in "Pro Max 5x quota exhausted in 1.5 hours despite moderate usage"]]></title><description><![CDATA[
<p>Because the "explicit instruction" you give AI is not deterministic as in a normal computer program. It's a complete black box and the context is also most likely polluted by all sorts of weird stuff.  Putting it on as tight of a leash as possible should be seen as normal.</p>
]]></description><pubDate>Sun, 12 Apr 2026 14:04:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47739800</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47739800</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47739800</guid></item><item><title><![CDATA[New comment by zozbot234 in "Pro Max 5x quota exhausted in 1.5 hours despite moderate usage"]]></title><description><![CDATA[
<p>Memory is expensive? If reads are as rare as they claim, you can just stash the KV-cache on spinning disk.</p>
]]></description><pubDate>Sun, 12 Apr 2026 14:00:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47739754</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47739754</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47739754</guid></item><item><title><![CDATA[New comment by zozbot234 in "Pro Max 5x quota exhausted in 1.5 hours despite moderate usage"]]></title><description><![CDATA[
<p>> It goes into long exploration loops for 5+ minutes even when I point it to the exact files to inspect.<p>Give it a custom sandbox and context for the work, so it has no opportunity to roam around when not required.  AI agentic coding is hugely wasteful of context and tokens in general (compared to generic chat, which is how most people use AI); there's a whole lot of scope for improvement there.</p>
]]></description><pubDate>Sun, 12 Apr 2026 13:55:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47739693</link><dc:creator>zozbot234</dc:creator><comments>https://news.ycombinator.com/item?id=47739693</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47739693</guid></item></channel></rss>