<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: casebash</title><link>https://news.ycombinator.com/user?id=casebash</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 10:23:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=casebash" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by casebash in "Sam Altman Slams Meta’s AI Talent Poaching: 'Missionaries Will Beat Mercenaries'"]]></title><description><![CDATA[
<p>Oh, they're actually the bad guys; folks just haven't thought far enough ahead to realise it yet.</p>
]]></description><pubDate>Tue, 01 Jul 2025 21:23:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44438124</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=44438124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44438124</guid></item><item><title><![CDATA[New comment by casebash in "Summary: Imagining and Building Wise Machines"]]></title><description><![CDATA[
<p>Authors of original paper: Samuel G. B. Johnson, Amir-Hossein Karimi, Yoshua Bengio, Nick Chater, Tobias Gerstenberg, Kate Larson, Sydney Levine, Melanie Mitchell, Iyad Rahwan, Bernhard Schölkopf, Igor Grossmann</p>
]]></description><pubDate>Fri, 11 Apr 2025 08:22:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=43651561</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=43651561</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43651561</guid></item><item><title><![CDATA[Summary: Imagining and Building Wise Machines]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.lesswrong.com/posts/euAMyQAQWTYyWZW8Z/summary-imagining-and-building-wise-machines-the-centrality">https://www.lesswrong.com/posts/euAMyQAQWTYyWZW8Z/summary-imagining-and-building-wise-machines-the-centrality</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43651560">https://news.ycombinator.com/item?id=43651560</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 11 Apr 2025 08:22:39 +0000</pubDate><link>https://www.lesswrong.com/posts/euAMyQAQWTYyWZW8Z/summary-imagining-and-building-wise-machines-the-centrality</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=43651560</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43651560</guid></item><item><title><![CDATA[New comment by casebash in "US and UK refuse to sign AI safety declaration at summit"]]></title><description><![CDATA[
<p>I'll copy my LinkedIn comment:<p>"Well done to the UK for not signing the fully compromised Statement on Inclusive and Sustainable Artificial Intelligence for the People and the Planet. Australia shouldn't have signed this statement either, given how France intentionally derailed attempts to build a global consensus on how we can develop AI safely.<p>For those who lack context, the UK organised the AI Safety Summit at Bletchley Park in November 2023 to allow countries to discuss how advanced AI technologies can be developed safely. There was a mini-conference in Korea, and France was then given the opportunity to organise the next big conference, a trust they immediately betrayed by changing the event to be about promoting investment in their AI industry.<p>They renamed the summit to the AI Action Summit and relegated safety from being the sole focus to just one of five focus areas, and not even one of five equally important focus areas, but one that seems to have been purposefully minimised even further.<p>Within the conference statement, safety was reduced to a single paragraph that, if anything, undermines it:<p>“Harnessing the benefits of AI technologies to support our economies and societies depends on advancing Trust and Safety. We commend the role of the Bletchley Park AI Safety Summit and Seoul Summits that have been essential in progressing international cooperation on AI safety and we note the voluntary commitments launched there. We will keep addressing the risks of AI to information integrity and continue the work on AI transparency.”<p>Let’s break it down:
• First, safety is being framed as “trust and safety”. These are not the same thing. The word trust appearing first is not as innocent as it appears: trust is the primary goal and safety is secondary to it. This is a very commercial perspective: if people trust your product, you can trick them into buying it even if it isn't actually safe.
• Second, trust and safety are not framed as values important in and of themselves, but as subordinate to realising the benefits of these technologies, primarily the "economic benefits". While the development of advanced AI technologies could theoretically create a social surplus that could be taxed and distributed, it's naive to assume that this will be automatic, particularly when the policy mechanisms are this compromised.
• Finally, the statement doesn’t commit to continuing to address these risks, but only narrowly to “addressing the risks of AI to information integrity” and “continue the work on AI transparency”. In other words, they’re purposefully downplaying any more significant potential risks, likely because discussing more serious risks would get in the way of convincing companies to invest in France.<p>Unfortunately, France has sold out humanity for short-term commercial benefit and we may all pay the price."</p>
]]></description><pubDate>Wed, 12 Feb 2025 23:06:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=43030751</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=43030751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43030751</guid></item><item><title><![CDATA[New comment by casebash in "Framework for Artificial Intelligence Diffusion"]]></title><description><![CDATA[
<p>Most of the comments here only make sense under a model where AI isn't going to become extremely powerful in the near term.<p>If you think upcoming models aren't going to be very powerful, then you'll probably endorse business-as-usual policies, such as rejecting any policy that isn't perfect or insisting on a high bar of evidence before regulating.<p>On the other hand, if you have a world model where AI is going to provide malicious actors with extremely powerful and dangerous technologies within the next few years, then, far from being radical, proposals like this start to appear extremely timid.</p>
]]></description><pubDate>Fri, 17 Jan 2025 15:14:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=42738354</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=42738354</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42738354</guid></item><item><title><![CDATA[New comment by casebash in ""Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview [video]"]]></title><description><![CDATA[
<p>Released 9th October, 2023</p>
]]></description><pubDate>Tue, 08 Oct 2024 14:24:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=41777717</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=41777717</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41777717</guid></item><item><title><![CDATA["Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview [video]]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.youtube.com/watch?v=qrvK_KuIeJk">https://www.youtube.com/watch?v=qrvK_KuIeJk</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41777716">https://news.ycombinator.com/item?id=41777716</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 08 Oct 2024 14:24:08 +0000</pubDate><link>https://www.youtube.com/watch?v=qrvK_KuIeJk</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=41777716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41777716</guid></item><item><title><![CDATA[New comment by casebash in "Oprah will screw up the AI story"]]></title><description><![CDATA[
<p>Will Oprah screw up the AI story?<p>Quite possibly, but likely not as badly as this article does.<p>The title is complete clickbait, the piece assumes that the author's hobby horses are the most important things in the world, and it bizarrely argues that crypto hype is an "attack on labour".</p>
]]></description><pubDate>Sun, 01 Sep 2024 03:41:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=41414090</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=41414090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41414090</guid></item><item><title><![CDATA[New comment by casebash in "You are not dumb, you just lack the prerequisites"]]></title><description><![CDATA[
<p>I'm not going to try to recap all of that, but, as an example, if you have a sufficiently strong understanding of arithmetic, learning basic modular arithmetic should be effortless and the pigeonhole principle completely obvious.<p>I was quite surprised when I applied for a Microsoft internship in uni and they gave me a question on the pigeonhole principle.</p>
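<p>(For anyone unfamiliar with the principle, here's a minimal sketch of the kind of pigeonhole exercise interviews tend to use; the actual Microsoft question isn't given above, so this example is only an illustrative stand-in.)</p>
<pre><code># Pigeonhole principle: placing n + 1 values into n buckets forces a collision,
# so among n + 1 integers drawn from 1..n there is always a duplicate.
def find_duplicate(values):
    seen = set()
    for v in values:
        if v in seen:
            return v  # guaranteed to be reached when len(values) exceeds the range size
        seen.add(v)
    return None  # unreachable if the pigeonhole precondition holds

print(find_duplicate([3, 1, 4, 2, 3]))  # 5 values in the range 1..4 -> prints 3
</code></pre>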
]]></description><pubDate>Sun, 25 Aug 2024 12:58:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=41346971</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=41346971</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41346971</guid></item><item><title><![CDATA[New comment by casebash in "You are not dumb, you just lack the prerequisites"]]></title><description><![CDATA[
<p>Just thought I'd add a comment as someone who came top of the state in my grade in multiple olympiad competitions:<p>I always felt that a large part of my advantage came from having a strong understanding of maths from the ground up.<p>I felt that a lot more people could have gained the same level of understanding as I did if they had been willing to work hard enough, but I also felt that almost no-one would, because it'd be an incredibly hard sell to convince someone to engage in a years-long project where they'd go all the way back to kindergarten and rebuild their knowledge from the ground up.<p>In other words, excellence is often the accumulation of small advantages over time.</p>
]]></description><pubDate>Sat, 24 Aug 2024 16:09:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=41339201</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=41339201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41339201</guid></item><item><title><![CDATA[New comment by casebash in "Open source AI is the path forward"]]></title><description><![CDATA[
<p>I expect this will end up having been one of the worst-timed blog posts in history. Open-source AI has mostly been good for the world up until now, but we're getting to the point where we're about to find out why open-sourcing sufficiently dangerous models is a terrible idea.</p>
]]></description><pubDate>Wed, 24 Jul 2024 03:30:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=41053373</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=41053373</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41053373</guid></item><item><title><![CDATA[New comment by casebash in "Please don't mention AI again"]]></title><description><![CDATA[
<p>I'm just going to say it.<p>The author is an idiot who is using insults as a crutch to make his case.</p>
]]></description><pubDate>Thu, 20 Jun 2024 09:15:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=40736587</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=40736587</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40736587</guid></item><item><title><![CDATA[New comment by casebash in "International Scientific Report on the Safety of Advanced AI [pdf]"]]></title><description><![CDATA[
<p>Did you read the report? Its answer to basically anything contentious was, "views differ".</p>
]]></description><pubDate>Sun, 19 May 2024 01:16:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=40403388</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=40403388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40403388</guid></item><item><title><![CDATA[New comment by casebash in "Ilya Sutskever to leave OpenAI"]]></title><description><![CDATA[
<p>Not possible because they've got LeCun.</p>
]]></description><pubDate>Wed, 15 May 2024 12:50:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=40366225</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=40366225</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40366225</guid></item><item><title><![CDATA[New comment by casebash in "SB-1047 will stifle open-source AI and decrease safety"]]></title><description><![CDATA[
<p>While this article makes some valid points, it basically just ignores the reasons why the law is being passed, namely the potential for open models to enable bio-attacks, cyberattacks, election manipulation, automated personalised scams, and who knows what else.<p>One might question why that is. Perhaps Jeremy has an excellent response to these points which he has somehow neglected to raise. Or perhaps it's because these threats are very inconvenient for an open-source developer.<p>I'm sure he'd say that open-sourcing models means all actors have access to defensive systems, that the good guys outnumber the bad guys, and that it'll all work out well.<p>And that could be true. Or it could be false. It's not as if we really know that everything would work out fine; we haven't run the experiment. Maybe it works out like that, or maybe one guy creates a virus and then it doesn't really matter how many folks are on the other side, because we can only produce vaccines so fast and we still get screwed. Is that what's going to happen? I don't really know, but it's at least plausible. Maybe we'll automate all aspects of vaccine production and be able to respond much faster, but that depends on when we develop this technology versus when AI starts significantly helping with bioweapons and someone uses it for an attack. At that point it's all so uncertain and up in the air that it seems rather strange for someone to suggest that it'll all be fine.</p>
]]></description><pubDate>Mon, 29 Apr 2024 15:31:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=40199620</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=40199620</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40199620</guid></item><item><title><![CDATA[New comment by casebash in "Let's Think Dot by Dot: Hidden Computation in Transformer Language Models"]]></title><description><![CDATA[
<p>"This paper follows a recent trend of marketing excellent theoretical work as LLMs being capable of secretly plotting behind your back, when the realistic implication is backdoor risk".<p>Many top computer scientists consider loss of control risks to be a possibility that we need to take seriously.<p>So the question then becomes, is there a way to apply science to gain greater clarity on the possibility of these claims? And this is very tricky, since we're trying to evaluate claims not about models that currently exist, but about future models.<p>And I guess what people have realised recently is that, even if we can't directly run an experiment to determine the validity of the core claim of concern, we can run experiments on auxiliary claims in order to better inform discussions. For example, the best way to show that a future model could have a capability is to demonstrate that a current model possesses that capability.<p>I'm guessing you'd like to see more scientific evidence before you want to take possibilities like deceptive alignment seriously. I think that's reasonable. However, work like this is how we gather that evidence.<p>Obviously, each individual result doesn't provide much evidence on its own, but the accumulation of results has helped to provide more strategic clarity over time.</p>
]]></description><pubDate>Sun, 28 Apr 2024 07:38:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=40186782</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=40186782</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40186782</guid></item><item><title><![CDATA[New comment by casebash in "Should AI Be Open?"]]></title><description><![CDATA[
<p>OpenAI just released a response to Musk's lawsuit.<p>One of the emails provided as evidence of their claims references Slate Star Codex's "Should AI be open?"<p>The emails show Musk forwarding an email he received to Sam Altman, and Ilya replying that, in a hard takeoff scenario, open sourcing could make it easier for a bad actor to reach AGI first, that the right strategy would be to share everything in the short and possibly medium term, and that it would make sense to start being less open as they got closer to AGI.<p><a href="https://openai.com/blog/openai-elon-musk#email-4" rel="nofollow">https://openai.com/blog/openai-elon-musk#email-4</a></p>
]]></description><pubDate>Wed, 06 Mar 2024 03:22:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=39611937</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=39611937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39611937</guid></item><item><title><![CDATA[Should AI Be Open?]]></title><description><![CDATA[
<p>Article URL: <a href="https://slatestarcodex.com/2015/12/17/should-ai-be-open/">https://slatestarcodex.com/2015/12/17/should-ai-be-open/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=39611936">https://news.ycombinator.com/item?id=39611936</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 06 Mar 2024 03:22:16 +0000</pubDate><link>https://slatestarcodex.com/2015/12/17/should-ai-be-open/</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=39611936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39611936</guid></item><item><title><![CDATA[New comment by casebash in "Beyond A*: Better Planning with Transformers"]]></title><description><![CDATA[
<p>Yeah, at first I read that as it using 26.8% of the original steps, but reducing the number of steps by 26.8% is not that impressive. I wonder whether it actually reduces total search time, given the added overhead of running the neural network.</p>
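<p>(To make the two readings concrete, here's a quick sketch against a hypothetical baseline of 1,000 A* search steps; the paper's actual step counts aren't given above.)</p>
<pre><code># Two readings of "26.8%" against a hypothetical baseline of 1,000 A* steps.
baseline = 1000

uses_fraction = baseline * 0.268       # "uses 26.8% of the steps": 268 steps (~3.7x fewer)
reduced_by = baseline * (1 - 0.268)    # "reduces steps by 26.8%": 732 steps (~1.4x fewer)

print(uses_fraction, reduced_by)       # 268.0 732.0
</code></pre>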
]]></description><pubDate>Fri, 23 Feb 2024 16:02:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=39482262</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=39482262</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39482262</guid></item><item><title><![CDATA[New comment by casebash in ""Accelerationism" is an overdue corrective to years of gloom"]]></title><description><![CDATA[
<p>EA here:<p>"Discussing the risks and opportunities in front of us intelligently, e/accs believe, is a sign of a flourishing civil society"<p>Except e/acc has made a massive contribution to lowering the standard of discourse. Beff talked intelligently on Lex, and they are much more reasonable on Twitter Spaces, but on the Twitter timeline itself 90% of their posts are some combination of trash/propaganda/insults.<p>"Rather, their moral vision is one where more people — including and especially those who consider themselves hands-off today — actively engage with emerging technology and identify concrete plans for its development and stewardship, rather than reflexively backing away from what they don’t understand"<p>Again, e/acc seems to be all about "build, build, build!", which stands in stark contrast to taking a step back and thinking carefully about the impacts of what you're doing before you do it.<p>"Discussing the risks and opportunities in front of us intelligently, e/accs believe, is a sign of a flourishing civil society."<p>Again, this isn't accurate. E/acc is very much <i>not</i> about balance, and very much <i>not</i> about discussing risks intelligently: it almost always criticises the people making the claims rather than engaging in discussion on a technical level.<p>...<p>Criticism aside, this article paints a picture of e/acc which, while not representative of the movement as it exists, is something it could choose to grow into.</p>
]]></description><pubDate>Tue, 13 Feb 2024 01:19:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=39353093</link><dc:creator>casebash</dc:creator><comments>https://news.ycombinator.com/item?id=39353093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39353093</guid></item></channel></rss>