<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: skybrian</title><link>https://news.ycombinator.com/user?id=skybrian</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 10:55:19 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=skybrian" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by skybrian in "What is a property?"]]></title><description><![CDATA[
<p>The property being tested in this example is “after inserting a row into a database table, the same row can be read back again.”<p>The insert statement isn’t independent of the database because the table needs to exist and its schema has to allow the inserted values. If the database is generated randomly, you need access to it to generate an insert statement that will work.<p>This is straightforward to do if the library is designed for it. Using my own TypeScript library [1]:<p><pre><code>const insertCaseArb: Arbitrary<InsertCase> = arb.from((pick) => {
  const db = pick(dbArb);
  const table = pick(arb.of(...db.tables));
  const values = pick(rowArbForTable(table));

  return {
    db,
    tableName: table.name,
    insert: {
      kind: "insert",
      table: table.name,
      values,
    },
  };
});
</code></pre>
Why might that be difficult? Some property testing libraries don’t let you call a pick function directly.<p>[1] <a href="https://jsr.io/@skybrian/repeat-test" rel="nofollow">https://jsr.io/@skybrian/repeat-test</a></p>
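The pick-based pattern above can be sketched without any library, to show why a directly callable pick function enables dependent generation. All names below are hypothetical illustrations, not the real repeat-test API:

```typescript
// Minimal sketch of pick-style generation (hypothetical names, NOT the
// actual repeat-test API): an Arbitrary wraps a build function that can
// draw sub-values through a `pick` callback, so later picks may depend
// on values picked earlier.

type PickFn = <T>(arb: Arbitrary<T>) => T;

class Arbitrary<T> {
  constructor(readonly generate: (pick: PickFn) => T) {}
}

// Wrap a build function that draws sub-values through `pick`.
const from = <T>(build: (pick: PickFn) => T): Arbitrary<T> =>
  new Arbitrary(build);

// Choose uniformly among the given items.
const of = <T>(...items: T[]): Arbitrary<T> =>
  new Arbitrary(() => items[Math.floor(Math.random() * items.length)]);

// Run a generator once, threading `pick` through recursively.
function runOnce<T>(arb: Arbitrary<T>): T {
  const pick: PickFn = (a) => a.generate(pick);
  return arb.generate(pick);
}

// Dependent generation: the table is always drawn from the db that was
// just generated, so the resulting test case is internally consistent.
const dbArb = of({ tables: [{ name: "users" }, { name: "orders" }] });
const caseArb = from((pick) => {
  const db = pick(dbArb);
  const table = pick(of(...db.tables));
  return { db, tableName: table.name };
});

const example = runOnce(caseArb);
```

Because `pick` is an ordinary function, the second pick can read `db.tables` from the first; libraries that only compose fixed generator values can't express this dependency as directly.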
]]></description><pubDate>Sat, 11 Apr 2026 23:58:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47735023</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47735023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47735023</guid></item><item><title><![CDATA[New comment by skybrian in "What is a property?"]]></title><description><![CDATA[
<p>Good link. I think that explanation works because it's somewhat closer to providing concrete examples of the kinds of tests you can write.</p>
]]></description><pubDate>Sat, 11 Apr 2026 22:50:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47734684</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47734684</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47734684</guid></item><item><title><![CDATA[New comment by skybrian in "US summons bank bosses over cyber risks from Anthropic's latest AI model"]]></title><description><![CDATA[
<p>It’s more about when your priors are so strong that it’s not worth paying attention to a new report. Clearly not in this case.</p>
]]></description><pubDate>Sat, 11 Apr 2026 14:55:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47731178</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47731178</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47731178</guid></item><item><title><![CDATA[New comment by skybrian in "US summons bank bosses over cyber risks from Anthropic's latest AI model"]]></title><description><![CDATA[
<p>A lesson of the parable about "crying wolf" is that cynicism based on previous events doesn't prove that the next event is fake. The people who ignored the warning may have thought it "most likely," but <i>they were wrong.</i></p>
]]></description><pubDate>Fri, 10 Apr 2026 22:43:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47724660</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47724660</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47724660</guid></item><item><title><![CDATA[New comment by skybrian in "US summons bank bosses over cyber risks from Anthropic's latest AI model"]]></title><description><![CDATA[
<p>I mean sure, they could be lying. It seems like a rather elaborate lie, though, considering that they got several other major companies to go along with it.</p>
]]></description><pubDate>Fri, 10 Apr 2026 20:41:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47723426</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47723426</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47723426</guid></item><item><title><![CDATA[New comment by skybrian in "US summons bank bosses over cyber risks from Anthropic's latest AI model"]]></title><description><![CDATA[
<p>Your cynicism doesn't prove that it's fake, though.</p>
]]></description><pubDate>Fri, 10 Apr 2026 19:58:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47722914</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47722914</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47722914</guid></item><item><title><![CDATA[New comment by skybrian in "US summons bank bosses over cyber risks from Anthropic's latest AI model"]]></title><description><![CDATA[
<p>A year ago the LLMs weren't good enough to find these security issues. They could have done other stuff. But then again, the big tech companies were already doing other stuff, with bug bounties, fuzzing, rewriting key libraries, and so on.<p>This initiative probably could have started a few months sooner with Opus and similar models, though.</p>
]]></description><pubDate>Fri, 10 Apr 2026 15:45:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47719846</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47719846</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47719846</guid></item><item><title><![CDATA[New comment by skybrian in "US summons bank bosses over cyber risks from Anthropic's latest AI model"]]></title><description><![CDATA[
<p>Your cynicism doesn't prove that it's fake, though.</p>
]]></description><pubDate>Fri, 10 Apr 2026 15:34:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47719702</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47719702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47719702</guid></item><item><title><![CDATA[New comment by skybrian in "Sam Altman's Coworkers Say He Can Barely Code and Misunderstands Basic Concepts"]]></title><description><![CDATA[
<p>Didn’t Jobs have nutty beliefs about food and healthcare?</p>
]]></description><pubDate>Fri, 10 Apr 2026 01:14:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47712375</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47712375</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47712375</guid></item><item><title><![CDATA[New comment by skybrian in "The AI Great Leap Forward"]]></title><description><![CDATA[
<p>Which shareholders do you mean? Mark Zuckerberg holds >50% of voting rights for Facebook. Sergey Brin and Larry Page hold >50% of voting rights for Google. That means management gets to do what it wants, within very broad legal limits.<p>On the other hand, how the stock does will matter to other employees because they’re shareholders and they have a stake in the outcome.</p>
]]></description><pubDate>Thu, 09 Apr 2026 03:36:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47699008</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47699008</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47699008</guid></item><item><title><![CDATA[New comment by skybrian in "The AI Great Leap Forward"]]></title><description><![CDATA[
<p>It looks like nobody is collapsing, but OpenAI might be behind Anthropic now:<p><a href="https://www.axios.com/2026/03/18/ai-enterprise-revenue-anthropic-openai" rel="nofollow">https://www.axios.com/2026/03/18/ai-enterprise-revenue-anthr...</a><p><a href="https://x.com/albrgr/status/2041288324464451617" rel="nofollow">https://x.com/albrgr/status/2041288324464451617</a></p>
]]></description><pubDate>Thu, 09 Apr 2026 00:25:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47697866</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47697866</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47697866</guid></item><item><title><![CDATA[New comment by skybrian in "The AI Great Leap Forward"]]></title><description><![CDATA[
<p>Suppose they do somehow collapse. How does that cause wider problems? Their competitors will pick up customers.</p>
]]></description><pubDate>Wed, 08 Apr 2026 21:16:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47696395</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47696395</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47696395</guid></item><item><title><![CDATA[New comment by skybrian in "The AI Great Leap Forward"]]></title><description><![CDATA[
<p>If you want to show that there's a risk of disaster you need to do better than making a silly analogy. Companies will often start expensive projects that fail and then they pick themselves up and move on. Big, profitable companies can afford bigger failures. Google has had a slew of failed projects, and Meta's metaverse stuff tanked, and they're still fine. They can afford to experiment.<p>So which companies are betting so big that it might actually threaten them? Oracle maybe?</p>
]]></description><pubDate>Wed, 08 Apr 2026 21:00:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47696200</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47696200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47696200</guid></item><item><title><![CDATA[New comment by skybrian in "IPv6 is the only way forward"]]></title><description><![CDATA[
<p>First I heard of it. Apparently they are private IPv6 addresses:<p><a href="https://en.wikipedia.org/wiki/Unique_local_address" rel="nofollow">https://en.wikipedia.org/wiki/Unique_local_address</a><p>If your intranet has no IPv4 addresses, this is better than a NAT somehow?</p>
]]></description><pubDate>Wed, 08 Apr 2026 13:08:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47689688</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47689688</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47689688</guid></item><item><title><![CDATA[New comment by skybrian in "I've sold out"]]></title><description><![CDATA[
<p>Looks like Earendil has a product called Lefos, which is an email-based agent:<p><a href="https://lefos.com/about" rel="nofollow">https://lefos.com/about</a><p>Apparently it’s possible to give it access to much of your Google account:<p><a href="https://lefos.com/terms" rel="nofollow">https://lefos.com/terms</a><p>I didn’t see a pricing page, but there is this:<p>> Lefos uses a credit-based billing system. New accounts receive a limited number of starter credits at no cost. Usage of AI features consumes credits.<p>> When your credits run out, you can subscribe to a paid plan to receive additional credits each billing cycle. Subscriptions are processed through Polar, our billing provider. You can manage or cancel your subscription at any time from your account settings.</p>
]]></description><pubDate>Wed, 08 Apr 2026 13:02:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47689605</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47689605</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47689605</guid></item><item><title><![CDATA[New comment by skybrian in "Sam Altman may control our future – can he be trusted?"]]></title><description><![CDATA[
<p>I don't think it's all that hard to avoid working on anything shady. It's not as easy to avoid being <i>associated</i> with anything shady due to widespread cynicism and a tendency to treat tech companies with thousands of projects as a monolith.</p>
]]></description><pubDate>Tue, 07 Apr 2026 01:31:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47669679</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47669679</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47669679</guid></item><item><title><![CDATA[New comment by skybrian in "Anthropic expands partnership with Google and Broadcom for next-gen compute"]]></title><description><![CDATA[
<p>Why should we have strong priors in either direction? Maybe it will keep scaling for decades like Moore's law. Maybe not.</p>
]]></description><pubDate>Tue, 07 Apr 2026 01:11:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47669519</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47669519</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47669519</guid></item><item><title><![CDATA[New comment by skybrian in "Anthropic expands partnership with Google and Broadcom for next-gen compute"]]></title><description><![CDATA[
<p>I guess gigawatts is how we roughly measure computing capacity at the datacenter scale? Also saw something similar here:<p>> Costs and pricing are expressed per “token”, but the published data immediately seems to admit that this is a bad choice of unit because it costs a lot more to output a token than input one. It seems to me that the actual marginal quantity being produced and consumed is “processing power”, which is apparently measured in gigawatt hours these days. In any case, I think more than anything this vindicates my original decision not to get too precise. [...]<p><a href="https://backofmind.substack.com/p/new-new-rules-for-the-new-new-economy" rel="nofollow">https://backofmind.substack.com/p/new-new-rules-for-the-new-...</a><p>Is it priced that way, though? I assume next-gen TPUs will be more efficient?</p>
]]></description><pubDate>Tue, 07 Apr 2026 00:17:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47669124</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47669124</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47669124</guid></item><item><title><![CDATA[New comment by skybrian in "Launch HN: Freestyle – Sandboxes for Coding Agents"]]></title><description><![CDATA[
<p>Any ideas for locking down remote access from an untrusted VM? Cloudflare has object-based capabilities and some similar thing might be useful to let a VM make remote requests without giving it API keys. (Keys could be exfiltrated via prompt injection.)</p>
]]></description><pubDate>Mon, 06 Apr 2026 19:21:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47665609</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47665609</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47665609</guid></item><item><title><![CDATA[New comment by skybrian in "Launch HN: Freestyle: Sandboxes for AI Coding Agents"]]></title><description><![CDATA[
<p>Can you start up multiple VMs easily on a Hetzner box?</p>
]]></description><pubDate>Mon, 06 Apr 2026 19:01:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47665292</link><dc:creator>skybrian</dc:creator><comments>https://news.ycombinator.com/item?id=47665292</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47665292</guid></item></channel></rss>