<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: bonsai_spool</title><link>https://news.ycombinator.com/user?id=bonsai_spool</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 13:42:45 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=bonsai_spool" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by bonsai_spool in "Evaluation of Claude Mythos Preview's cyber capabilities"]]></title><description><![CDATA[
<p>> On individual tasks Claude and GPT are comparable<p>That is not what the first graphs show - the Anthropic models cluster at 'better' positions on the graph, and I imagine a test would show that the differences are statistically significant.</p>
]]></description><pubDate>Tue, 14 Apr 2026 05:28:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47761602</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47761602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47761602</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Evaluation of Claude Mythos Preview's cyber capabilities"]]></title><description><![CDATA[
<p>>  From what I see, the first two graphs have OpenAI models above Claude<p>That's true only of the final graph, and that graph is perhaps the least instructive: they talk about ranges of outcomes but they don't show whether all of the models besides Mythos / Opus 4.6 overlap.<p>Take a look at all three graphs together and it's clear Anthropic are doing better in this arena.</p>
]]></description><pubDate>Mon, 13 Apr 2026 23:43:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47759409</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47759409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47759409</guid></item><item><title><![CDATA[New comment by bonsai_spool in "New Orleans's Car-Crash Conspiracy"]]></title><description><![CDATA[
<p>Garrison was killed four days after the indictment was released. From the text:<p>> On September 18, 2020, the Justice Department unsealed a seven-count indictment charging Garrison with “staging over fifty accidents.” Alfortish and Motta weren’t indicted or named in the document, but they were described, respectively, as “Co-Conspirator A” and “Attorney B.” Garrison’s coöperation with the F.B.I. wasn’t referenced in the text—and it might have seemed that charging him in such a public fashion would be a good way to conceal his role as an informant. But a close reading of the filing encouraged certain inferences. One stray sentence asserted that “Co-Conspirator A instructed Garrison on the number of passengers to include in staged collisions.” Alfortish might have made some unconventional life choices, but he wasn’t a total idiot. He certainly hadn’t supplied that information to the Feds—and the only other person who could have done so was Garrison.<p>> Four days after the indictment was made public, Garrison had dinner with his mother, Sandra Fontenette, who was seventy-four, at the tidy condominium that she owned, on Foy Street. They ate gumbo and talked. Garrison had been texting with a woman named Kim that afternoon, and they had made plans to hang out after dinner. At around eight-thirty, the doorbell rang, and Garrison went to meet her. But, upon opening the front door, he shouted to his mother, “Get down!” Ten shots rang out, and Garrison collapsed on the floor, dead.</p>
]]></description><pubDate>Mon, 13 Apr 2026 23:40:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47759384</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47759384</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47759384</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Evaluation of Claude Mythos Preview's cyber capabilities"]]></title><description><![CDATA[
<p>Looking closely at the graphs, the Anthropic models are clearly all higher than the OpenAI models.<p>Whether the difference is meaningful can’t be determined from the graphs (and picking one graph over the ensemble also doesn't have a reasoned basis, given that these are all arbitrary).</p>
]]></description><pubDate>Mon, 13 Apr 2026 18:48:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47756304</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47756304</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47756304</guid></item><item><title><![CDATA[New comment by bonsai_spool in "The Life and Death of the Book Review"]]></title><description><![CDATA[
<p>In a 16th century French literature course, I read Montaigne in the original—I realized then how much I rely on paragraphs to read prose...<p>I don't quite see why the author shuns them.</p>
]]></description><pubDate>Sun, 12 Apr 2026 00:06:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47735053</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47735053</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47735053</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>The big thing you're missing here is that biology people don't (in my experience) post opinions about the future/futility/ease/unimportance of computer science <i>especially when their opinion goes against other biologists' evidence-backed views</i>. This is a cultural thing in biology.<p>It's not your fault that you don't know this, but this whole subthread is very CS-coded in its disdain for other software people's standard of evidence.</p>
]]></description><pubDate>Tue, 07 Apr 2026 22:25:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682132</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47682132</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682132</guid></item><item><title><![CDATA[New comment by bonsai_spool in "System Card: Claude Mythos Preview [pdf]"]]></title><description><![CDATA[
<p>Actually you're saying similar things:<p>Rent-seeking of old was a ground rent, monies paid for the land <i>without</i> considering the building that was on it.<p>Residential rents today often have implied warrants because of modern law, so your landlord is essentially selling you a service at a particular location.</p>
]]></description><pubDate>Tue, 07 Apr 2026 20:46:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47681155</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47681155</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47681155</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>> Yes that is correct. I would like a large body of experience and consensus to rely on as opposed to the regular 'trust the experts' argument, which has been shown for decades to be a deeply flawed and easy to manipulate argument.<p>Yes, it is far inferior to the <i>'Trust torginus and his ability to understand the large body of experience that other actual subject-matter-experts have somehow not understood'</i> strategy.</p>
]]></description><pubDate>Tue, 07 Apr 2026 20:35:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47681020</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47681020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47681020</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>>>It's very easy to learn more about this if it's seriously a question you have.<p>>No, it's not. It took years of polishing by software engineers, who understand this exact profession to get models where they are now<p>This reads as defensive. The thing that is easy to learn is 'why are biology ai LLMs dangerous chatgpt claude'. I have never googled this before, so I'll do this with the reader, live. I'm applying a date cutoff of 12/31/24 by the way.<p>Here, dear reader, are the first five links. I wish I were lying about this:<p>- <a href="https://sciencebusiness.net/news/ai/scientists-grapple-risk-artificial-intelligence-created-pandemics" rel="nofollow">https://sciencebusiness.net/news/ai/scientists-grapple-risk-...</a><p>- <a href="https://www.governance.ai/analysis/managing-risks-from-ai-enabled-biological-tools" rel="nofollow">https://www.governance.ai/analysis/managing-risks-from-ai-en...</a><p>- <a href="https://gssr.georgetown.edu/the-forum/topics/biosec/the-double-edged-sword-opportunities-and-risks-of-ai-in-biosecurity/" rel="nofollow">https://gssr.georgetown.edu/the-forum/topics/biosec/the-doub...</a><p>- <a href="https://www.vox.com/future-perfect/23820331/chatgpt-bioterrorism-bioweapons-artificial-inteligence-openai-terrorism" rel="nofollow">https://www.vox.com/future-perfect/23820331/chatgpt-bioterro...</a><p>- <a href="https://www.reddit.com/r/ClaudeAI/comments/1de8qkv/awareness_about_the_potential_harm_from/" rel="nofollow">https://www.reddit.com/r/ClaudeAI/comments/1de8qkv/awareness...</a><p>I don't know about you, but that counts as easy to me.<p>-----<p>> I would apply this framework to biology - this time, expert effort, and millions of GPU hours and a giant corpus that is open source clearly has not been involved in biology.<p>I've been getting good programming and molecular biology results out of these back to GPT3.5.<p>I don't know what to tell you—if you really wanted to understand the importance, you'd know already.</p>
]]></description><pubDate>Tue, 07 Apr 2026 19:34:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47680277</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47680277</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47680277</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>> Just reading this, the inevitable scaremongering about biological weapons comes up.<p>It's very easy to learn more about this if it's seriously a question you have.<p>I don't quite follow why you think you are so much more thoughtful than Anthropic/OpenAI/Google: you agree that LLMs can autonomously create very bad things in software, yet in this area that is not your domain of expertise you insist that LLMs <i>cannot</i> autonomously create damaging things in biology.<p>I will be charitable and reframe your question for you: is outputting a sequence of tokens, let's call them characters, by an LLM dangerous? Clearly not; we have to figure out what interpreter is being used, download runtimes, etc.<p>Is outputting a sequence of tokens, let's call them DNA bases, by an LLM dangerous? What if we call them RNA bases? Amino acids? What if we're able to send our token output to a machine that automatically synthesizes the relevant molecules?</p>
]]></description><pubDate>Tue, 07 Apr 2026 19:11:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47679966</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47679966</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47679966</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Unverified: What Practitioners Post About OCR, Agents, and Tables"]]></title><description><![CDATA[
<p>> I mean "a" text! I was just curious how you write. Do you prefer to write comments?<p>In all fairness, I've been accused of sounding like an LLM this year, which is quite unfortunate as I think we're coming to the end of careful writing.</p>
]]></description><pubDate>Mon, 06 Apr 2026 06:06:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47657478</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47657478</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47657478</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Unverified: What Practitioners Post About OCR, Agents, and Tables"]]></title><description><![CDATA[
<p>I'll amend my statement; I think the comparison text was written by an LLM with human editing. As I read it more, there are also some LLM-isms there.</p>
]]></description><pubDate>Mon, 06 Apr 2026 03:57:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656826</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47656826</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656826</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Unverified: What Practitioners Post About OCR, Agents, and Tables"]]></title><description><![CDATA[
<p>Here's a great example of something written by a human that otherwise seems to have a similar structure to the OP:<p><a href="https://lalitm.com/post/building-syntaqlite-ai/" rel="nofollow">https://lalitm.com/post/building-syntaqlite-ai/</a><p>Flags for LLM vs human drafting:<p>- Subtitles have the rhetoric turned up to 11 with LLMs. (<i>Note: who has ever had multiple sentences as a blog post heading? It's bizarre</i>):<p><pre><code>  - LLM "The Demo Works. Production Does Not."

  - Human "AI is why this project exist, and why it's as complete as it is"

</code></pre>
- Sources for claims that call for evidence<p><pre><code>  - LLM "Six months ago, a practitioner could name a preferred OCR engine with confidence. Based on what I read, that confidence is gone." - *What was read?*

  - Human "AI coding tools and playing slot machines"[ref]
</code></pre>
- Variable paragraph lengths, where things that need more explanation have longer paragraphs (and vice versa)<p><pre><code>  - LLM *Scroll through—each thing is about the same length*
</code></pre>
----<p>There are lots of tells like this. This is a moment to get good at detecting LLM text in case it's surreptitiously used to your detriment.</p>
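The third flag above (uniform paragraph lengths) can be roughly mechanized. This is my own illustrative sketch, not anything from the comment or the linked post, and the signal it computes is a heuristic, not a calibrated detector:

```python
def paragraph_length_spread(text: str) -> float:
    """Coefficient of variation of paragraph word counts.

    Low values mean suspiciously uniform paragraph lengths (an
    LLM-drafting flag per the comment above); higher values mean the
    variable lengths more typical of human drafts.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    lengths = [len(p.split()) for p in paragraphs]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return (variance ** 0.5) / mean  # std dev relative to the mean

# Three paragraphs of ~50 words each vs. three of wildly different sizes.
uniform = "word " * 50 + "\n\n" + "word " * 52 + "\n\n" + "word " * 49
varied = "word " * 8 + "\n\n" + "word " * 120 + "\n\n" + "word " * 30
assert paragraph_length_spread(uniform) < paragraph_length_spread(varied)
```

Where to set the threshold between "uniform" and "varied" is a judgment call; this only makes the eyeball test ("scroll through, each thing is about the same length") reproducible.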
]]></description><pubDate>Mon, 06 Apr 2026 01:53:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656078</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47656078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656078</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Unverified: What Practitioners Post About OCR, Agents, and Tables"]]></title><description><![CDATA[
<p>You can ask an LLM to write in a different voice—they don't all sound exactly the same, though this one is no different from other examples.<p>When I use an LLM, it tries to sound like me, but there are still tendencies it falls back on, especially when the context window begins to expand.<p>The 'missing subject nouns' style is probably the LLM's way of sounding like an authoritative source in a technical field, since many programmers like to write that way.</p>
]]></description><pubDate>Mon, 06 Apr 2026 00:25:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47655444</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47655444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47655444</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Unverified: What Practitioners Post About OCR, Agents, and Tables"]]></title><description><![CDATA[
<p>>  Which text did you last publish?<p>Never published an LLM text, friend.<p>And if somebody needs Claude to get something published, that person should find a better line of business, one more suited to her or his aptitudes.<p>> I would be genuinely interested in specific changes you would do if you were the editor.<p>This whole thing would get sent back with the kind request to think of an argument and write it out. By hand. Without an LLM.</p>
]]></description><pubDate>Mon, 06 Apr 2026 00:23:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47655426</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47655426</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47655426</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Unverified: What Practitioners Post About OCR, Agents, and Tables"]]></title><description><![CDATA[
<p>> Did you read the article?<p>How else do you think I would have come to write this comment? I got to the second major heading before realizing that there is little human input in this document.<p>I use LLMs, but I will never impose Claude's intellectual musings on another person as some sort of intellectual insight.<p>This is about the same as copying someone else's homework and then presenting the copied work as an example of deep brilliance. The copying isn't great, but the boasting is absurd. Who are we trying to con?</p>
]]></description><pubDate>Sun, 05 Apr 2026 16:28:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47651025</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47651025</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47651025</guid></item><item><title><![CDATA[New comment by bonsai_spool in "Unverified: What Practitioners Post About OCR, Agents, and Tables"]]></title><description><![CDATA[
<p>Please write in your own words! I’m not inclined to read something if it consists of what you copied and pasted from Claude.</p>
]]></description><pubDate>Sun, 05 Apr 2026 10:44:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47648073</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47648073</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47648073</guid></item><item><title><![CDATA[New comment by bonsai_spool in "A.I. Helped One Man (and His Brother) Build a $1.8B Company"]]></title><description><![CDATA[
<p>It's in the article—he buys advertising on news sites. They claim to have verified his financials.</p>
]]></description><pubDate>Fri, 03 Apr 2026 11:45:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47625576</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47625576</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47625576</guid></item><item><title><![CDATA[New comment by bonsai_spool in "A.I. Helped One Man (and His Brother) Build a $1.8B Company"]]></title><description><![CDATA[
<p>> I’m pretty sure the writer included this and other details precisely so that readers would understand the ethics of this company.<p>Maybe, but it felt like this was meant to support the idea that the company is scrappy / under construction.</p>
]]></description><pubDate>Fri, 03 Apr 2026 11:43:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47625554</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47625554</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47625554</guid></item><item><title><![CDATA[Mapping pesticides to cancer risk at the country scale with spatial exposomics]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.nature.com/articles/s44360-026-00087-0">https://www.nature.com/articles/s44360-026-00087-0</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47601055">https://news.ycombinator.com/item?id=47601055</a></p>
<p>Points: 7</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 01 Apr 2026 14:03:57 +0000</pubDate><link>https://www.nature.com/articles/s44360-026-00087-0</link><dc:creator>bonsai_spool</dc:creator><comments>https://news.ycombinator.com/item?id=47601055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47601055</guid></item></channel></rss>