<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tom_tom_watson</title><link>https://news.ycombinator.com/user?id=tom_tom_watson</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 12:30:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tom_tom_watson" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tom_tom_watson in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>Hi HN,<p>I’m working on an API that provides heuristic signals about whether a piece of text is likely LLM-generated.<p>The system uses an ensemble of techniques, and on my internal evaluation set it achieves ~99.8% accuracy. I’m fully aware that general-purpose AI text detection is hard and adversarial, so this is meant as a probabilistic signal rather than a definitive classifier.<p>There’s a simple demo site: <a href="https://checktica.com" rel="nofollow">https://checktica.com</a><p>And public API docs (no API key required): <a href="https://api.checktica.com/v1/docs" rel="nofollow">https://api.checktica.com/v1/docs</a><p>I’d really appreciate critical feedback, especially on:<p>- Failure modes you’ve seen in similar systems<p>- Whether this framing makes sense at all<p>- Where such a tool might actually be useful (or useless)<p>Happy to answer technical questions.</p>
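<p>To give a feel for the intended usage, here is a minimal Python sketch of a client. The endpoint path, request fields, and response interpretation below are illustrative assumptions for this comment, not the authoritative schema; see the API docs linked above for the real shapes.</p>

```python
# Hypothetical client sketch for a text-detection API like the one described.
# The endpoint path and JSON field names are assumptions, not the real schema.
import json
import urllib.request

API_URL = "https://api.checktica.com/v1/detect"  # hypothetical endpoint


def build_request(text: str) -> urllib.request.Request:
    """Build a JSON POST request carrying the text to score."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def interpret(score: float, threshold: float = 0.5) -> str:
    """Treat the returned score as a probabilistic signal, not a verdict."""
    return "likely LLM-generated" if score >= threshold else "likely human-written"
```

<p>The point of the interpret helper is the framing from the post: downstream consumers should threshold a probability themselves rather than treat the API's output as a binary ground truth.</p>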
]]></description><pubDate>Mon, 09 Feb 2026 18:03:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46948557</link><dc:creator>tom_tom_watson</dc:creator><comments>https://news.ycombinator.com/item?id=46948557</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46948557</guid></item></channel></rss>