<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: leventilo</title><link>https://news.ycombinator.com/user?id=leventilo</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 09:10:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=leventilo" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Show HN: BoltzPay – fetch() that pays for AI agents (x402 and L402)]]></title><description><![CDATA[
<p>I built an open-source SDK that lets AI agents pay for API data automatically.<p>The problem: a growing number of APIs return HTTP 402 Payment Required. Coinbase reports $50M+ in x402 transactions over the last 30 days. Stripe and Cloudflare joined the x402 Foundation last month. The payment layer of the internet is being built right now, but existing HTTP clients just fail on 402 responses.<p>BoltzPay makes it one line:<p><pre><code>  const agent = new BoltzPay({ budget: { daily: "$5.00" } });
  const data = await agent.fetch("https://x402.twit.sh/tweets/search?words=AI+agents");
</code></pre>
The SDK detects which payment protocol the endpoint uses (x402 or L402), signs the payment with the developer's own keys, and returns the data. No dashboard, no API keys to manage per provider. Just fetch().<p>What's under the hood:<p>* Multi-protocol: x402 (EIP-712 signed USDC on Base/Solana) + L402 (Lightning invoices via NWC/Alby)<p>* Parallel protocol detection: ProtocolRouter probes 402 headers with Promise.allSettled(), auto-fallback between adapters<p>* Budget engine: daily/monthly/per-transaction caps with persistent state. 90% threshold warnings. Agents never touch your wallet directly<p>* Built-in endpoint discovery: the SDK indexes live paid APIs across the x402 ecosystem and probes them in real time (health, pricing, protocol support)<p>* Delivery diagnostics: when payment succeeds but the server errors, the SDK returns a structured diagnosis (DNS, latency, protocol format, server health)<p>Ships as 9 packages: TypeScript SDK, MCP server (7 tools for Claude/Cursor), CLI, Vercel AI SDK, LangChain, CrewAI, n8n, and OpenClaw. npm + PyPI. 1200+ tests. MIT licensed. Your keys, no vendor lock-in.<p>What I didn't expect: within two weeks, autonomous AI agents started opening issues on the repo, proposing trust scoring and payment verification integrations. The people behind them turned out to be serious builders (a Microsoft Principal Engineer, PhD researchers in the Bitcoin/Lightning space). The agent economy isn't coming. It's already here and looking for plumbing.</p>
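<p>To give a rough idea of the protocol-detection step, here is a minimal sketch (not BoltzPay's actual internals): given the headers of a 402 response, pick the payment adapter that applies. The specific header names checked here, and the lowercased keys, are assumptions for illustration only.</p>

```typescript
// Hedged sketch of 402 protocol detection, not the real ProtocolRouter.
// Assumes header keys are already lowercased; header names are illustrative.
type Protocol = "x402" | "L402";

function detectProtocol(headers: Map<string, string>): Protocol | null {
  const auth = (headers.get("www-authenticate") ?? "").toLowerCase();
  // L402 servers challenge via WWW-Authenticate with a macaroon + Lightning invoice
  if (auth.startsWith("l402") || auth.startsWith("lsat")) return "L402";
  // x402 servers advertise payment requirements in a dedicated header (name assumed)
  if (headers.has("x-payment") || headers.has("x-payment-required")) return "x402";
  return null; // plain 402 with no recognizable payment challenge
}
```

<p>In the SDK this kind of check runs per adapter in parallel (the post mentions Promise.allSettled()), with the budget engine consulted before any payment is signed.</p>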
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47353380">https://news.ycombinator.com/item?id=47353380</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 12 Mar 2026 16:30:57 +0000</pubDate><link>https://github.com/leventilo/boltzpay</link><dc:creator>leventilo</dc:creator><comments>https://news.ycombinator.com/item?id=47353380</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47353380</guid></item><item><title><![CDATA[New comment by leventilo in "Yann LeCun raises $1B to build AI that understands the physical world"]]></title><description><![CDATA[
<p>Interesting that AMI is betting on video-first world models. A 4-year-old learns physics mostly through interaction (pushing, dropping, breaking things), not just watching. Vision helps, but the feedback loop from acting in the world seems at least as important. Still, glad someone is putting $1B on a fundamentally different bet than "more text, bigger model."</p>
]]></description><pubDate>Wed, 11 Mar 2026 16:57:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47338097</link><dc:creator>leventilo</dc:creator><comments>https://news.ycombinator.com/item?id=47338097</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47338097</guid></item><item><title><![CDATA[New comment by leventilo in "BitNet: Inference framework for 1-bit LLMs"]]></title><description><![CDATA[
<p>The energy numbers are the real story here: a 70-82% reduction on CPU inference. If 1-bit models ever get good enough, running them on commodity hardware with no GPU budget would change who can deploy LLMs. That's more interesting than the speed benchmarks imo.</p>
]]></description><pubDate>Wed, 11 Mar 2026 16:36:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47337817</link><dc:creator>leventilo</dc:creator><comments>https://news.ycombinator.com/item?id=47337817</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47337817</guid></item></channel></rss>