<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sambigeara</title><link>https://news.ycombinator.com/user?id=sambigeara</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 05 May 2026 08:30:28 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sambigeara" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Thanks! It's just Wazero's default config[1] right now, so it implements (and is constrained to) those capabilities: WASI p1 is supported, WASI p2 isn't (Wazero has yet to implement it). Yes to SIMD, no to GC and tail calls (I think), etc. The full capabilities can be inferred from digging around in the code linked below.<p>Good suggestion on listing capabilities; I'll add a note.<p>[1] <a href="https://github.com/wazero/wazero/blob/2bbd517b7633bf6a126305a1644263416b978484/config.go#L196-L201" rel="nofollow">https://github.com/wazero/wazero/blob/2bbd517b7633bf6a126305...</a></p>
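<p>If it helps, standing the runtime up looks roughly like this (a sketch assuming the usual tetratelabs/wazero import path, not the exact code above):<p><pre><code>package main

import (
	"context"

	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

func main() {
	ctx := context.Background()
	// Default runtime config: the default core feature set (SIMD included),
	// no GC or tail-call proposals.
	r := wazero.NewRuntimeWithConfig(ctx, wazero.NewRuntimeConfig())
	defer r.Close(ctx)
	// WASI preview 1 only; preview 2 isn't implemented by wazero yet.
	wasi_snapshot_preview1.MustInstantiate(ctx, r)
}</code></pre></p>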
]]></description><pubDate>Mon, 04 May 2026 08:36:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=48006095</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=48006095</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48006095</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Yes! I host pln.sh (and subdomains) as assets on my prod cluster. I have a couple of nodes hosted in the EU/US, but do rely on Cloudflare and a couple of A records to land traffic on them.</p>
]]></description><pubDate>Sun, 03 May 2026 14:13:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47997135</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47997135</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47997135</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Ha, thanks! I'll ping you an email.<p>> And what's the public API/stdlib/bindings inside the WASM workers?<p>Wazero (via Extism) carries the load here. As it stands, the runtime lifts three basic host functions into guest code which enable the RPC-like behaviour, injection of caller context, and basic logging[1]; these are in turn referenced in the guest code as in the example[2].<p>In the reverse direction, guest code exposes its public APIs via build directives[3], which are handled by the runtime code[4].<p>Figured that concrete examples might be more helpful here (I hope the formatting works).<p>[1] <a href="https://github.com/Sambigeara/pollen/blob/567e85d5f1407932dd10a8bf35bb80f39d930775/pkg/wasm/hostfuncs.go#L53" rel="nofollow">https://github.com/Sambigeara/pollen/blob/567e85d5f1407932dd...</a>
[2] <a href="https://github.com/Sambigeara/pollen/blob/567e85d5f1407932dd10a8bf35bb80f39d930775/examples/echo/main.go#L19-L20" rel="nofollow">https://github.com/Sambigeara/pollen/blob/567e85d5f1407932dd...</a>
[3] <a href="https://github.com/Sambigeara/pollen/blob/567e85d5f1407932dd10a8bf35bb80f39d930775/examples/echo/main.go#L28-L29" rel="nofollow">https://github.com/Sambigeara/pollen/blob/567e85d5f1407932dd...</a>
[4] <a href="https://github.com/Sambigeara/pollen/blob/567e85d5f1407932dd10a8bf35bb80f39d930775/pkg/wasm/runtime.go#L164" rel="nofollow">https://github.com/Sambigeara/pollen/blob/567e85d5f1407932dd...</a></p>
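<p>And for a flavour of the guest side, a minimal Extism-PDK-style plugin looks something like this (the `greet` export is hypothetical rather than one of the linked examples, and the exact export directive depends on whether you're building with TinyGo or Go's wasmexport support):<p><pre><code>package main

import "github.com/extism/go-pdk"

//go:wasmexport greet
func greet() int32 {
	name := pdk.InputString()            // caller payload handed in by the host
	pdk.Log(pdk.LogInfo, "greet called") // routed to the host's basic logging
	pdk.OutputString("hello, " + name)   // returned to the caller over the RPC path
	return 0
}

func main() {}</code></pre></p>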
]]></description><pubDate>Sun, 03 May 2026 07:26:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47994332</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47994332</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47994332</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Wow! This is seriously cool. And certainly not bad form: there is a level of convergence here, and it's always interesting to see what else is being built out in the ecosystem.<p>I'd agree that Pollen's current cap-enforcement story is limited. I'm not sure what direction I'll be heading in for that, but I was erring on the side of "bring your own enforcement" as a design pattern (ultimately, people deploy their own decision engines as first-class seeds in the cluster). Naturally, the enforcement is weaker than the (fascinating) pattern you've landed on--seriously cool.<p>> and there’s a tiny Clojure-inspired Lisp (“Glia”) that doubles as an LLM-facing or human-facing shell.<p>This is a _lovely_ abstraction. How does it work? Does the LLM emit Glia directly, or is there a translation layer between natural language and the interpreter?<p>> It's a les polished compared to what Sam has shipped, but moving fast, and this post has jolted me into sharing a bit before I had planned!<p>I'm _far_ from polished. I suspect you're underselling your own position here; it looks like you have something very compelling. And apologies for the jolt! Certainly happy to compare notes--I (think) I've added my email to my profile.</p>
]]></description><pubDate>Sun, 03 May 2026 07:17:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47994286</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47994286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47994286</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Absolutely! What's _really_ cool is that if you have disjoint computational steps that don't necessarily scale together linearly, you could split them into separately deployed `pln seeds` and let the cluster organically balance the compute as the different usage patterns occur. And yes, "p2p compute on demand" is certainly an intriguing idea.</p>
]]></description><pubDate>Sun, 03 May 2026 06:23:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47993954</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47993954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47993954</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Hypothetically, yes! If your workloads are bounded and can compile to WASM, break them into logical units which would benefit from individual scalability, and `pln seed` them into the cluster. Ingress can be from any node. Any workload that doesn't suit the WASM seeds can be `pln serve`d on dedicated hosts.<p>You could also establish a dev cluster (/environment) where all devs run a local instance. You can iterate on services quickly, offer ngrok-like capabilities by exposing a local dev instance of a server with `pln serve 8080 test_server` for your colleagues to consume with `pln connect test_server`, etc.<p>A more whacky idea I've not been able to get out of my head, which might become possible as the access story solidifies: imagine a customer could access a controlled subset of your company's offering by having a delegated node, running in their own infrastructure, that you can delegate and revoke at any given time.</p>
]]></description><pubDate>Sun, 03 May 2026 06:20:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47993937</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47993937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47993937</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Thank you. Me too!</p>
]]></description><pubDate>Sun, 03 May 2026 06:12:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47993902</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47993902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47993902</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>This is a cool idea!</p>
]]></description><pubDate>Sun, 03 May 2026 06:12:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47993899</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47993899</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47993899</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Oo, good question. I'd prefer to keep external storage solutions as an exercise for the reader. I've touched on this in other comments: I am looking to introduce state to the cluster internally, but for more sophisticated storage I'll probably avoid steering the project towards any one solution, at least for now.</p>
]]></description><pubDate>Sat, 02 May 2026 18:37:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47989125</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47989125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47989125</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>So, the moment a partition occurs, nodes within the resultant partitions view the remaining peers as the full view of the world. There is _no_ concept of a split-brain scenario.<p>ANY decision around network topology or workload placement is a deterministic calculation run by all nodes individually. If all nodes see the same sub-set of peers representing their entire "cluster", they'll all naturally converge on the same view of what the cluster should look like. If the calculated output determines that Node A should claim Seed B, and it doesn't have it, it requests it from a peer who has it.<p>As soon as the partition recovers, nodes see the additional nodes re-enter the candidate set, which are then added into future routing and placement decisions.<p>The main tradeoff to understand here is that you're at the mercy of the random (best-attempt redundant) placement of a seed. If the entire cluster has, say, 2 replicas of a seed stored on a given pair of nodes, and a resultant partition doesn't happen to contain either of those two nodes, then the seed will be unavailable until the partition recovers. You can work around this with "smart" initial placements (one near, one far, for example), but you're still at the mercy of random partition events. An additional factor is of course getting very unlucky with dropped gossip events, which would also impact the rate of convergence across the cluster.</p>
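<p>To make "deterministic calculation" a bit more concrete: this isn't Pollen's actual placement code, but a rendezvous-hashing-style sketch of the idea, where every node independently derives the same replica set from whatever peers it can currently see:<p><pre><code>package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

// score gives a stable ranking for a (seed, node) pair; no coordination needed.
func score(seedID, nodeID string) uint64 {
	h := sha256.Sum256([]byte(seedID + "/" + nodeID))
	return binary.BigEndian.Uint64(h[:8])
}

// replicas returns the k peers that should hold seedID, given this node's
// current view of live peers. Identical views give identical answers, so a
// partition just shrinks the candidate set rather than splitting the brain.
func replicas(seedID string, peers []string, k int) []string {
	sorted := append([]string(nil), peers...)
	sort.Slice(sorted, func(i, j int) bool {
		return score(seedID, sorted[i]) > score(seedID, sorted[j])
	})
	if k > len(sorted) {
		k = len(sorted)
	}
	return sorted[:k]
}

func main() {
	peers := []string{"node-a", "node-b", "node-c", "node-d"}
	fmt.Println(replicas("seed-b", peers, 2)) // every node with this view computes the same pair
}</code></pre></p>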
]]></description><pubDate>Sat, 02 May 2026 18:35:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47989098</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47989098</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47989098</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Fair comment, and one I'm hearing in a lot of places. I'll work on trying to land some concrete examples.</p>
]]></description><pubDate>Sat, 02 May 2026 17:52:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47988693</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47988693</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47988693</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Thank you! I suspect there'll be a fair few dragons to uncover (memory-constrained nodes and partial views, disk storage, startup/shutdown patterns, etc.); if it's worthy of a write-up then I shall certainly post it here.</p>
]]></description><pubDate>Sat, 02 May 2026 17:49:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47988665</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47988665</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47988665</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Well, I have a lot to thank you for. The single-binary, heterogeneous story would have fallen flat on its face if it wasn't for the brill work you lot are doing, so, thanks!</p>
]]></description><pubDate>Sat, 02 May 2026 17:36:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47988541</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47988541</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47988541</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Thank you!</p>
]]></description><pubDate>Sat, 02 May 2026 17:18:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47988350</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47988350</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47988350</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Honestly, not really. It started as an experiment in local-first, convergent state (I have a historical fascination with this: <a href="https://news.ycombinator.com/item?id=27606604">https://news.ycombinator.com/item?id=27606604</a>, <a href="https://news.ycombinator.com/item?id=42444856">https://news.ycombinator.com/item?id=42444856</a>) and then continued to grow.<p>I do absolutely despise the complexity of administering modern distributed systems, hence my attempt to make Pollen as ergonomic and (as much as I hate to use this term) batteries-included as possible.<p>I've not come across either of those projects, oddly. I have a tendency to avoid looking for similar projects during the development of my own, lest I get despondent and run out of steam. Both sound cool, though. I'd say WASM was a natural workload "type" that fit nicely into what I was trying to achieve with Pollen, rather than a driving factor, if you know what I mean.</p>
]]></description><pubDate>Sat, 02 May 2026 17:18:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47988339</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47988339</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47988339</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Failed to mention in my other reply: a "seed" because I envisioned, perhaps too poetically, "seeding" some generic computational unit into the cluster, only for it to spread organically to the other nodes... sort of like pollen? Maybe.</p>
]]></description><pubDate>Sat, 02 May 2026 17:05:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47988201</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47988201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47988201</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Ha, thanks! The routing is all Pollen. You reach the workloads through the gRPC control API (exposed on a socket on the host) via `pln call seed_name function_name payload` or with a more traditional gRPC client. Once a call is in, Pollen routes it to a keyed WASM instance of the given seed on whatever node happens to be hosting it at that moment.</p>
]]></description><pubDate>Sat, 02 May 2026 16:52:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47988058</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47988058</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47988058</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>OK, bear with me on this, it'll probably be an idle thought-stream because I don't have a concrete answer right now.<p>My intention is for Pollen to become a "generic blob of computational capability" into which you idly `pln seed` a workload and do not have to worry about ANY aspects of managing locality, scale, redundancy, etc. You seed a workload onto any node, and you call it from any (other?) node. If you want to add more computational power to the cluster, you fire up Pollen on another machine and `pln invite` -> `pln join`.<p>Every node also has its own ed25519 cert. The root key pair (the "don't lose this or you're in trouble" key pair) is used to delegate admin certs to other nodes. I'm also working on a mechanism which allows you to bake arbitrary properties into a cert (as it stands, these are lifted into the WASM guest code for, say, in-application authz purposes). I have more ideas about how this can be extended in the future.<p>The root authority can invalidate a participating peer's cert at any point, currently just via a `pln deny` command which is eagerly gossiped around the cluster so other nodes stop talking to the denied node, too. I think this offers opportunities for some fairly novel applications. Perhaps, in the future, you'll provision a node with a certain level of capability or authority to run on some external infrastructure. It'll have all of the (allowed) capabilities of your cluster, but will act like it's local to the external system. Plus, you can revoke its access or reset its capabilities at any point; `pln grant` eagerly applies across the cluster, too.<p>The workloads, at the moment, are just anything you can compile to WASM via the Extism PDK. Stateless, for now, but with a view to add shared state and persistence in the near future!<p>Sorry this was rambly; hopefully it offered something useful.</p>
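<p>To make the cert story slightly less hand-wavey, here's a toy, stdlib-only sketch of the delegation idea (the `Cert` struct is made up; the real cert format, fields and revocation plumbing are Pollen's own):<p><pre><code>package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/json"
	"fmt"
)

// Cert is a hypothetical stand-in for a delegated node certificate with
// arbitrary properties baked in (e.g. authz hints lifted into guest code).
type Cert struct {
	NodePub ed25519.PublicKey `json:"node_pub"`
	Props   map[string]string `json:"props"`
}

func main() {
	rootPub, rootPriv, _ := ed25519.GenerateKey(rand.Reader) // the "don't lose this" pair
	nodePub, _, _ := ed25519.GenerateKey(rand.Reader)        // a joining node's pair

	body, _ := json.Marshal(Cert{NodePub: nodePub, Props: map[string]string{"role": "admin"}})
	sig := ed25519.Sign(rootPriv, body) // the root delegates by signing the cert body

	fmt.Println("cert valid:", ed25519.Verify(rootPub, body, sig))
	// Revocation (`pln deny`) then amounts to gossiping "stop trusting this cert".
}</code></pre></p>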
]]></description><pubDate>Sat, 02 May 2026 16:41:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47987937</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47987937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47987937</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Ha, at a hand-wavey level, yes? Like you say, there's no IPv6 overlay; each node just exposes its own primary UDP port which talks Pollen's mesh protocol. It uses a single QUIC transport, one QUIC connection per peer, and a combination of streams and datagrams for different bits serving both the control and data layers.<p>I'd say "WASM-powered serverless functions" is a reasonable analogy, if your serverless functions maintained a minimal number of live replicas at any one point. Also, of course, you're tied to the physical ceiling of the explicit hosts that are underpinning your cluster (N machines which are not dynamic like, say, lambdas are when they auto-provision to match demand).<p>And yeah, you can also `pln serve` arbitrary services which are then exposed to the cluster, but it's worth mentioning that these will of course not benefit from the inherent, organic autoscaling and locality mechanisms that come with the WASM blobs. I only added it in as a feature so I could retire my (basic) Tailscale usage.<p>Also, you can `pln seed` arbitrary blobs which can be `pln fetch`ed from other nodes. You can also `pln seed ./public my-site` a static webpage which you can reach from any node with `curl -H "Host: my-site" http://<node-addr>:8080/` (8080 being a configurable port).</p>
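<p>If it's useful, the transport shape is roughly this (a quic-go-flavoured sketch with a made-up ALPN string and peer address, not the actual handshake or framing):<p><pre><code>package main

import (
	"context"
	"crypto/tls"

	"github.com/quic-go/quic-go"
)

func main() {
	ctx := context.Background()
	// One QUIC connection per peer, carrying both control and data.
	conn, err := quic.DialAddr(ctx, "peer.example:7946", // hypothetical peer address
		&tls.Config{InsecureSkipVerify: true, NextProtos: []string{"pollen-mesh"}}, // placeholder TLS/ALPN
		&quic.Config{EnableDatagrams: true})
	if err != nil {
		panic(err)
	}
	// Reliable, ordered bits (e.g. moving a seed) go over streams...
	if s, err := conn.OpenStreamSync(ctx); err == nil {
		s.Write([]byte("control frame"))
		s.Close()
	}
	// ...while small, loss-tolerant bits (e.g. gossip) can ride datagrams.
	_ = conn.SendDatagram([]byte("gossip"))
}</code></pre></p>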
]]></description><pubDate>Sat, 02 May 2026 16:13:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47987668</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47987668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47987668</guid></item><item><title><![CDATA[New comment by sambigeara in "Show HN: Pollen – distributed WASM runtime, no control plane, single binary"]]></title><description><![CDATA[
<p>Feel free to message here or privately if you want to discuss your actual use-case; I'd be keen to understand how people might try to use it!</p>
]]></description><pubDate>Sat, 02 May 2026 15:59:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47987534</link><dc:creator>sambigeara</dc:creator><comments>https://news.ycombinator.com/item?id=47987534</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47987534</guid></item></channel></rss>