A beehive of self-healing clusters. A rotating shadow master per cluster. Self-reconciling roll call for guaranteed delivery. NACK aggregation that scales O(N / cluster_size) instead of O(N). And a GCS (ground control station) that wraps any TCP/IP or SATCOM uplink into a Pulse multicast, turning the swarm into a transparent IP-speaking fabric that survives DDIL and DoS conditions by design.
Pulse Protocol today is a software bus: application logic riding on top of fixed WiFi silicon. The software-defined radio (SDR) roadmap takes ownership of MAC and PHY, which is what unlocks beehive scale, FHSS jamming resistance, slot-precise TDMA, and waveform agility.
Software-defined behavior on fixed WiFi silicon.
Custom waveform · custom MAC · slot-precise timing in the FPGA.
Pulse organizes nodes into clusters of ~50, and clusters into beehives of ~20 clusters, roughly 1,000 nodes per beehive. Each cluster elects a shadow master. Beehive geometry is RSSI-driven (received signal strength), so neighbors are physically close and local NACKs and repairs ride short links at low power.
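A minimal sketch of how that clustering could work, assuming each node simply attaches to the candidate master it hears loudest; the names, the greedy assignment, and the hard 50-node cap are illustrative, not the shipping Pulse logic.

```python
from dataclasses import dataclass, field

CLUSTER_SIZE_CAP = 50  # ~50 nodes per cluster, ~20 clusters per beehive

@dataclass
class Cluster:
    master_id: int
    members: set[int] = field(default_factory=set)

def assign_clusters(nodes, masters, rssi):
    """Greedy RSSI-driven assignment: each node joins the master it hears
    loudest, so local NACKs and repairs ride short, low-power links."""
    clusters = {m: Cluster(master_id=m) for m in masters}
    for n in nodes:
        # Candidate masters ranked strongest signal first.
        for m in sorted(masters, key=lambda mid: rssi[(n, mid)], reverse=True):
            if len(clusters[m].members) < CLUSTER_SIZE_CAP:
                clusters[m].members.add(n)
                break
    return clusters
```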
The architecture is built for DDIL — Denied, Degraded, Intermittent, and Limited bandwidth — and for active DoS conditions. Click through the scenarios below to see how the network responds in real time.
Beehive operating in normal state. GCS gateway serving as TCP/IP wrap point. Self-reconciling roll call propagating across clusters. Shadow masters maintaining intra-cluster repair. Inter-cluster mesh active on the dashed amber paths.
Reliable multicast at swarm scale isn't one trick — it's three. Self-reconciling roll call tells every node what's been delivered, without coordination. NACK aggregation collapses thousands of repair requests into dozens. Rotating shadow masters keep no single node on the hook for too long.
Click "Play" on each card to animate the mechanism
Every node periodically broadcasts a compact digest of the messages it has received. Peers compare digests, identify gaps, and request only the missing pieces. The system reconciles itself: digest merges are idempotent and order-independent, so every node converges to the same global view of the roll call without a coordinator, regardless of the order updates arrive.
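A toy version of that digest-and-repair loop, assuming messages carry per-sender sequence numbers; the digest here is just a per-sender high-water mark, and the real wire format is surely more compact.

```python
class RollCall:
    """Toy roll call: track received (sender, seq) pairs, summarize them,
    and compute which frames a peer has seen that we are missing."""

    def __init__(self):
        self.seen: dict[int, set[int]] = {}  # sender -> received seq numbers

    def record(self, sender: int, seq: int) -> None:
        self.seen.setdefault(sender, set()).add(seq)

    def digest(self) -> dict[int, int]:
        # Compact summary broadcast to peers: highest seq seen per sender.
        return {s: max(q) for s, q in self.seen.items()}

    def gaps_against(self, peer_digest: dict[int, int]) -> dict[int, list[int]]:
        # Everything the peer's digest implies exists that we never received.
        missing: dict[int, list[int]] = {}
        for sender, high in peer_digest.items():
            have = self.seen.get(sender, set())
            lost = [q for q in range(1, high + 1) if q not in have]
            if lost:
                missing[sender] = lost
        return missing
```

Merging two nodes' state is a per-sender set union, which is idempotent and order-independent; that property, not any coordinator, is what forces convergence.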
The shadow master watches the cluster's self-reconciling roll call, identifies gaps, and sends one aggregated NACK to the GCS on the cluster's behalf. Local repair: the master serves missing frames from its own cache to peers without a GCS round-trip. Back-channel load drops from O(N) to O(N / cluster_size): at 1,000 nodes in 50-node clusters, the GCS hears ~20 aggregated NACKs instead of up to 1,000 individual ones.
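Continuing the toy model above, a shadow master might fold its members' gap reports together like this; rebroadcast and send_nack are hypothetical stand-ins for the radio layer, and cache is a plain dict keyed by (sender, seq).

```python
def rebroadcast(frame: bytes) -> None:
    ...  # hypothetical: re-send a cached frame into the cluster

def send_nack(missing: dict[int, set[int]]) -> None:
    ...  # hypothetical: one aggregated NACK up to the GCS

def aggregate_and_repair(member_gaps: list[dict[int, list[int]]],
                         cache: dict[tuple[int, int], bytes]) -> None:
    """Union the cluster's gap reports, repair locally what the cache holds,
    and send a single NACK upstream for everything still missing."""
    wanted: dict[int, set[int]] = {}
    for gaps in member_gaps:
        for sender, seqs in gaps.items():
            wanted.setdefault(sender, set()).update(seqs)
    upstream: dict[int, set[int]] = {}
    for sender, seqs in wanted.items():
        for seq in seqs:
            if (sender, seq) in cache:
                rebroadcast(cache[(sender, seq)])  # local repair, no GCS hop
            else:
                upstream.setdefault(sender, set()).add(seq)
    if upstream:
        send_nack(upstream)  # one NACK for the whole cluster
```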
Shadow master is an elected role, not a fixed node. It rotates on a cadence, driven by RSSI re-evaluation, battery balance, or failure detection. No single node carries cluster-master overhead long enough to drain its battery, become a jamming target, or take down the cluster if it fails.
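One way the rotation could be scored, with made-up weights, cadence, and dict keys; only the inputs themselves (RSSI, battery, failure detection) come from the design above.

```python
import time

ROTATION_PERIOD_S = 30.0  # assumed cadence, not a published Pulse constant
MISSED_BEATS = 3          # heartbeats missed before the master counts as failed

def master_score(node: dict) -> float:
    # Favor nodes the cluster hears well, with battery to spare.
    return 0.6 * node["mean_rssi_norm"] + 0.4 * node["battery_frac"]

def maybe_rotate(cluster: dict, now: float | None = None) -> None:
    """Re-elect when the cadence expires or the current master goes silent."""
    now = time.monotonic() if now is None else now
    master = cluster["master"]
    expired = now - cluster["elected_at"] > ROTATION_PERIOD_S
    failed = now - master["last_heartbeat"] > MISSED_BEATS * cluster["heartbeat_s"]
    if expired or failed:
        cluster["master"] = max(cluster["members"], key=master_score)
        cluster["elected_at"] = now
```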
The "GCS" in our diagrams isn't a single radio — it's a gateway. Inbound TCP/IP traffic from a SATCOM uplink, ground network, or remote command server arrives at the gateway, gets wrapped in a Pulse multicast frame, and is broadcast across the beehive. Telemetry flows back the same way: drones publish to their cluster, shadow masters aggregate up the mesh, the gateway unwraps and emits standard IP traffic to whatever's listening upstream.
The combination of self-reconciling roll call (knowing what's delivered) and NACK aggregation (repairing what isn't) is what lets the gateway treat the beehive as a single reliable IP-speaking endpoint — without the upstream caller needing to know Pulse exists.
Click the buttons to animate a packet through the gateway
Command server, video uplink, or ground TCP/IP traffic arrives at the gateway. The IP payload is wrapped in a Pulse multicast frame addressed to the relevant cluster, beehive, or full swarm. Pulse handles delivery — self-reconciling roll call confirms reach, NACK aggregation handles repair. Upstream caller never sees Pulse.
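In code, the inbound wrap step might look like the sketch below; the header layout and scope codes are invented for illustration, since the Pulse frame format isn't specified here.

```python
import struct

# Hypothetical Pulse multicast header: scope, destination id, sequence number.
SCOPE_CLUSTER, SCOPE_BEEHIVE, SCOPE_SWARM = 0, 1, 2
HEADER = struct.Struct("!BHI")

def wrap(ip_payload: bytes, scope: int, dest: int, seq: int) -> bytes:
    """Inbound: prepend a Pulse multicast header to the raw IP payload.
    The upstream caller sent ordinary IP and never sees this framing."""
    return HEADER.pack(scope, dest, seq) + ip_payload
```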
Drones publish telemetry, video, or status into their cluster. Shadow masters aggregate and forward up the inter-cluster mesh to the gateway. The gateway unwraps the Pulse frame and emits standard IP traffic on the SATCOM uplink or ground network — looks like any other endpoint to upstream consumers.
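And the outbound counterpart, restating the same invented header so the snippet runs standalone:

```python
import struct

SCOPE_CLUSTER, SCOPE_BEEHIVE, SCOPE_SWARM = 0, 1, 2
HEADER = struct.Struct("!BHI")  # same hypothetical layout as the wrap sketch

def unwrap(frame: bytes) -> tuple[int, int, int, bytes]:
    """Outbound: strip the Pulse header, hand the IP payload upstream."""
    scope, dest, seq = HEADER.unpack_from(frame)
    return scope, dest, seq, frame[HEADER.size:]

# Round trip: telemetry wrapped for cluster 7 comes back out byte-identical.
frame = HEADER.pack(SCOPE_CLUSTER, 7, 42) + b"telemetry bytes"
assert unwrap(frame) == (SCOPE_CLUSTER, 7, 42, b"telemetry bytes")
```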
What the field needs versus what each build delivers. Today is honest about being a software bus. The roadmap is what makes plug-and-play 1,000-node operations possible.