Pulse Architecture · Interactive Deep-Dive

PULSE at scale.

A beehive of self-healing clusters. A rotating shadow master per cluster. Self-reconciling roll call for guaranteed delivery. NACK aggregation that scales O(N / cluster_size) instead of O(N). And a GCS that wraps any TCP/IP or SATCOM uplink into a Pulse multicast — turning the swarm into a transparent IP-speaking fabric that survives DDIL and DoS conditions by design.

1,000+ nodes per beehive
~50 nodes per cluster
<1s master failover
Survives DDIL by design
01 · Communications stack

From firmware
to frequency.

Pulse Protocol today is a software bus — application logic riding on top of fixed WiFi silicon. The SDR roadmap takes ownership of MAC and PHY, which is what unlocks beehive scale, FHSS jamming resistance, slot-precise TDMA, and waveform agility.

Today

Pulse on COTS MCU

Software-defined behavior on fixed WiFi silicon.

Layer | Today's build | Ownership
Application | Telemetry · FOTA · jammer detection | Pulse
Transport | UDP · custom retries | Pulse
Network | Static peer list · no real routing | Pulse
Data link / MAC | Upper MAC in firmware · lower MAC frozen in WiFi silicon | Partial
PHY | 802.11 OFDM · no waveform control | Fixed
RF front end | Integrated radio block · 2.4 GHz only | Fixed
Antenna | PCB trace | Fixed
Roadmap

Pulse on SDR

Custom waveform · custom MAC · slot-precise timing in the FPGA.

Layer | SDR build | Ownership
Application | Telemetry · FOTA · mission logic · cluster roles | Pulse
Transport | Self-reconciling roll call · NACK aggregation · local repair | Pulse
Network | RSSI-aware beehive clustering · multi-hop mesh | Pulse
Data link / MAC | TDMA + FHSS · slot timing on FPGA · upper MAC on host MCU | Pulse
PHY | GMSK / CPM modulator · FEC · channelizer in FPGA gateware | Pulse
RF front end | Wideband transceiver · UHF / L-band / S-band · configured by Pulse | Configured
Antenna | Selected per band and mission profile | Selected
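To make the slot-precise TDMA + FHSS pairing concrete, here is a minimal sketch of one common approach: deriving the hop channel for each TDMA slot from a shared seed and a synchronized slot counter. The slot length, channel count, and hash-based hop function are illustrative assumptions, not Pulse's actual MAC parameters.

```python
# Illustrative only: mapping a TDMA slot index to an FHSS channel.
# Seed, slot length, and channel count are hypothetical placeholders.
import hashlib

SLOT_US = 5_000          # assumed TDMA slot length in microseconds
NUM_CHANNELS = 64        # assumed number of hop channels in the band plan

def slot_index(time_us: int) -> int:
    """Slot counter from a synchronized time base (e.g. disciplined in gateware)."""
    return time_us // SLOT_US

def hop_channel(network_seed: bytes, slot: int) -> int:
    """Derive the channel for a slot from a shared seed, so every node that
    shares the seed and a common time base lands on the same frequency."""
    digest = hashlib.sha256(network_seed + slot.to_bytes(8, "big")).digest()
    return digest[0] % NUM_CHANNELS

# Example: nodes sharing the seed agree on the channel for slot 1234.
seed = b"mission-key-material"
print(hop_channel(seed, slot_index(1234 * SLOT_US)))
```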
02 · Hive topology · DDIL resilience

Beehive formations
survive contested RF.

Pulse organizes nodes into clusters of ~50, and clusters into beehives of ~20 clusters. Each cluster elects a shadow master. Beehive geometry is RSSI-driven — neighbors are physically close, so local NACKs and repairs ride short links at low power.
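A minimal sketch of that RSSI-driven attachment step, assuming each node simply joins the candidate cluster head it hears loudest above a floor. The threshold value and the greedy join rule are illustrative assumptions, not Pulse's clustering algorithm.

```python
# Sketch of RSSI-driven cluster assignment: a node attaches to the cluster
# head it hears loudest, so cluster geometry follows radio proximity.
from typing import Dict, Optional

JOIN_THRESHOLD_DBM = -85.0   # assumed minimum RSSI to consider a cluster head

def pick_cluster(rssi_by_head: Dict[str, float]) -> Optional[str]:
    """Return the cluster-head ID with the strongest RSSI above threshold,
    or None if no head is audible (node would trigger a new election)."""
    audible = {h: r for h, r in rssi_by_head.items() if r >= JOIN_THRESHOLD_DBM}
    if not audible:
        return None
    return max(audible, key=audible.get)

# Example: a node hearing three heads joins HIVE-03, its closest neighbor.
print(pick_cluster({"HIVE-02": -91.0, "HIVE-03": -62.5, "HIVE-04": -78.0}))
```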

The architecture is built for DDIL — Denied, Degraded, Intermittent, and Limited bandwidth — and for active DoS conditions. Click through the scenarios below to see how the network responds in real time.

Resilience scenario
[Diagram: SATCOM uplink into the GCS gateway (TCP/IP ⇄ Pulse), feeding clusters HIVE-01 through HIVE-07 of 50 nodes each, each with a rotating shadow master (R). HIVE-04 is shown jammed and FHSS-hopping. Legend: 1 beehive = 20 clusters = 1,000+ nodes; shadow masters, one per cluster, rotate over time.]
All systems nominal

Beehive operating in normal state. GCS gateway serving as TCP/IP wrap point. Self-reconciling roll call propagating across clusters. Shadow masters maintaining intra-cluster repair. Inter-cluster mesh active on the dashed amber paths.

Scenario · Normal
Operational nodes: 1000 / 1000 (full reach)
Mesh integrity: FULL (all clusters healthy)
GCS uplink: UP (TCP/IP ⇄ Pulse active)
Failover state: NOMINAL (no rotations active)
03 · Guaranteed delivery

Three mechanisms
working together.

Reliable multicast at swarm scale isn't one trick — it's three. Self-reconciling roll call tells every node what's been delivered, without coordination. NACK aggregation collapses thousands of repair requests into dozens. Rotating shadow masters keep no single node on the hook for too long.

Click "Play" on each card to animate the mechanism

A · Roll call

Self-Reconciling Roll Call

[Diagram: Node A holds {m1,m2}, Node B holds {m1,m3}, Node C holds {m2,m3}; after exchanging digests all three converge on {m1,m2,m3}.]

Every node periodically broadcasts a compact digest of the messages it has received. Peers compare digests, identify gaps, and request only the missing pieces. The system reconciles itself: because digest merges are order-independent, every node converges to the same global view of the roll call without a coordinator, regardless of the order updates arrive in.

No master required · Order-independent · Convergent by design
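A toy model of the reconciliation step, assuming the digest is modeled as a plain set of message IDs (a real digest would be far more compact). It shows the property the card describes: merges are set unions, so the converged view is the same regardless of arrival order.

```python
# Toy roll-call model: each node gossips the set of message IDs it holds;
# merging is a set union, so the result is order-independent.

def merge(local_ids: set, peer_digest: set) -> tuple:
    """Merge a peer digest; return the updated view and the IDs still to request."""
    missing = peer_digest - local_ids
    return local_ids | peer_digest, missing

node_a = {"m1", "m2"}
node_b = {"m1", "m3"}
node_c = {"m2", "m3"}

# A hears B, then C -- or in any other order -- and converges to the same view.
view, need = merge(node_a, node_b)
view, more = merge(view, node_c)
print(sorted(view), sorted(need | more))   # ['m1', 'm2', 'm3'] ['m3']
```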
B · Repair

NACK aggregation

[Diagram: 4 missed frames across the cluster collapse into 1 aggregated NACK from the shadow master (R) to the GCS.]

The shadow master watches the cluster's self-reconciling roll call, identifies gaps, and sends one aggregated NACK to the GCS on the cluster's behalf. Local repair goes further: the master serves missing frames from its own cache to peers with no GCS round-trip at all. Back-channel load drops from O(N) to O(N / cluster_size).

NORM · SRM · local repair
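A sketch of the aggregation step, under the assumption that the shadow master can read each member's gap set from the roll call: gaps it can serve from its own cache are repaired locally, and only the remainder leaves the cluster as a single NACK.

```python
# Sketch of shadow-master NACK aggregation: member gaps are unioned, cached
# frames are repaired peer-to-peer, and one NACK covers the rest.

def aggregate_nacks(member_gaps: dict, master_cache: set) -> tuple:
    """Return (frames to repair locally from cache, frames to NACK upstream)."""
    all_missing = set().union(*member_gaps.values()) if member_gaps else set()
    local_repair = all_missing & master_cache
    upstream_nack = all_missing - master_cache
    return local_repair, upstream_nack

gaps = {"node-07": {41, 42}, "node-13": {42}, "node-21": {42, 57}}
local, nack = aggregate_nacks(gaps, master_cache={41, 42})
print(local, nack)   # {41, 42} served from cache; {57} -> one NACK to the GCS
```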
C · Resilience

Rotating master

[Diagram: the shadow-master role (R) rotates to a different node at t₀, t₁, t₂, t₃.]

Shadow master is an elected role, not a fixed node. It rotates on cadence — driven by RSSI re-evaluation, battery balance, or failure detection. No single node carries cluster-master overhead long enough to drain its battery, become a jamming target, or take down the cluster if it fails.

RSSI threshold · battery balance · <1s failover
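An illustrative election function for the rotation, assuming a simple weighted score over mean neighbor RSSI and remaining battery, with failed nodes excluded by heartbeat. The weights, fields, and normalization range are placeholders, not Pulse's actual election criteria.

```python
# Illustrative shadow-master election: the role moves to whichever live node
# currently has the best mix of link quality and remaining battery.
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    node_id: str
    mean_rssi_dbm: float     # average RSSI to cluster neighbors
    battery_pct: float       # remaining battery, 0-100
    alive: bool              # failure detection (missed heartbeats => False)

def elect_master(candidates: List[Candidate]) -> str:
    """Pick the next shadow master; called on cadence or on master failure."""
    live = [c for c in candidates if c.alive]
    # Assumed normalization: -100..-40 dBm mapped roughly into 0..1.
    def score(c: Candidate) -> float:
        return 0.6 * ((c.mean_rssi_dbm + 100) / 60) + 0.4 * (c.battery_pct / 100)
    return max(live, key=score).node_id

print(elect_master([
    Candidate("node-03", -58.0, 71.0, True),
    Candidate("node-11", -52.0, 34.0, True),
    Candidate("node-19", -75.0, 96.0, False),   # failed: excluded from election
]))
```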
04 · External connectivity

The GCS is a
TCP/IP gateway.

The "GCS" in our diagrams isn't a single radio — it's a gateway. Inbound TCP/IP traffic from a SATCOM uplink, ground network, or remote command server arrives at the gateway, gets wrapped in a Pulse multicast frame, and is broadcast across the beehive. Telemetry flows back the same way: drones publish to their cluster, shadow masters aggregate up the mesh, the gateway unwraps and emits standard IP traffic to whatever's listening upstream.

The combination of self-reconciling roll call (knowing what's delivered) and NACK aggregation (repairing what isn't) is what lets the gateway treat the beehive as a single reliable IP-speaking endpoint — without the upstream caller needing to know Pulse exists.
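A minimal sketch of the wrap/unwrap step, assuming a toy Pulse frame header of a 16-bit multicast group and a 32-bit sequence number in front of the untouched IP payload. The header layout and field sizes are illustrative, not the Pulse wire format.

```python
# Sketch of the gateway's encapsulation path: IP payload in, Pulse frame out,
# and the reverse for telemetry leaving the mesh.
import struct

HEADER = struct.Struct("!HI")   # assumed: 16-bit multicast group, 32-bit sequence

def encapsulate(ip_payload: bytes, group: int, seq: int) -> bytes:
    """Inbound: IP payload from SATCOM / ground -> Pulse multicast frame."""
    return HEADER.pack(group, seq) + ip_payload

def decapsulate(pulse_frame: bytes) -> tuple:
    """Outbound: Pulse frame from the mesh -> (group, seq, IP payload)."""
    group, seq = HEADER.unpack_from(pulse_frame)
    return group, seq, pulse_frame[HEADER.size:]

ip_packet = b"\x45\x00" + bytes(18)            # stand-in for a real IPv4 packet
frame = encapsulate(ip_packet, group=0x0002, seq=1042)
group, seq, payload = decapsulate(frame)
print(group, seq, payload == ip_packet)        # 2 1042 True
```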

Click the buttons to animate a packet through the gateway

[Diagram: inbound TCP/IP from SATCOM (command · video) reaches the GCS gateway, which encapsulates the IP payload in a Pulse frame and multicasts it to the 1,000+ node beehive; aggregated telemetry flows back up the mesh and is decapsulated from Pulse frames to IP.]
↓ Inbound

IP / SATCOM → Pulse

Command server, video uplink, or ground TCP/IP traffic arrives at the gateway. The IP payload is wrapped in a Pulse multicast frame addressed to the relevant cluster, beehive, or full swarm. Pulse handles delivery — self-reconciling roll call confirms reach, NACK aggregation handles repair. Upstream caller never sees Pulse.

↑ Outbound

Pulse → IP / SATCOM

Drones publish telemetry, video, or status into their cluster. Shadow masters aggregate and forward up the inter-cluster mesh to the gateway. The gateway unwraps the Pulse frame and emits standard IP traffic on the SATCOM uplink or ground network — looks like any other endpoint to upstream consumers.

05 · Capability matrix

What the SDR
roadmap unlocks.

What the field needs versus what each build delivers. The "today" column is honest about being a software bus; the roadmap column is what plug-and-play 1,000-node operations look like.

Capability | COTS MCU today | SDR roadmap
Practical swarm size | ~30 (back-channel-bound) | 1,000+ (beehive of clusters)
PHY ownership | No (802.11 OFDM frozen) | Yes (any waveform in HDL)
Frequency hopping (FHSS) | No | Yes (FPGA-controlled fast hop)
Slot-precise TDMA | No (RTOS-bounded jitter) | Yes (sub-microsecond on ECP5)
Per-channel RSSI telemetry | Limited (aggregate only) | Full (per-hop, per-neighbor)
Self-reconciling roll call | Software (latency-bounded) | Hardware-assisted (sub-frame propagation)
NACK aggregation | Software (bounded scale) | Native (shadow-master pattern)
Local repair | No (source-only retransmit) | Yes (peer-served from cache)
Rotating shadow master | N/A (single GCS only) | Yes (RSSI / battery driven)
Jamming resistance | None | High (FHSS + custom waveform)
DDIL / DoS survival | No (single-point failures cascade) | Yes (autonomous mesh, rotating roles)
Multi-band operation | 2.4 GHz only | UHF · L · S via LMS7002M
GCS as TCP/IP gateway | Partial (custom protocol per app) | Native (any IP traffic wrapped in Pulse)
Hardware deploy model | Per-device custom firmware | Plug-and-play (common gateway, self-organizing)