Cluster Membership
DSM supports three membership modes so the same runtime API can fit local development, trusted networks, and cloud deployments.
Topology At A Glance
standalone
[node-a]
multicast
[node-a] <~~~~> [node-b] <~~~~> [node-c]
network multicast discovery on trusted LAN/VPC
unicast gossip
[node-a] --> [node-b]
[node-b] --> [node-c]
[node-c] --> [node-a]
explicit seeds, gossip fanout, optional DNS discovery

Decision Guide
local development? -> standalone
trusted same-network cluster? -> multicast
kubernetes / cloud? -> unicast gossip

Standalone
Use standalone membership for development and single-node demos.
- no network discovery required
- easiest way to validate runtime behavior
- ideal for the first integration pass
Use this first unless you are explicitly testing cluster behavior.
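As a sketch, standalone mode should need only the mode property under the dsm.cluster prefix used elsewhere in this section; the literal value standalone is an assumption by analogy with the mode: unicast example shown later:

```yaml
# Minimal standalone configuration: no discovery or seed settings needed.
dsm:
  cluster:
    mode: standalone
```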
Multicast
Use multicast membership on trusted LAN or VPC networks.
- automatic peer discovery on a shared network
- simple configuration model
- useful where multicast is supported and allowed
Default multicast settings in Spring properties:
- group: 239.0.77.1
- port: 4446
- heartbeat interval: 1s
- failure threshold: 5
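By analogy with the unicast example later in this section, a multicast configuration overriding these defaults might look like the following sketch. The key names under dsm.cluster.multicast.* (group, port, heartbeat-interval, failure-threshold) are inferred from the defaults listed above, not verified:

```yaml
# Multicast membership on a trusted LAN/VPC.
# Key names under dsm.cluster.multicast.* are inferred from the defaults above.
dsm:
  cluster:
    mode: multicast
    multicast:
      group: 239.0.77.1
      port: 4446
      heartbeat-interval: 1s
      failure-threshold: 5
```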
Unicast Gossip
Use unicast gossip in Kubernetes and cloud environments.
- explicit seed nodes
- configurable gossip fanout and interval
- optional DNS seed discovery
Default unicast settings in Spring properties:
- gossip port: 4447
- gossip interval: 1s
- gossip fanout: 3
- failure threshold: 5
- DNS port: 9090
Concrete example:
dsm:
  cluster:
    mode: unicast
    unicast:
      gossip-port: 4447
      gossip-interval: 1s
      gossip-fanout: 3
      seed-nodes:
        - 10.0.1.20:4447
        - 10.0.1.21:4447

unicast bootstrap flow
node-c starts
-> contacts 10.0.1.20:4447
-> learns more peers from gossip state
-> joins cluster view
-> begins normal replication traffic

Spring Property Model
Cluster mode is configured under dsm.cluster.mode.
Unicast-specific settings live under dsm.cluster.unicast.*, including:
- seed-nodes
- gossip-port
- gossip-interval
- gossip-fanout
- failure-threshold
Multicast settings live under dsm.cluster.multicast.*, including group, port, heartbeat interval, and failure threshold.
Node identity defaults also matter:
- dsm.node.id defaults to a random UUID
- dsm.node.host defaults to the resolved local host address
- dsm.node.port defaults to 9090
If you are running a real multi-node environment, make node identity explicit rather than relying on defaults.
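For example, pinning the identity properties listed above for one node might look like this; the id and host values are illustrative:

```yaml
# Explicit node identity: avoids a random UUID id and a surprising
# resolved host address in multi-node environments.
dsm:
  node:
    id: node-a
    host: 10.0.1.20
    port: 9090
```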
Isolation Boundary
Membership is always scoped by both clusterId and serviceId. If those do not match, nodes should not participate in the same DSM fabric.
clusterId = deployment family boundary
serviceId = service membership boundary
same clusterId + same serviceId -> allowed to join
same clusterId + different serviceId -> reject / ignore
different clusterId -> separate DSM fabrics
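As an illustration of the boundary rules above, a node in one service of a deployment family could carry both identifiers in its configuration. The property keys below are hypothetical, since this section does not name the actual keys for clusterId and serviceId:

```yaml
# Hypothetical identity keys, for illustration only.
# This node joins only peers with the same clusterId AND serviceId.
dsm:
  cluster:
    id: prod-east    # clusterId: deployment family boundary
  service:
    id: orders       # serviceId: service membership boundary
```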