Multicluster environments
With DX Cloud, you can deploy a multicluster environment that boosts Magnolia availability to 99.9% and supports automated disaster recovery.
This is achieved through an active/active approach, distributing content across multiple clusters while ensuring resilience and security.
An active/active multicluster setup leverages a central author cluster to manage and distribute content, paired with multiple satellite clusters that serve content to users. Traffic is dynamically routed to the nearest available cluster via a CDN, enhancing performance and availability.
This page focuses on the conceptual aspect of our multicluster environment offering. If you want to dive into the technical details of setup and configuration, see Magnolia: Running Multicluster Environments.
Key features
- Standalone resilience: No cluster configuration (e.g., Solr indexes) is automatically synced between clusters, which prevents errors from replicating and treats each cluster as an independent entity.
- Regional session management: Sticky sessions and session sharing (e.g., via Redisson) are limited to instances within the same cluster, not across clusters (e.g., a session in Frankfurt is not shared with a US-based cluster); see the sketch after this list.
- Secure communication: mTLS authentication ensures trusted interactions between the author cluster and satellite clusters, safeguarding content distribution.
- High availability: Regional distribution and CDN routing improve performance and uptime, targeting 99.9% availability.
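As a toy illustration of the session-scoping rule, the Python sketch below models each cluster with its own in-memory session store: a session created in the Frankfurt cluster simply does not exist in the US cluster, so clients stay pinned to the cluster that issued their session. The cluster names and the dictionary-based store are assumptions made purely for illustration; in practice this is handled by per-cluster sticky sessions or Redisson-backed session sharing.

```python
import uuid

# One independent session store per cluster; nothing replicates between them.
# Cluster names are illustrative, not actual DX Cloud identifiers.
SESSION_STORES = {"frankfurt": {}, "us-east": {}}

def create_session(cluster: str, user: str) -> str:
    """Create a session in the store of the cluster that handled the request."""
    session_id = str(uuid.uuid4())
    SESSION_STORES[cluster][session_id] = {"user": user}
    return session_id

def get_session(cluster: str, session_id: str):
    """Lookups only ever hit the local cluster's store."""
    return SESSION_STORES[cluster].get(session_id)

sid = create_session("frankfurt", "alice")
assert get_session("frankfurt", sid) is not None  # found locally
assert get_session("us-east", sid) is None        # not shared across clusters
```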
Architecture overview
A multicluster environment consists of:
- Author cluster: The central hub that manages content and configuration, pushing updates to all connected clusters.
- Satellite clusters: Standalone clusters that serve content to users, operating independently to ensure resilience.
Satellite clusters are reachable from the author cluster. The author cluster, however, does not have to be reachable from the satellite clusters; it can, for example, sit behind a firewall for security reasons.

Content is distributed from the author cluster to satellite clusters, with a CDN (e.g., Fastly) handling routing based on geolocation. This ensures users access the closest available cluster via the nearest CDN Point of Presence (POP).
Security between clusters is maintained through a mutual TLS (mTLS) trust mechanism, where the author cluster authenticates itself to satellite clusters using certificates, ensuring secure communication without compromising cluster independence.
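To make the trust relationship concrete, the snippet below is a minimal Python sketch of mutual TLS using only the standard library: the satellite side accepts only clients presenting a certificate signed by a shared CA, and the author side presents its certificate when calling a satellite. The hostnames, file names, and endpoint path are illustrative assumptions, not Magnolia's actual configuration or API.

```python
import http.client
import ssl

# Satellite side (server): require a client certificate signed by the shared CA.
# This requirement is what makes the TLS "mutual".
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH, cafile="ca.pem")
server_ctx.load_cert_chain(certfile="satellite.crt", keyfile="satellite.key")
server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject connections without a valid client cert

# Author side (client): verify the satellite's certificate against the same CA
# and present the author certificate so the satellite can authenticate the caller.
client_ctx = ssl.create_default_context(cafile="ca.pem")
client_ctx.load_cert_chain(certfile="author.crt", keyfile="author.key")

conn = http.client.HTTPSConnection("satellite.eu.example.com", 443, context=client_ctx)
conn.request("GET", "/health")  # hypothetical endpoint, not a Magnolia API
print(conn.getresponse().status)
```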
CDN routing
The CDN dynamically distributes traffic, connecting users to the nearest satellite cluster for reduced latency and high availability.
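In production this routing decision lives in the CDN configuration (e.g., Fastly), not in application code, but the following Python sketch shows the idea: map the caller's region to an ordered list of satellite origins and fall back to the next healthy one. The region names, origin URLs, and health-check endpoint are assumptions for illustration only.

```python
import urllib.request

# Ordered preference of satellite origins per caller region (illustrative values).
ORIGINS_BY_REGION = {
    "eu": ["https://satellite-frankfurt.example.com", "https://satellite-us-east.example.com"],
    "us": ["https://satellite-us-east.example.com", "https://satellite-frankfurt.example.com"],
}

def is_healthy(origin: str) -> bool:
    """Very small health probe; a real CDN uses its own origin health checks."""
    try:
        with urllib.request.urlopen(f"{origin}/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_origin(region: str) -> str:
    """Return the nearest healthy satellite, falling back to the next candidate."""
    for origin in ORIGINS_BY_REGION.get(region, ORIGINS_BY_REGION["us"]):
        if is_healthy(origin):
            return origin
    raise RuntimeError("No healthy satellite cluster available")
```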
Content distribution flow
Content flows from the author cluster to all satellite clusters, ensuring consistency across regions while allowing each cluster to operate standalone.
Advantages:
- Reduces latency by serving users from the closest available satellite cluster.
- Ensures high availability, even if one cluster experiences issues.
- Eliminates single-cluster dependencies for content delivery through secure, authenticated distribution.
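The push from the author cluster to the satellites described above can be pictured as the loop below: each satellite is addressed independently, so a failure in one region does not block publication to the others, which is what preserves availability when a cluster has issues. This is a hedged Python sketch; the endpoint, payload shape, and satellite URLs are assumptions and do not reflect Magnolia's actual publishing API.

```python
import json
import urllib.request

# Illustrative satellite endpoints; in practice these would be the per-cluster
# instances reachable from the author cluster.
SATELLITES = [
    "https://satellite-frankfurt.example.com",
    "https://satellite-us-east.example.com",
]

def publish_everywhere(content: dict) -> dict:
    """Push one content item to every satellite; failures stay isolated per cluster."""
    results = {}
    payload = json.dumps(content).encode("utf-8")
    for satellite in SATELLITES:
        req = urllib.request.Request(
            f"{satellite}/publish",  # hypothetical endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        try:
            # An mTLS context (as in the earlier sketch) could be passed via context=.
            with urllib.request.urlopen(req, timeout=10) as resp:
                results[satellite] = resp.status
        except OSError as err:
            # One unreachable satellite must not stop distribution to the rest.
            results[satellite] = f"failed: {err}"
    return results
```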

What is not synchronized across clusters?
In a multiregion setup, certain resources remain isolated per cluster and are not automatically synchronized. This means each cluster operates independently for these elements, ensuring security, stability, and fault isolation. Understanding these limitations helps teams design a resilient, secure, and well-managed multiregion architecture.
Resources that are not synced
The following resources must be managed separately in each cluster.
Resource | Notes |
---|---|
Search indexes (Solr) | Each cluster maintains its own index, meaning searches may return different results if indexes are not manually aligned. |
Ingress configurations | Custom domain routes and load balancer settings must be explicitly defined per cluster. This includes network routing rules. |
TLS/SSL certificates | Certificates must be provisioned separately per cluster; they do not propagate automatically. |
Secrets | Sensitive credentials such as environment variables and API keys remain local to each cluster and must be manually deployed or managed via a secure vault (see the sketch after this table). |
Configuration maps (ConfigMaps) | Cluster-specific application configurations do not sync between regions. |
Persistent storage | Data storage is local to each cluster, preventing accidental cross-region data corruption. This includes PVs/PVCs, databases, and Magnolia home. |
Kubernetes workloads | Each cluster has its own independent workloads; there is no automatic cross-cluster replication. This includes Pods, StatefulSets, Deployments, and DaemonSets. |
CronJobs (scheduled tasks) | Scheduled jobs run only in the cluster where they are defined. |
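Because Secrets (like the other resources above) stay local to each cluster, anything that must exist everywhere has to be applied to every cluster explicitly, for example from a CI pipeline or a vault integration. The sketch below uses the Kubernetes Python client with per-cluster kubeconfig contexts; the context names, namespace, and secret values are assumptions for illustration, not DX Cloud defaults.

```python
from kubernetes import client, config

# Hypothetical kubeconfig context names, one per cluster.
CLUSTER_CONTEXTS = ["author-cluster", "satellite-frankfurt", "satellite-us-east"]

def apply_secret_everywhere(name: str, namespace: str, data: dict) -> None:
    """Create the same Secret in every cluster, since Secrets never sync on their own."""
    secret = client.V1Secret(
        metadata=client.V1ObjectMeta(name=name),
        string_data=data,  # plain strings; the API server stores them base64-encoded
    )
    for context in CLUSTER_CONTEXTS:
        # Load credentials for this specific cluster from the local kubeconfig.
        config.load_kube_config(context=context)
        core_v1 = client.CoreV1Api()
        # Note: re-running this raises a 409 if the Secret already exists;
        # a real pipeline would patch or replace instead.
        core_v1.create_namespaced_secret(namespace=namespace, body=secret)

apply_secret_everywhere(
    name="external-api-key",
    namespace="magnolia",
    data={"API_KEY": "replace-me"},
)
```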
Why this matters
Keeping these resources isolated per cluster provides several key advantages:
- Security: Secrets (API keys, credentials) do not automatically replicate across clusters, reducing risk in case of a security breach.
- Stability: Workloads and databases are independent, preventing issues in one cluster from impacting another.
- Compliance: Certain regulatory frameworks require strict separation of environments, which this approach enforces.