DX Cloud Architecture
A typical DX Cloud deployment contains production and non-production clusters, with our Magnolia Platform Services providing metrics, logs, alert management, and cluster orchestration. The clusters can run on AWS, Azure, or a mixture of the two.
These metrics can be viewed directly via the Cockpit.
The Docker Registry, pipeline, source control (git), Jira support, cluster orchestration, the CDN, and of course, the Cockpit are all accessible by the customer.
As shown in Deployment options, users must be authenticated and authorized to access the core elements of DX Cloud, which include:

- the Cockpit
- Source control (git)
In fact, even the elements themselves require authentication (via Bearer Token) to perform tasks, further securing your Magnolia deployment.
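For illustration, a Bearer-Token-authenticated request to one of these elements might look like the following. The hostname is a placeholder, not your actual DX Cloud registry URL; the path shown is the standard Docker Registry v2 catalog endpoint:

```shell
# Placeholder hostname; substitute the registry URL provided with
# your DX Cloud deployment. Unauthenticated requests are rejected.
TOKEN="<your-bearer-token>"
curl -s -H "Authorization: Bearer ${TOKEN}" \
  https://registry.example.com/v2/_catalog
```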
Deployment options
There are three deployment options for Magnolia using PaaS. The right option depends on your SLA and your project's requirements.
DX Cloud Basic offers:

- 1 Kubernetes cluster
- 4 nodes
  - 1 node for the Development environment (dev). The dev environment includes a Public and an Author instance on the node.
  - 3 nodes for the Production environment (prod). The prod environment includes one Author and two Public instances, each on its own node.

When using DX Cloud Basic, ensure tolerations are set for a dedicated node in your values.yml file.
If you require more than two environments, you should upgrade to DX Cloud Standard or Premium.
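As a rough sketch, a toleration for a dedicated node in values.yml might look like the following. The entry follows standard Kubernetes toleration syntax, but the taint key and value shown here are assumptions, as is the entry's exact location in the chart's values hierarchy; use the taint actually applied to your dedicated node.

```yaml
# Illustrative only: the taint key/value below is an assumption,
# not the authoritative DX Cloud configuration.
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "magnolia"
    effect: "NoSchedule"
```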
DX Cloud offers:

- 2 Kubernetes clusters
  - 1 cluster for non-production environments such as dev.
  - 1 cluster for production environments.

If you require more than two clusters, you should upgrade to DX Cloud Premium.
DX Cloud Premium offers:

- 3 Kubernetes clusters
  - 1 cluster for non-production environments such as dev.
  - 2 clusters for production environments, including a satellite cluster for high availability.
Linkerd Satellite cluster communication
Linkerd plays a critical role in the communication between your Kubernetes clusters. It provides security, observability, and reliability by managing service-to-service traffic.
The primary cluster is set up to receive traffic from a linked remote cluster, as defined in your values.yml file.
We set up everything in the backend for your multiregion cluster approach. However, we do need you to modify the values.yml file for your specific environments to accommodate the multiregion feature. For more on this, see Set up a multiregion cluster.
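As an illustrative sketch, the remote-cluster link in values.yml could take a shape like the one below. All key names and values here are assumptions, not the chart's authoritative schema; follow Set up a multiregion cluster for the exact structure used by your chart version.

```yaml
# Illustrative only: key names are assumptions, not the actual
# Magnolia Helm chart schema.
multicluster:
  enabled: true
  remoteClusters:
    - name: satellite-eu-west   # hypothetical cluster name
      gateway: linkerd-gateway.linkerd-multicluster.svc.cluster.local
```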
Traffic routing via CDN
It’s important to understand how your traffic is routed within a Kubernetes and Fastly CDN setup. The diagram here provides a general sequence flow for traffic routing.
1. User Browser: The user initiates the request from their browser.
2. DNS Resolution: The browser resolves the website URL to the Fastly edge server.
3. Fastly Edge Server: The request is routed to the nearest Fastly edge server.
4. Is Content Cached? The edge server checks whether the content is cached.
   - Yes: If cached, the content is served directly from Fastly.
   - No: If not cached, the request is forwarded to the Kubernetes load balancer.
5. Kubernetes Load Balancer: This distributes the request to an appropriate pod within the Kubernetes cluster.
6. Pod in Kubernetes Cluster: The pod processes the request and sends the response back.
7. Response Back to Fastly: The response is sent back to Fastly, where it may be cached for future requests.
8. Response Back to User: Finally, the content is delivered to the user.
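You can observe the cache-hit/cache-miss branch of this flow yourself by inspecting the response headers. Fastly typically reports cache status in an X-Cache header; the URL below is a placeholder for your own site:

```shell
# Placeholder URL; substitute your site's hostname.
# X-Cache: HIT  -> served from the Fastly edge cache
# X-Cache: MISS -> request was forwarded to the origin
curl -sI https://www.example.com/ | grep -i '^x-cache'
```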
Kubernetes and sidecars
DX Cloud uses Kubernetes for baseline orchestration of its environments.
This is an explicit dependency. Helm charts are used to deploy releases on the Kubernetes cluster.
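For illustration, a Helm-based release deployment generally follows the pattern below. The repository URL, release name, and chart reference are placeholders, not the actual DX Cloud chart coordinates; use the values provided with your subscription.

```shell
# Placeholder repository URL and chart name.
helm repo add magnolia https://charts.example.com
helm repo update

# Install or upgrade a release using your environment's values.yml.
helm upgrade --install my-release magnolia/magnolia \
  --namespace magnolia --create-namespace \
  -f values.yml
```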
| Item | Note |
|---|---|
| A | The CDN is deployed between the end user and the Magnolia instances. |
| B | Magnolia instances (author/public) are each deployed in a Kubernetes pod containing their own sidecars and K8s workers. |
| C | Sidecar containers are deployed to initialize containers before Magnolia CMS starts. |
| D | The K8s workers handle pod availability. |
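As a minimal sketch of this layout (not the actual DX Cloud pod template), a pod that runs an initialization container before the Magnolia container starts might look like this; the image names and the init step are assumptions:

```yaml
# Illustrative only: images and the init command are placeholders,
# not the real DX Cloud pod specification.
apiVersion: v1
kind: Pod
metadata:
  name: magnolia-author
spec:
  initContainers:
    - name: init-bootstrap          # runs to completion before Magnolia starts
      image: busybox:1.36
      command: ["sh", "-c", "echo preparing Magnolia home"]
  containers:
    - name: magnolia
      image: magnolia-cms:latest    # placeholder image
      ports:
        - containerPort: 8080
```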