DX Cloud Architecture

A typical DX Cloud deployment contains production and non-production clusters, with Magnolia Platform Services providing metrics, logs, alert management, and cluster orchestration. The clusters can run on AWS, Azure, or a mixture of both.

These metrics can be viewed directly via the Cockpit.

The Docker Registry, pipeline, source control (Git), Jira support, cluster orchestration, the CDN and, of course, the Cockpit are all accessible to the customer.

As shown in Deployment options, users must be authenticated and authorized to access these core elements of DX Cloud.

The elements themselves also require authentication (via a Bearer token) to perform tasks, further securing your Magnolia deployment.

Deployment options

There are three deployment options for Magnolia PaaS. The right option depends on your SLA and the needs of your particular project.

  • Basic

  • Standard

  • Premium

DX Cloud Basic offers:

  • 1 Kubernetes cluster

  • 4 nodes

    • 1 node for the Development environment (dev)

      The dev environment includes an Author and a Public instance on the node.

    • 3 nodes for the Production environment (prod)

      The prod environment includes one Author instance and two Public instances, one on each node.

When using DX Cloud Basic, you should ensure that tolerations are set for a dedicated node in your values.yml file, as shown in the sketch below.
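Tolerations use the standard Kubernetes syntax inside values.yml. This is a minimal sketch only; the magnoliaPublic key and the taint key/value pair are assumptions, so use the names from your own chart and node pool:

magnoliaPublic:
  tolerations:
    - key: "dedicated"      # taint key on the dedicated node (assumed)
      operator: "Equal"
      value: "magnolia"     # taint value on the dedicated node (assumed)
      effect: "NoSchedule"  # must match the effect of the node's taint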

If you require more than two environments, you should upgrade to DX Cloud Standard or Premium.

(Diagram: DX Cloud Basic architecture)

DX Cloud Standard offers:

  • 2 Kubernetes clusters

    • 1 cluster for non-production environments such as dev.

    • 1 cluster for production environments.

If you require more than two clusters, you should upgrade to DX Cloud Premium.

(Diagram: DX Cloud Standard architecture)

DX Cloud Premium offers:

  • 3 Kubernetes clusters

    • 1 cluster for non-production environments such as dev.

    • 2 clusters for production environments including a satellite cluster for high availability.

(Diagram: DX Cloud Premium architecture)

Linkerd satellite cluster communication

Linkerd plays a critical role in the communication between your Kubernetes clusters. It provides security, observability, and reliability by managing service-to-service traffic.

The primary cluster is set up to receive traffic from a linked remote cluster, as defined in your values.yml file.

We set up everything in the backend for your multiregion cluster approach. However, we do need you to modify the values.yml file for your specific environments to accommodate the multiregion feature. For more on this, see Set up a multiregion cluster.
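In practice this means telling the chart which remote cluster the primary should accept traffic from. The sketch below is purely hypothetical; none of these key names are confirmed, and the authoritative structure is documented in Set up a multiregion cluster:

# Hypothetical values.yml sketch; every key name here is illustrative only.
multicluster:
  enabled: true
  remoteClusters:
    - name: satellite-eu-west                         # hypothetical remote cluster name
      gateway: linkerd-gateway.satellite.example.com  # hypothetical gateway address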

Traffic routing via CDN

It’s important to understand how your traffic is routed within a Kubernetes and Fastly CDN setup. The diagram here provides a general sequence flow for traffic routing.

(Diagram: Kubernetes and Fastly traffic flow)
  1. User Browser: The user initiates the request from their browser.

  2. DNS Resolution: The browser resolves the website URL to the Fastly edge server.

  3. Fastly Edge Server: The request is routed to the nearest Fastly edge server.

  4. Is Content Cached? The edge server checks if the content is cached.

  5. Yes: If cached, the content is served directly from Fastly.

  6. No: If not cached, the request is forwarded to the Kubernetes load balancer.

  7. Kubernetes Load Balancer: This distributes the request to an appropriate pod within the Kubernetes cluster (see the Service sketch after this list).

  8. Pod in Kubernetes Cluster: The pod processes the request and sends the response back.

  9. Response Back to Fastly: The response is sent back to Fastly, where it may be cached for future requests.

  10. Response Back to User: Finally, the content is delivered to the user.
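In this flow, steps 7 and 8 correspond to a standard Kubernetes Service of type LoadBalancer in front of the Magnolia pods. The sketch below is illustrative only; the names, labels, and ports are assumptions, not the actual DX Cloud configuration:

apiVersion: v1
kind: Service
metadata:
  name: magnolia-public    # assumed Service name
spec:
  type: LoadBalancer       # provisions the load balancer that Fastly forwards cache misses to
  selector:
    app: magnolia-public   # assumed pod label
  ports:
    - port: 80             # port the load balancer exposes
      targetPort: 8080     # assumed Magnolia container port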

Kubernetes and sidecars

DX Cloud uses Kubernetes for baseline orchestration of its environments.

This is an explicit dependency. Helm charts are used to deploy releases on the Kubernetes cluster.
(Diagram: DX Cloud architectural overview)
Item | Note
A | The CDN is deployed between the end user and the Magnolia instances.
B | Magnolia instances (author/public) are each deployed in a Kubernetes pod containing their own sidecars and K8s workers.
C | Sidecar containers are deployed to initialize containers before Magnolia CMS starts.
D | The K8s workers handle pod availability.

Sidecars are secondary containers that focus on a specific task. They are placed in the same pod as the primary container so that resources can be shared. Typically, sidecars come after the main container in the configuration so that the main container is the default target for kubectl exec, as shown in the example below (1):
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  volumes:
    - name: shared-logs   # emptyDir volume shared by both containers
      emptyDir: {}

  containers:
    - name: nginx         # main container, listed first so it is the default target for kubectl exec
      image: nginx
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx

    - name: sidecar-container (1)
      image: busybox
      # Reads the nginx logs from the shared volume every 30 seconds.
      command: ["sh","-c","while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
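Pod availability (item D) is typically expressed through replica counts and health probes. The sketch below is illustrative only; the names, image, port, and probe path are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: magnolia-public                            # assumed Deployment name
spec:
  replicas: 2                                      # Kubernetes replaces failed pods to keep two running
  selector:
    matchLabels:
      app: magnolia-public
  template:
    metadata:
      labels:
        app: magnolia-public
    spec:
      containers:
        - name: magnolia
          image: registry.example.com/magnolia:latest  # assumed image reference
          ports:
            - containerPort: 8080                  # assumed Magnolia container port
          readinessProbe:                          # pod receives traffic only while this succeeds
            httpGet:
              path: /                              # assumed health endpoint
              port: 8080
          livenessProbe:                           # failing pods are restarted automatically
            httpGet:
              path: /
              port: 8080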
