Memory requests and limits on Magnolia PaaS

Your Kubernetes clusters on Magnolia PaaS have resources that are allocated among all the pods and containers running on them, including your Magnolia instances and, if you have a headless project, your frontend instances. How these resources are allocated affects how well your site runs.

There are many different resource types, but the most important are CPU and memory:

  • CPU: represents how much compute processing is dedicated to a pod.

  • Memory: in simple terms, this is how much memory a pod can use.

You can specify the memory allocated to your Magnolia instances and your frontend instances when they are started (the memory request for the pod), and you can specify the maximum amount of memory the instances are allowed to use (the memory limit).
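To make the two settings concrete, here is a minimal sketch of how a memory request and limit appear on a plain Kubernetes container spec. The pod, container, and image names are placeholders; on Magnolia PaaS you normally set these values through the Helm chart or your deployment pipeline rather than editing pod manifests by hand.

    # Sketch only: placeholder names, sizes chosen for illustration.
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app            # hypothetical pod name
    spec:
      containers:
        - name: app                # hypothetical container name
          image: example/app:1.0   # hypothetical image
          resources:
            requests:
              memory: 2Gi          # reserved on a node when Kubernetes schedules the pod
            limits:
              memory: 2Gi          # hard cap; the container cannot use more than this

The request is what Kubernetes reserves on a node when it schedules the pod; the limit is the ceiling the container can never exceed.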

Your Kubernetes clusters on Magnolia PaaS are composed of nodes, each with a finite amount of CPU and memory. A Magnolia PaaS cluster usually has 3 or more nodes, each with 32 GB of memory and 4 CPUs. A three-node cluster of this size therefore has 96 GB of memory and 12 CPUs in total for all pods and containers running on it.

The memory request for a pod affects where Kubernetes runs it: Kubernetes picks a node that has enough free memory and CPU to satisfy the pod's requests. If no node can satisfy the requested CPU and memory, the pod will not start. Keep in mind both the total memory and CPU resources in your cluster and the memory and CPU available on each individual node.

Choose memory and CPU requests and limits that can be satisfied by both constraints: the total resources of the cluster and the resources available on a single node.

Best Practice Tip 1

Don’t set CPU requests or limits for Magnolia frontend pods running on your clusters.

Why should the memory request and limit be the same?

If you set a memory request lower than the memory limit, Kubernetes might schedule your Magnolia instance or Magnolia frontend instance on a node that has enough memory to satisfy the request but not enough to provide the full memory limit. Setting the request equal to the limit guarantees the node can supply all the memory the instance is allowed to use.

For Magnolia pods, you can set the memory request and limit for the Magnolia author and public instances through the Helm chart values (see the example after this list):

  • magnoliaAuthor.resources.limits.memory: the memory limit for the Magnolia author

  • magnoliaAuthor.resources.requests.memory: the memory request for the Magnolia author

  • magnoliaPublic.resources.limits.memory: the memory limit for the Magnolia public

  • magnoliaPublic.resources.requests.memory: the memory request for the Magnolia public
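In a Helm values file, these dotted paths correspond to nested YAML keys. A minimal sketch, with placeholder sizes that you should replace with values suited to your project (see the sizing tips below):

    # Sketch only: replace the sizes with values appropriate for your environment.
    magnoliaAuthor:
      resources:
        requests:
          memory: 12Gi   # memory request for the Magnolia author
        limits:
          memory: 12Gi   # memory limit for the Magnolia author
    magnoliaPublic:
      resources:
        requests:
          memory: 10Gi   # memory request for the Magnolia public
        limits:
          memory: 10Gi   # memory limit for the Magnolia public

Note that the request and limit are set to the same value for each instance, as recommended above.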

The default value for the memory request and limit in the Magnolia Helm chart is 512 MB.

This is quite small and is likely insufficient to run Magnolia. Set the memory request and limit for both the Magnolia author and public instances to more realistic values in your Helm chart.

For Magnolia frontend pods, the memory request and limit are usually set through your deployment pipeline. Check the pipeline and adjust the memory request and limit there.
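The exact pipeline and manifests vary from project to project, but the pipeline typically renders a Kubernetes Deployment for the frontend. A minimal sketch with hypothetical names, setting only memory and, per Best Practice Tip 1, no CPU request or limit:

    # Sketch only: names and image are hypothetical; adapt to your pipeline's manifests.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
            - name: frontend
              image: example/frontend:1.0
              resources:
                requests:
                  memory: 1Gi   # request equals limit, as recommended
                limits:
                  memory: 1Gi   # no CPU request or limit (Best Practice Tip 1)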

Best Practice Tip 2

The Magnolia author instance in your production environment should have a memory limit and request between 10 and 12 GB.

Using a magnoliaAuthor.setenv.memory.maxPercentage value of 60 (the default) and a memory limit of 10 to 12 GB results in a maximum heap for the Magnolia author JVM between 6 GB and 7.2 GB, sufficient to run Magnolia without excessive garbage collection in most circumstances.
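As a sketch, assuming the same nested Helm values layout as above:

    magnoliaAuthor:
      setenv:
        memory:
          maxPercentage: 60   # JVM max heap as a percentage of the memory limit (default)
      resources:
        requests:
          memory: 12Gi
        limits:
          memory: 12Gi        # 60% of 12 GB gives a maximum heap of about 7.2 GB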

Best Practice Tip 3

The Magnolia public instance in your production environment should have a memory limit and request between 8 and 10 GB.

Using a magnoliaPublic.setenv.memory.maxPercentage value of 60 (the default) and a memory limit of 8 to 10 GB results in a maximum heap for the Magnolia public JVM between 4.8 GB and 6 GB.
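The corresponding sketch for the public instance:

    magnoliaPublic:
      setenv:
        memory:
          maxPercentage: 60   # default
      resources:
        requests:
          memory: 10Gi
        limits:
          memory: 10Gi        # 60% of 10 GB gives a maximum heap of about 6 GB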

Best Practice Tip 4

Choose memory limits so that the Magnolia author and public instances run on different nodes in your production cluster.

It is best to run your production Magnolia instances on separate nodes in your production cluster. Running multiple Magnolia instances on the same node means they can all become unavailable at the same time if that node has a problem.

For example, if you have three nodes in your production cluster, each with 16 GB of memory and 4 CPUs, setting the author instance memory limit to 10 GB and each public instance memory limit to 10 GB ensures that the author and each public instance run on separate nodes: because the request equals the limit, no 16 GB node can accommodate two 10 GB instances.

There are other ways to control where Kubernetes runs your Magnolia instances, but these can have unintentional side effects. Setting memory limits is simpler and ensures that multiple Magnolia instances won’t be affected by problems on a single node in your production cluster.

You can get by running the Magnolia author and public instances with less memory (and a smaller JVM heap), but reduce the memory request and limit only after observing JVM heap usage for some time, especially during peak traffic and load.