Cluster high memory pressure


A CustomerClusterHighMemoryPressure alert is firing. It indicates that a node in your cluster is running low on available memory.

CustomerClusterHighMemoryPressure alerts are sent to subscribers via email.
Memory pressure and nodes

Cluster nodes can suffer from memory pressure (the alert fires when less than 5% of memory is available and the rate of major page faults is high) even though the cluster as a whole may have plenty of unused memory. Pod distribution across the cluster nodes can be uneven, with one node receiving the memory-hungry pods while other nodes stay lightly loaded.


Here are the details of the alert:

Alert name: CustomerClusterHighMemoryPressure

Expression: instance:node_memory_available:ratio * 100 < 5 and rate(node_vmstat_pgmajfault[2m]) > 1000

Duration: 15 minutes (the expression must hold for this long before the alert fires)

Labels: team: customer

Annotations and labels sent with the alert:

  • summary

  • description

  • tenant

  • cluster_id

  • cluster_name

  • instance
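To see which nodes currently match the alert condition, the expression can be run ad hoc against the cluster's Prometheus HTTP API. This is a minimal sketch; the endpoint URL is a placeholder assumption you must replace with your cluster's Prometheus address.

```shell
#!/bin/sh
# Placeholder endpoint: substitute your cluster's Prometheus URL.
PROM_URL="http://prometheus.example.com"

# The exact expression used by the CustomerClusterHighMemoryPressure alert.
QUERY='instance:node_memory_available:ratio * 100 < 5 and rate(node_vmstat_pgmajfault[2m]) > 1000'

# --data-urlencode handles the spaces and brackets in the PromQL expression.
# A non-empty "result" array in the response lists the instances currently
# under memory pressure. "|| true" keeps the script going if the endpoint
# is unreachable from your shell.
curl -sG "${PROM_URL}/api/v1/query" --data-urlencode "query=${QUERY}" || true
```

Each entry in the result carries the instance label, which matches the node name used in the alert.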

Determine the node with high memory pressure

The alert contains:

  • the cluster name/id (the k8s_cluster_name and k8s_cluster_id labels)

  • the cluster node (the instance label)

Note the label values from the alert; you will need them to identify the affected node.
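With the instance label in hand, the affected node can be inspected with kubectl. The node name below is a hypothetical example; substitute the value from your alert.

```shell
#!/bin/sh
# Hypothetical node name taken from the alert's "instance" label.
NODE="ip-10-0-1-23.eu-central-1.compute.internal"

# Current memory usage per node (requires the metrics-server add-on).
kubectl top nodes || true

# Check whether the kubelet itself reports the MemoryPressure condition.
kubectl describe node "${NODE}" | grep -A 2 MemoryPressure || true

# List the pods scheduled on that node to spot the memory-hungry ones.
kubectl get pods --all-namespaces -o wide \
  --field-selector spec.nodeName="${NODE}" || true
```

Comparing the output of kubectl top nodes across all nodes also shows whether the pressure is limited to a single node while others are lightly loaded.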


Solutions

This section provides solutions that should resolve the issue in most cases.

Increase cluster node instance type

The memory available to a cluster node is determined by its instance type. Cluster nodes are usually t3a.xlarge EC2 instances with 16 GB of memory. The next larger instance type, t3a.2xlarge, has 32 GB of memory and is the largest instance type in the t3a family.

Cost increase

Increasing the instance type means you will have to pay for the larger instances.

All nodes in the cluster should be upgraded if the instance type is increased!

The instance type of the cluster nodes can be changed in AWS. Magnolia deployments will be unavailable while the node instance type is upgraded.
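The resize can be done with the AWS CLI as sketched below. The instance ID is a hypothetical placeholder; resize one node at a time, and expect the Magnolia deployments on that node to be unavailable while it is stopped.

```shell
#!/bin/sh
# Hypothetical instance ID of the cluster node to resize.
INSTANCE_ID="i-0123456789abcdef0"

# The instance must be stopped before its type can be changed.
aws ec2 stop-instances --instance-ids "${INSTANCE_ID}" || true
aws ec2 wait instance-stopped --instance-ids "${INSTANCE_ID}" || true

# Switch to the larger instance type.
aws ec2 modify-instance-attribute --instance-id "${INSTANCE_ID}" \
  --instance-type '{"Value": "t3a.2xlarge"}' || true

aws ec2 start-instances --instance-ids "${INSTANCE_ID}" || true
```

Repeat for the remaining nodes so that all nodes in the cluster run the same instance type.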

Add a node to the cluster

We recommend running each Magnolia instance and its database on a separate node in the cluster: the author instance plus two public instances can each run on their own node, giving each Magnolia instance enough memory (10–12 GB for author instances and 8–10 GB for public instances).

If you want to run more than three Magnolia instances on the cluster, additional nodes can be added to the cluster.

Cost increase

You have to pay more to increase the number of nodes.

Adding a node to the cluster won’t necessarily redistribute the workloads more evenly: Kubernetes does not move already-running pods onto a new node automatically.

You may need to redeploy Magnolia after adding a cluster node.
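One way to get pods rescheduled onto the new node is a rolling restart, since the scheduler only places pods at (re)start time. The namespace name below is a hypothetical assumption; use the namespace of your Magnolia installation.

```shell
#!/bin/sh
# Hypothetical namespace of the Magnolia installation.
NAMESPACE="magnolia"

# A rolling restart lets the scheduler spread the restarted pods across
# all nodes, including the newly added one; running pods are never moved
# on their own.
kubectl rollout restart deployment --namespace "${NAMESPACE}" || true

# Verify that the pods now spread across the nodes.
kubectl get pods --namespace "${NAMESPACE}" -o wide || true
```

The NODE column of the last command shows where each pod landed after the restart.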




