Memcached implementation

Bundled: Community edition
Edition: CE, Cloud
License: MLA, GPL
Issues
Maven site
Latest: 6.0.1

Memcached is a high-performance, distributed memory object caching system. The Memcached implementation for Magnolia CMS brings you the advantages of a distributed cache:

  • Sharing of cache items between multiple instances of Magnolia.

  • Cached items survive a restart of the Magnolia instance.

  • Memcached servers can run on any machine in the network, so they do not consume memory on your Magnolia server.

Installing with Maven

Maven is the easiest way to install the module. Add the following to your bundle:

<dependency>
  <groupId>info.magnolia.cache</groupId>
  <artifactId>magnolia-cache-memcached</artifactId>
  <version>6.0.1</version> (1)
</dependency>
1 Should you need to specify the module version, do it using <version>.

If your web app extends magnolia-empty-webapp or any web app derived from it (such as magnolia-bundled-webapp), you also need to exclude the default Ehcache implementation:

    <dependency>
      <groupId>info.magnolia</groupId>
      <artifactId>magnolia-empty-webapp</artifactId>
      <type>pom</type>
      <exclusions>
        <exclusion>
          <groupId>info.magnolia.cache</groupId>
          <artifactId>magnolia-cache-ehcache</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

Usage

If you’ve never used memcached, see how to install a memcached server first. You need at least one memcached server per cache. That means that for every cache configuration under /modules/cache/config/contentCaching you need one entry under /modules/cache/config/cacheFactory/caches. By default these are defaultPageCache and uuid-key-mapping.

The Magnolia Memcached implementation uses the Spymemcached client, which has its own configuration options. These can be set in /modules/cache/config/cacheFactory/CACHE_NAME.
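
As an aside, the sketch below shows what the underlying Spymemcached client does on its own: connect to a memcached server, store a value and read it back. Any other client connected to the same server sees the same entry, which is what enables cache sharing between Magnolia instances. The server address localhost:11211 and the class name are placeholders for illustration; within Magnolia you do not create the client yourself, the module builds it from the configuration described below.

import java.net.InetSocketAddress;
import java.util.List;

import net.spy.memcached.AddrUtil;
import net.spy.memcached.MemcachedClient;

public class SpymemcachedSmokeTest {

    public static void main(String[] args) throws Exception {
        // Placeholder address; point this at your own memcached server.
        List<InetSocketAddress> servers = AddrUtil.getAddresses("localhost:11211");
        MemcachedClient client = new MemcachedClient(servers);

        // Store an entry for 60 seconds and read it back.
        client.set("greeting", 60, "Hello from Spymemcached").get();
        System.out.println(client.get("greeting"));

        client.shutdown();
    }
}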

Cache configuration

Parameter Default Description

protocol

BINARY

BINARY or TEXT protocol.

readBufSize

10000

Size of the read buffer.

shouldOptimize

false

If true, the client optimizes operations, for example by collapsing multiple sequential get operations into a single bulk get, to allow higher throughput.

useNagleAlgorithm

true

Whether to use Nagle's algorithm, which improves TCP/IP efficiency by reducing the number of small packets sent over the network.

maxReconnectDelay

30

Maximum reconnect delay in seconds.

opQueueMaxBlockTime

1000

Set the maximum amount of time (in milliseconds) a client is willing to wait for space to become available in an output queue.

timeoutExceptionThreshold

998

Maximum number of consecutive operation timeouts before the connection to a node is considered broken.

opTimeout

-1

Set the default operation timeout in milliseconds.

failureMode

Redistribute

  • Redistribute (Move on to functional nodes when nodes fail)

  • Retry (Continue to retry a failing node until it comes back up)

  • Cancel (Automatically cancel all operations heading towards a downed node)

transcoder

net.spy.memcached.transcoders.SerializingTranscoder

Must be set as a content node with a class property set to one of:

  • net.spy.memcached.transcoders.SerializingTranscoder

  • net.spy.memcached.transcoders.WhalinTranscoder

  • net.spy.memcached.transcoders.WhalinV1Transcoder

  • net.spy.memcached.transcoders.IntegerTranscoder

  • net.spy.memcached.transcoders.LongTranscoder

locator

ARRAY_MOD

  • ARRAY_MOD (the classic node location algorithm)

  • VBUCKET (VBucket support)

  • CONSISTENT (Consistent hash algorithm)

servers

-

Memcached server(s) to use for this cache, in the format <domain or ip>:<port number>.
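
For reference, these parameters correspond to options on Spymemcached's ConnectionFactoryBuilder. The following sketch shows the mapping in plain Java using the defaults from the table; it only illustrates the underlying client options and is not how you configure the module (Magnolia builds the client from the configuration above). The server address and class name are placeholders.

import java.net.InetSocketAddress;
import java.util.List;

import net.spy.memcached.AddrUtil;
import net.spy.memcached.ConnectionFactoryBuilder;
import net.spy.memcached.ConnectionFactoryBuilder.Locator;
import net.spy.memcached.ConnectionFactoryBuilder.Protocol;
import net.spy.memcached.FailureMode;
import net.spy.memcached.MemcachedClient;
import net.spy.memcached.transcoders.SerializingTranscoder;

public class MemcachedClientSketch {

    public static void main(String[] args) throws Exception {
        // Each builder call corresponds to one parameter from the table above.
        // opTimeout (-1 in the table) is left at the Spymemcached default here.
        ConnectionFactoryBuilder builder = new ConnectionFactoryBuilder()
                .setProtocol(Protocol.BINARY)               // protocol
                .setReadBufferSize(10000)                   // readBufSize
                .setShouldOptimize(false)                   // shouldOptimize
                .setUseNagleAlgorithm(true)                 // useNagleAlgorithm
                .setMaxReconnectDelay(30)                   // maxReconnectDelay (seconds)
                .setOpQueueMaxBlockTime(1000)               // opQueueMaxBlockTime (ms)
                .setTimeoutExceptionThreshold(998)          // timeoutExceptionThreshold
                .setFailureMode(FailureMode.Redistribute)   // failureMode
                .setTranscoder(new SerializingTranscoder()) // transcoder
                .setLocatorType(Locator.ARRAY_MOD);         // locator

        // servers: placeholder address for illustration.
        List<InetSocketAddress> servers = AddrUtil.getAddresses("localhost:11211");

        MemcachedClient client = new MemcachedClient(builder.build(), servers);
        client.shutdown();
    }
}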

Performance

One of the advantages of the memcached implementation is the sharing of cache entries between multiple Magnolia instances. When a cache item is produced by one of the public instances, it is sent to the memcached server(s); the other Magnolia instances then do not need to render the content again and can use the item cached in the registered memcached servers.

For the following tests, we requested content from two Magnolia instances at the same time. The results are shown in the first graph: throughput is roughly twice as high because the instances share cache items. This applies only to the first requests for a content object, but that is exactly the period after a cache flush when the load on the server is highest.

The second graph shows performance when the items are precached. Memcached is a little slower than Ehcache in this case.

Performance graphs

Memcached Client License

The Spymemcached client uses its own license:
/**
* Copyright (c) 2006-2009 Dustin Sallings
* Copyright (c) 2009-2011 Couchbase, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALING
* IN THE SOFTWARE.
*/