
Engines, scalability

The Engine in Centrifugo is responsible for publishing messages between nodes, handling PUB/SUB broker subscriptions, and saving/retrieving online presence and history data.

By default, Centrifugo uses the Memory engine. Redis, KeyDB, and Tarantool engines are also available, as well as a Nats broker which supports at-most-once PUB/SUB only.

The difference between them: with the Memory engine you can start only one Centrifugo node, while the Redis engine allows running several nodes on different machines. These nodes will be connected via PUB/SUB, will know about each other, and will keep history and presence data in Redis instead of Centrifugo node process memory, so this data can be accessed from each node and won't be lost after a Centrifugo server restart.

To set the engine, use the engine configuration option. Available values are memory, redis, and tarantool. The default value is memory.

For example to work with Redis engine:

"engine": "redis"

Memory engine

Used by default. Supports only one node. A nice choice to start with. Supports all features, keeping everything in Centrifugo node process memory. You don't need to install Redis when using this engine.


Pros:

  • Super fast since it does not involve the network at all
  • Does not require a separate broker setup

Cons:

  • Does not allow scaling nodes (actually, you can still scale Centrifugo with the Memory engine, but you have to publish data into each Centrifugo node and you won't have consistent history and presence state throughout Centrifugo nodes)
  • Does not persist message history in channels between Centrifugo restarts

Memory engine options


history_meta_ttl

Duration, default 0s.

history_meta_ttl sets the time of history stream metadata expiration. Stream metadata is information about the current offset number in the channel and the epoch value. By default, metadata for channels does not expire. Though in some cases – when channels are created for a short time and then not used anymore – the created metadata can stay in memory while not being useful. For example, you can have a personal user channel, but after using your app for a while the user leaves it forever. From a long-term perspective, this can be an unwanted memory leak. Setting a reasonable value for this option (usually much bigger than the history retention period) can help: unused channel metadata will eventually expire.
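For example, a config sketch that lets unused channel metadata expire after 30 days (the 720h value and the duration string format are just an illustration):

```json
{
  "engine": "memory",
  "history_meta_ttl": "720h"
}
```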

Redis engine

Redis is an open-source, in-memory data structure store, used as a database, cache, and message broker.

Centrifugo Redis engine allows scaling Centrifugo nodes to different machines. Nodes will use Redis as a message broker (utilizing Redis PUB/SUB for node communication) and keep presence and history data in Redis.

The minimal supported Redis version is 5.0.1.

Redis engine options

Several configuration options related to the Redis engine:


redis_address

String, default "" - Redis server address.


redis_password

String, default "" - Redis password.


redis_user

Available since Centrifugo v3.2.0

String, default "" - Redis user for ACL-based auth.


redis_db

Integer, default 0 - number of Redis db to use.


redis_tls

Boolean, default false - enable Redis TLS connection.


redis_tls_skip_verify

Boolean, default false - disable Redis TLS host verification.


redis_prefix

String, default "centrifugo" – custom prefix to use for channels and keys in Redis.


redis_use_lists

Boolean, default false – turns on using Redis Lists instead of the Stream data structure for keeping history (not recommended; kept mostly for backwards compatibility).


history_meta_ttl

Similar to the Memory engine, the Redis engine also looks at the history_meta_ttl option (duration, default 0s), which sets the time of history stream metadata expiration in the Redis engine (with seconds resolution). The meta key in Redis is a HASH that contains the current offset number in a channel and the epoch value. By default, metadata for channels does not expire. Though in some cases – when channels are created for a short time and then not used anymore – the created stream metadata can stay in memory while not being useful. For example, you can have a personal user channel, but after using your app for a while the user leaves it forever. From a long-term perspective, this can be an unwanted memory leak. Setting a reasonable value for this option (usually much bigger than the history retention period) can help: unused channel metadata will eventually expire.
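Putting several of the options above together, a Redis engine configuration sketch could look like this (the address, password, and TTL values are placeholders, not recommendations):

```json
{
  "engine": "redis",
  "redis_address": "127.0.0.1:6379",
  "redis_password": "<your redis password>",
  "redis_prefix": "centrifugo",
  "history_meta_ttl": "720h"
}
```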

Scaling with Redis tutorial

Let's see how to start several Centrifugo nodes using the Redis engine. We will start 3 Centrifugo nodes, and all those nodes will be connected via Redis.

First, you should have Redis running. As soon as it's running - we can launch 3 Centrifugo instances. Open your terminal and start the first one:

centrifugo --config=config.json --port=8000 --engine=redis --redis_address=127.0.0.1:6379

If your Redis is on the same machine and runs on its default port, you can omit the redis_address option in the command above.

Then open another terminal and start another Centrifugo instance:

centrifugo --config=config.json --port=8001 --engine=redis --redis_address=127.0.0.1:6379

Note that we use another port number (8001) as port 8000 is already occupied by our first Centrifugo instance. If you are starting Centrifugo instances on different machines, then you can most probably use the same port number (8000 or whatever you want) for all instances.

And finally, let's start the third instance:

centrifugo --config=config.json --port=8002 --engine=redis --redis_address=127.0.0.1:6379

Now you have 3 Centrifugo instances running on ports 8000, 8001, and 8002, and clients can connect to any of them. You can also send API requests to any of those nodes – since all nodes are connected over Redis PUB/SUB, a message will be delivered to all interested clients on all nodes.

To load balance clients between nodes you can use Nginx – you can find an example configuration in the documentation.


In the production environment you will most probably run Centrifugo nodes on different hosts, so there will be no need to use different port numbers.


Redis Sentinel for high availability

Centrifugo supports the official way to add high availability to Redis - Redis Sentinel.

For this you only need to set 2 Redis engine options: redis_sentinel_address and redis_sentinel_master_name:

  • redis_sentinel_address (string, default "") - comma-separated list of Sentinel addresses for HA. At least one known server is required.
  • redis_sentinel_master_name (string, default "") - name of the Redis master Sentinel monitors.

Two more options are available for Sentinel authentication:

  • redis_sentinel_password – optional string password for your Sentinel, works with Redis Sentinel >= 5.0.1.
  • redis_sentinel_user (available since v3.2.0) - optional string user (used only with Redis ACL-based auth).
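For example, a configuration sketch for a Sentinel setup with authentication (the addresses and password are placeholders):

```json
{
  "engine": "redis",
  "redis_sentinel_address": "127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381",
  "redis_sentinel_master_name": "mymaster",
  "redis_sentinel_password": "<sentinel password>"
}
```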

So you can start Centrifugo which will use Sentinels to discover Redis master instances like this:

centrifugo --config=config.json

Where config.json:

{
  "engine": "redis",
  "redis_sentinel_address": "127.0.0.1:26379",
  "redis_sentinel_master_name": "mymaster"
}

Sentinel configuration files can look like this:

port 26379
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 10000
sentinel failover-timeout mymaster 60000

You can find how to properly set up Sentinels in the official Redis documentation.

Note that when your Redis master instance is down, there will be a short downtime interval until the Sentinels discover the problem and come to a quorum decision about a new master. The length of this period depends on the Sentinel configuration.

Haproxy instead of Sentinel configuration

Alternatively, you can use Haproxy between Centrifugo and Redis to let it properly balance traffic to the Redis master. In this case, you still need to configure Sentinels, but you can omit Sentinel specifics from the Centrifugo configuration and just use a Redis address as in the simple non-HA case.

For example, you can use something like this in Haproxy config:

listen redis
    server redis-01 127.0.0.1:6380 check port 6380 check inter 2s weight 1 inter 2s downinter 5s rise 10 fall 2
    server redis-02 127.0.0.1:6381 check port 6381 check inter 2s weight 1 inter 2s downinter 5s rise 10 fall 2 backup
    bind *:16379
    mode tcp
    option tcpka
    option tcplog
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    balance roundrobin

And then just point Centrifugo to this Haproxy:

centrifugo --config=config.json --engine=redis --redis_address="localhost:16379"

Redis sharding

Centrifugo has built-in Redis sharding support.

This resolves the situation when Redis becomes a bottleneck in a large Centrifugo setup. Redis is a single-threaded server; it's very fast, but its power is not infinite, so when your Redis approaches 100% CPU usage the sharding feature can help your application scale.

At the moment, Centrifugo supports a simple comma-based approach to configuring Redis shards. Let's look at examples.

To start Centrifugo with 2 Redis shards on localhost, running on port 6379 and port 6380, use a config like this:

{
  "engine": "redis",
  "redis_address": [
    "127.0.0.1:6379",
    "127.0.0.1:6380"
  ]
}

To start Centrifugo with Redis instances on different hosts (the host names below are just examples):

{
  "engine": "redis",
  "redis_address": [
    "redis1.example.com:6379",
    "redis2.example.com:6379"
  ]
}

If you also need to customize the AUTH password or Redis DB number, you can use an extended address notation.


Due to how Redis PUB/SUB works, it's not possible (and pretty useless anyway) to run shards in one Redis instance using different Redis DB numbers.

When sharding is enabled, Centrifugo will spread channels and history/presence keys over the configured Redis instances using a consistent hashing algorithm. At the moment we use the Jump consistent hash algorithm (see paper and implementation).
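Centrifugo's actual implementation is in Go; the following is a minimal Python sketch of the Jump consistent hash from the Lamping & Veach paper, just to illustrate how a key could be mapped to one of N shards. The function name and usage are illustrative, not Centrifugo's API:

```python
def jump_consistent_hash(key: int, num_buckets: int) -> int:
    """Map a 64-bit integer key to a bucket in [0, num_buckets).

    The useful property: when num_buckets grows from n to n + 1,
    a key either keeps its old bucket or moves to the new bucket n,
    so only ~1/n of keys are remapped.
    """
    MASK = 0xFFFFFFFFFFFFFFFF  # emulate unsigned 64-bit overflow
    b, j = -1, 0
    while j < num_buckets:
        b = j
        # linear congruential step from the paper
        key = (key * 2862933555777941757 + 1) & MASK
        # use the top bits of key as a pseudo-random fraction
        j = int((b + 1) * ((1 << 31) / ((key >> 33) + 1)))
    return b
```

A channel name would first be hashed to a 64-bit integer (e.g. with a string hash) and then passed through this function to pick a Redis shard.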

Redis cluster

Running Centrifugo with Redis Cluster is simple and can be achieved using the redis_cluster_address option. This is an array of strings, where each element is a comma-separated list of seed nodes of one Redis Cluster. For example:

{
  "redis_cluster_address": [
    "localhost:30001,localhost:30002"
  ]
}

You don't need to list all Redis cluster nodes in config – only several working nodes are enough to start.

To set the same over an environment variable:

CENTRIFUGO_REDIS_CLUSTER_ADDRESS="localhost:30001,localhost:30002" CENTRIFUGO_ENGINE=redis ./centrifugo

If you need to shard data between several Redis clusters then simply add one more string with seed nodes of another cluster to this array:

{
  "redis_cluster_address": [
    "localhost:30001,localhost:30002",
    "localhost:30101,localhost:30102"
  ]
}

Sharding between different Redis Clusters can make sense because of how PUB/SUB works in Redis Cluster: it does not scale linearly when adding nodes, as all PUB/SUB messages are copied to every cluster node. See this discussion for more information on the topic. To spread data between different Redis Clusters, Centrifugo uses the same consistent hashing algorithm described above (i.e. Jump).

To reproduce the same over an environment variable, use a space to separate different clusters:

CENTRIFUGO_REDIS_CLUSTER_ADDRESS="localhost:30001,localhost:30002 localhost:30101,localhost:30102" CENTRIFUGO_ENGINE=redis ./centrifugo

KeyDB Engine


The Centrifugo Redis engine seamlessly works with KeyDB. The KeyDB server is compatible with Redis and provides several additional features beyond it.


We can't make any promises about compatibility with KeyDB in future Centrifugo releases – while KeyDB stays fully compatible with Redis, things should work just fine. That's why we consider this an EXPERIMENTAL feature.

Use KeyDB instead of Redis only if you are sure you need it. Nothing stops you from running several Redis instances (one per core you have), configuring sharding, and obtaining even better performance than KeyDB can provide (due to the lack of synchronization between threads in Redis).

To run Centrifugo with KeyDB, all you need to do is use the redis engine but run a KeyDB server instead of Redis.

Tarantool engine


Tarantool is a fast and flexible in-memory storage with different persistence/replication schemes and LuaJIT for writing custom logic on the Tarantool side. It allows implementing a Centrifugo engine with unique characteristics.


The EXPERIMENTAL status of the Tarantool integration means that we are still going to improve it and there could be breaking changes as the integration evolves.

There are many ways to operate Tarantool in production, and it's hard to distribute the Centrifugo Tarantool engine in a way that suits everyone. Centrifugo tries to fit the generic case by providing the centrifugal/tarantool-centrifuge module and the ready-to-use centrifugal/rotor project, based on centrifugal/tarantool-centrifuge and Tarantool Cartridge.


To be honest, we bet on the community's help to push this integration further. Tarantool provides an incredible performance boost for presence and history operations (up to 5x more RPS compared to the Redis engine) and pretty fast PUB/SUB (comparable to what the Redis engine provides). Let's see what we can build together.

There are several supported Tarantool topologies to which Centrifugo can connect:

  • One standalone Tarantool instance
  • Many standalone Tarantool instances with data consistently sharded between them
  • Tarantool running in Cartridge
  • Tarantool with replica and automatic failover in Cartridge
  • Many Tarantool instances (or leader-follower setup) in Cartridge with consistent client-side sharding between them
  • Tarantool with synchronous replication (Raft-based, Tarantool >= 2.7)

After running Tarantool you can point Centrifugo to it (and of course scale Centrifugo nodes):

{
  "engine": "tarantool",
  "tarantool_address": "tcp://127.0.0.1:3301"
}

See centrifugal/rotor repo for ready-to-use engine based on Tarantool Cartridge framework.

See centrifugal/tarantool-centrifuge repo for examples on how to run engine with Standalone single Tarantool instance or with Raft-based synchronous replication.

Tarantool engine options


tarantool_address

String or array of strings. Default tcp://127.0.0.1:3301.

Connection address to Tarantool.


tarantool_mode

String, default standalone.

The mode of connecting to Tarantool. The default standalone connects to a single Tarantool instance address. Other possible values are leader-follower (connects to a setup with a Tarantool master and async replicas) and leader-follower-raft (connects to Tarantool with synchronous Raft-based replication).

All modes support client-side consistent sharding (similar to what Redis engine provides).
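For illustration, a leader-follower configuration sketch (the addresses are placeholders; check the exact address format against the centrifugal/rotor and centrifugal/tarantool-centrifuge examples):

```json
{
  "engine": "tarantool",
  "tarantool_mode": "leader-follower",
  "tarantool_address": "tcp://10.0.0.1:3301,tcp://10.0.0.2:3301"
}
```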


tarantool_user

String, default "". Allows setting a user.


tarantool_password

String, default "". Allows setting a password.


history_meta_ttl

Duration, default 0s.

The same option as for the Memory and Redis engines also applies to the Tarantool case.

Nats broker

It's possible to scale with the Nats PUB/SUB server. Keep in mind that Nats is called a broker here, not an engine – the Nats integration only implements the PUB/SUB part of the Engine, so carefully read the limitations below.


  • The Nats integration works only for unreliable at-most-once PUB/SUB. This means that the history, presence, and message recovery Centrifugo features won't be available.
  • Nats wildcard channel subscriptions with the symbols * and > are not supported.

First start Nats server:

$ nats-server
[3569] 2020/07/08 20:28:44.324269 [INF] Starting nats-server version 2.1.7
[3569] 2020/07/08 20:28:44.324400 [INF] Git commit [not set]
[3569] 2020/07/08 20:28:44.325600 [INF] Listening for client connections on
[3569] 2020/07/08 20:28:44.325612 [INF] Server id is NDAM7GEHUXAKS5SGMA3QE6ZSO4IQUJP6EL3G2E2LJYREVMAMIOBE7JT4
[3569] 2020/07/08 20:28:44.325617 [INF] Server is ready

Then start Centrifugo with broker option:

centrifugo --broker=nats --config=config.json

And one more Centrifugo node on another port (of course, in real life you would start another Centrifugo on another machine):

centrifugo --broker=nats --config=config.json --port=8001

Now you can scale connections over Centrifugo instances; the instances will be connected over the Nats server.



Nats broker options

nats_url

String, default nats://127.0.0.1:4222.

Connection url in the format nats://derek:pass@localhost:4222.


nats_prefix

String, default centrifugo.

Prefix for channels used by Centrifugo inside Nats.


nats_dial_timeout

Duration, default 1s.

Timeout for dialing with Nats.


nats_write_timeout

Duration, default 1s.

Write (and flush) timeout for a connection to Nats.
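Combining the options above, a Nats broker configuration sketch (the URL, prefix, and timeouts are shown with their default values, so listing them explicitly is optional):

```json
{
  "broker": "nats",
  "nats_url": "nats://127.0.0.1:4222",
  "nats_prefix": "centrifugo",
  "nats_dial_timeout": "1s",
  "nats_write_timeout": "1s"
}
```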