Built-in API command async consumers
In the server API chapter we've shown how to execute various Centrifugo server API commands (publish, broadcast, etc.) over HTTP or GRPC. In many cases you will call those APIs from your application business logic synchronously. But to deal with temporary network and availability issues, and to achieve reliable execution of API commands upon changes in your primary application database, you may want to use queuing techniques and call the Centrifugo API asynchronously.
Asynchronous delivery of real-time events upon changes in the primary database may be done in several ways. Some companies use the transactional outbox pattern, others use techniques like Kafka Connect with the CDC (Change Data Capture) approach. Since Centrifugo provides an API, users can implement any of those techniques and build a worker which will send API commands to Centrifugo reliably.
But Centrifugo also provides some built-in asynchronous consumers to simplify the integration process.
Supported consumers
The following built-in async consumers are available at this point:
- from PostgreSQL outbox table
- from Kafka topics
- from NATS JetStream
- from Redis Streams
- from Google Cloud Pub/Sub
- from AWS SQS
- from Azure Service Bus
Again, while built-in consumers can simplify integration, you can still use whatever queue system you need and integrate your own consumer with Centrifugo, sending requests to the server API.
We also recommend looking at the Pitfalls of async publishing part of our previous blog post – while in many cases you get reliable at-least-once processing, you may come across some pitfalls along the way, and being prepared for and understanding them is important. Then, depending on the real-time feature, you can decide which approach is better – synchronous publishing or asynchronous integration.
How consumers work
By default, consumers expect to consume messages which represent Centrifugo server API commands. That is, while with the synchronous server API you use HTTP or GRPC to send commands, with asynchronous consumers you insert an API command into a PostgreSQL outbox table, or deliver it to a Kafka topic – and it will soon be consumed and processed asynchronously by Centrifugo.
Async consumers only process commands which modify state – such as publish, broadcast, unsubscribe, disconnect, etc. Sending read commands for async execution simply does not make sense, and they will be ignored. Also, the batch method is not supported.
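For illustration, a message consumed in this default mode is a JSON object carrying the command method and its payload – the same fields that appear in the outbox table described below. A sketch of a publish command message (exact shape may vary per consumer, so treat this as an illustration):

```json
{
  "method": "publish",
  "payload": {
    "channel": "updates",
    "data": {"text": "Hello, world!"}
  }
}
```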
Centrifugo only supports JSON payloads for asynchronous commands coming to consumers for now. If you need a binary format – reach out with your use case.
If Centrifugo encounters an error while processing a consumed message, internal errors are retried, while all other errors are logged at the error level and the message is marked as processed. The processing logic for the broadcast API is special: if publishing to any channel from the broadcast channels array fails, the entire broadcast command will be retried. To prevent duplicate messages being published during such retries, consider using idempotency_key in the broadcast command.
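A sketch of a broadcast command carrying an idempotency_key (the key value here is an arbitrary example – any string unique to the logical event works):

```json
{
  "method": "broadcast",
  "payload": {
    "channels": ["updates:1", "updates:2"],
    "data": {"text": "Hello, world!"},
    "idempotency_key": "event-12345"
  }
}
```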
Our Chat/Messenger tutorial shows the PostgreSQL outbox and Kafka consumers in action. It also shows techniques to avoid duplicate messages (idempotent publications) and to deal with late message delivery (idempotent processing on the client side). Whether you need those techniques depends on the nature of the app. Various real-time features may require different ways of sending real-time events. Both synchronous and asynchronous API calls have their own advantages and trade-offs. We also talk about this in the Asynchronous message streaming to Centrifugo with Benthos blog post.
Publication data mode
As mentioned, Centrifugo expects server API commands in the received message content. Once the command is consumed, it's processed in the same way as HTTP or GRPC server APIs process the request.
Sometimes though, you may have a system that already produces messages in a format ready to be published into Centrifugo channels. Most Centrifugo async consumers have a special mode to consume publications – called Publication Data Mode. In that mode, the payload of the message must contain data ready to be published into Centrifugo channels. Users can provide Centrifugo-specific publication fields, like the list of channels to publish into, in message headers/attributes. See the documentation of each specific consumer to figure out exact option names. For example, the Kafka consumer has a publication data mode. And similar for other consumers.
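Conceptually, with the Kafka consumer in publication data mode a produced record could look like this – the value is the raw data to publish, and the target channels travel in a header (the header name shown is illustrative; check the Kafka consumer docs for the actual option names and defaults):

```
value:   {"text": "Hello, world!"}
headers: centrifugo-channels = "updates"
```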
Note, since you can configure many consumers in Centrifugo configuration – it's totally possible to have consumers working in different modes.
Ordering guarantees
Carefully read the specific consumer documentation to understand its message processing ordering properties – ordered processing can be achieved with some of them, and cannot with others.
How to enable
Consumers can be set in the configuration using the consumers array:
```json
{
  "consumers": [
    {
      "enabled": true,
      "name": "xxx",
      "type": "postgresql",
      "postgresql": {...}
    },
    {
      "enabled": true,
      "name": "yyy",
      "type": "kafka",
      "kafka": {...}
    }
  ]
}
```
consumers[]
So consumers may be configured using the consumers array on the configuration top level. Each consumer object in the consumers array has the following configuration options.
consumers[].enabled
Boolean. Default: false.
When set to true allows enabling the configured consumer.
consumers[].name
String. Default: "". Required.
Describes the name of the consumer. Must be unique for each consumer and match the regex ^[a-zA-Z0-9_]{2,} – i.e. latin symbols, digits and underscores, at least 2 symbols long. This name will be used for logging purposes, metrics, and also to override some options with environment variables.
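As a quick sanity check, the documented pattern can be applied to candidate names before putting them into the config. A minimal sketch using Python's re module (fullmatch is used here to anchor the pattern on both ends):

```python
import re

# Pattern from the docs: latin symbols, digits and underscores, at least 2 chars.
NAME_RE = re.compile(r"[a-zA-Z0-9_]{2,}")

def is_valid_consumer_name(name: str) -> bool:
    """Return True if the name matches the documented consumer name rules."""
    return NAME_RE.fullmatch(name) is not None

print(is_valid_consumer_name("my_postgresql_consumer"))  # True
print(is_valid_consumer_name("x"))                       # False: too short
print(is_valid_consumer_name("bad-name"))                # False: '-' not allowed
```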
consumers[].type
String. Default: "". Required.
Type of consumer – one of the supported consumer types listed above, for example postgresql or kafka.
Configure via env vars
To provide consumers over an environment variable, provide the CENTRIFUGO_CONSUMERS var with the JSON array serialized to a string.
It's also possible to override consumer options over environment variables by using the name of the consumer. For example:
CENTRIFUGO_CONSUMERS_<CONSUMER_NAME>_<OPTION_NAME>="???"
Or for type-specific configuration:
CENTRIFUGO_CONSUMERS_<CONSUMER_NAME>_POSTGRESQL_<OPTION_NAME2>="???"
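Putting it together, the whole array and per-consumer overrides could look like this (values are illustrative):

```bash
# Whole consumers array as a JSON string:
export CENTRIFUGO_CONSUMERS='[{"enabled": true, "name": "my_postgresql_consumer", "type": "postgresql"}]'

# Override a single option of the consumer named my_postgresql_consumer:
export CENTRIFUGO_CONSUMERS_MY_POSTGRESQL_CONSUMER_ENABLED="true"

# Override a PostgreSQL-specific option:
export CENTRIFUGO_CONSUMERS_MY_POSTGRESQL_CONSUMER_POSTGRESQL_DSN="postgresql://user:password@localhost:5432/db"
```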
PostgreSQL outbox consumer
Centrifugo can natively integrate with a PostgreSQL table for the Transactional outbox pattern. The table in PostgreSQL must have a predefined format Centrifugo expects:
```sql
CREATE TABLE IF NOT EXISTS centrifugo_outbox (
  id BIGSERIAL PRIMARY KEY,
  method text NOT NULL,
  payload JSONB NOT NULL,
  partition INTEGER NOT NULL default 0,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT now() NOT NULL
);
```
Then configure a consumer of postgresql type in the Centrifugo config:
```json
{
  ...
  "consumers": [
    {
      "enabled": true,
      "name": "my_postgresql_consumer",
      "type": "postgresql",
      "postgresql": {
        "dsn": "postgresql://user:password@localhost:5432/db",
        "outbox_table_name": "centrifugo_outbox",
        "num_partitions": 1,
        "partition_select_limit": 100,
        "partition_poll_interval": "300ms"
      }
    }
  ]
}
```
Here is how you can insert a row into the outbox table to publish into a Centrifugo channel:
```sql
INSERT INTO centrifugo_outbox (method, payload, partition)
VALUES ('publish', '{"channel": "updates", "data": {"text": "Hello, world!"}}', 0);
```
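In application code you would typically perform this insert inside the same transaction as your business-data writes – that is the point of the outbox pattern. A minimal Python sketch of preparing such an insert (no real database driver is used here; choosing the partition by hashing the channel name is one possible convention for keeping per-channel ordering within a partition, not something Centrifugo requires):

```python
import json
import zlib

NUM_PARTITIONS = 1  # must match consumers[].postgresql.num_partitions

def outbox_row(channel: str, data: dict) -> tuple[str, str, int]:
    """Build (method, payload, partition) values for the outbox insert."""
    payload = json.dumps({"channel": channel, "data": data})
    # Stable hash so all commands for one channel land in one partition,
    # preserving ordering for that channel.
    partition = zlib.crc32(channel.encode()) % NUM_PARTITIONS
    return "publish", payload, partition

# Parameterized statement to execute within your business transaction:
SQL = (
    "INSERT INTO centrifugo_outbox (method, payload, partition) "
    "VALUES (%s, %s, %s)"
)

method, payload, partition = outbox_row("updates", {"text": "Hello, world!"})
print(method, partition)  # publish 0 (with NUM_PARTITIONS == 1)
```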
Centrifugo supports the LISTEN/NOTIFY mechanism of PostgreSQL to be notified about new data in the outbox table. To enable it, first create a trigger in PostgreSQL:
```sql
CREATE OR REPLACE FUNCTION centrifugo_notify_partition_change()
RETURNS TRIGGER AS $$
BEGIN
  PERFORM pg_notify('centrifugo_partition_change', NEW.partition::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER centrifugo_notify_partition_trigger
AFTER INSERT ON centrifugo_outbox
FOR EACH ROW
EXECUTE FUNCTION centrifugo_notify_partition_change();
```
And then update the consumer config – add the "partition_notification_channel" option to it:
```json
{
  ...
  "consumers": [
    {
      "enabled": true,
      "name": "my_postgresql_consumer",
      "type": "postgresql",
      "postgresql": {
        ...
        "partition_notification_channel": "centrifugo_partition_change"
      }
    }
  ]
}
```
consumers[].postgresql
Options for consumer of postgresql type.
consumers[].postgresql.dsn
String. Default: "". Required.
DSN to PostgreSQL database, ex. "postgresql://user:password@localhost:5432/db". To override dsn over environment variables use CENTRIFUGO_CONSUMERS_<CONSUMER_NAME>_POSTGRESQL_DSN.
consumers[].postgresql.outbox_table_name
String. Default: "". Required.
The name of the outbox table in the selected database, ex. "centrifugo_outbox".
consumers[].postgresql.num_partitions
Integer. Default: 1.
The number of partitions to use. Centrifugo keeps strict order of commands per partition by default. This option provides a way to create concurrent consumers, each consuming from a different partition of the outbox table. Note that partition numbers start with 0, so when using 1 as num_partitions, insert data with partition == 0 into the outbox table.
consumers[].postgresql.partition_select_limit
Integer. Default: 100.
Max number of commands to select in one query to the outbox table.
consumers[].postgresql.partition_poll_interval
Duration. Default: "300ms".
Polling interval for each partition.
consumers[].postgresql.partition_notification_channel
String. Default: "".
Optional name of a LISTEN/NOTIFY channel to trigger consuming upon data added to an outbox partition.
consumers[].postgresql.tls
TLS object. By default, no TLS is used.
Client TLS configuration for PostgreSQL connection.