About Data Center Replication

GridGain enables you to replicate tables between multiple data centers and quickly recover when a data center goes offline.

Data replication applies to tables only. It does not copy cluster configuration, so you need to configure replica clusters manually. This approach allows for various replication topologies. For example, one cluster can replicate updates to multiple clusters, and multiple clusters can replicate data to each other.

When working with multiple data centers, it is important to make sure that if one data center goes down, another data center is fully capable of picking up its load and data.

When data center replication is turned on, GridGain automatically makes sure that each cluster consistently synchronizes its data to one or more other data centers.

Data Replication Modes

GridGain supports the active-passive replication mode.

In active-passive mode, only one cluster (the source cluster) interacts with the user application, while the other cluster (the replica cluster) stores data purely for redundancy and failover purposes. The source cluster regularly sends updates to the replica cluster, keeping its data up to date.

Below is the minimum recommended configuration for data center replication:

  • Two sender and two receiver nodes to provide redundancy in case of connection issues.

  • 2 CPU cores and 8 GB of memory available for each node involved in the replication.

    • Source (sender) nodes generally require less hardware.

    • Target (receiver) nodes generally require more hardware.

Known Limitations

Data center replication has the following limitations:

  • Tables cannot be removed while replication is in progress. Moreover, no schema update operation is possible until replication finishes (the operation will be frozen). Replication must be stopped before truncating tables.

  • Replication fails if the source and target table schemas are not in sync. Schemas must be synchronized manually before replication starts and must not change while replication is in progress (see the example after this list).

  • In case of a rolling restart or upgrade, all replications should be stopped or paused before the nodes are restarted.
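
For example, keeping schemas in sync typically means running identical DDL on both the source and the replica cluster before replication starts. The table and column names below are illustrative only:

    -- Run the same statement on the source cluster and on the replica cluster
    -- before starting replication (table and column names are examples).
    CREATE TABLE Person (
        id   INT PRIMARY KEY,
        name VARCHAR,
        city VARCHAR
    );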