Control Script
GridGain provides a command line script — control.sh|bat
— that you can use to monitor and control your clusters.
The script is located under the /bin/
folder of the installation directory.
You can define the control.sh|bat logging configuration via the CONTROL_JVM_OPTS environment variable:
$ CONTROL_JVM_OPTS=-Djava.util.logging.config.file=<PATH_TO_CONFIG> ./control.sh
:: On Windows, set the variable first, then run the script:
set CONTROL_JVM_OPTS=-Djava.util.logging.config.file=<PATH_TO_CONFIG>
control.bat
Commands
The control script supports the commands described in the sections below.
Script Syntax
The control script has the following syntax:
control.sh <connection parameters> <command> <arguments>
control.bat <connection parameters> <command> <arguments>
- <connection parameters> – parameters the script uses to connect to a cluster node. These parameters are required for the commands that are executed on the cluster nodes. If no connection parameters are provided, the control script tries to connect to a node running on localhost (localhost:11211).
- <command> – one of the commands described below.
- <arguments> – command-specific arguments.
Connection Parameters
Parameter | Description | Default Value |
---|---|---|
--host HOST_OR_IP | The host name or IP address of the node. | localhost |
--port PORT | The port to connect to. The port must be open on the host specified in the --host parameter. | 11211 |
--user USER | The user name. | |
--password PASSWORD | The user password. | |
--ping-interval PING_INTERVAL | The ping interval, in milliseconds. | 5000 |
--ping-timeout PING_TIMEOUT | The ping response timeout, in milliseconds. | 30000 |
--ssl-protocol PROTOCOL1,PROTOCOL2… | A comma-separated list of SSL protocols to try when connecting to the cluster. | |
--ssl-cipher-suites CIPHER1,CIPHER2… | A comma-separated list of SSL cipher suites. | |
--ssl-key-algorithm ALG | The SSL key algorithm. | |
--keystore-type KEYSTORE_TYPE | The keystore type. | |
--keystore KEYSTORE_PATH | The path to the keystore. Specify a keystore to enable SSL for the control script. | |
--keystore-password KEYSTORE_PWD | The keystore password. | |
--truststore-type TRUSTSTORE_TYPE | The truststore type. | |
--truststore TRUSTSTORE_PATH | The path to the truststore. | |
--truststore-password TRUSTSTORE_PWD | The truststore password. | |
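For example, the following invocation (with a hypothetical address and credentials) checks the state of a remote cluster:
control.sh --host 172.16.0.10 --port 11211 --user gridgain --password mypassword --state
control.bat --host 172.16.0.10 --port 11211 --user gridgain --password mypassword --state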
Activation, Deactivation, and Topology Management
You can use the control script to activate or deactivate your cluster, and manage the Baseline Topology.
set-state
Use this command to set a cluster’s state:
control.sh --set-state <state> [--yes]
control.bat --set-state <state> [--yes]
The command arguments are as follows.
Argument | Description | Values |
---|---|---|
state | The state to put the cluster in. | ACTIVE, INACTIVE, ACTIVE_READ_ONLY |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. | |
state
To get the state of a cluster (activated or not), use the following syntax:
control.sh --state
control.bat --state
baseline
To get a list of nodes registered in the baseline topology, run the following command:
control.sh --baseline
control.bat --baseline
The output contains the current topology version, the list of consistent IDs of the nodes included in the baseline topology, and the list of nodes that joined the cluster but were not added to the baseline topology.
Command [BASELINE] started
Arguments: --baseline
--------------------------------------------------------------------------------
Cluster state: active
Current topology version: 3 (Coordinator: ConsistentId=dd3d3959-4fd6-4dc2-8199-bee213b34ff1, Order=1)
Baseline nodes:
ConsistentId=7d79a1b5-cbbd-4ab5-9665-e8af0454f178, State=ONLINE, Order=2
ConsistentId=dd3d3959-4fd6-4dc2-8199-bee213b34ff1, State=ONLINE, Order=1
--------------------------------------------------------------------------------
Number of baseline nodes: 2
Other nodes:
ConsistentId=30e16660-49f8-4225-9122-c1b684723e97, Order=3
Number of other nodes: 1
Command [BASELINE] finished with code: 0
Control utility has completed execution at: 2019-12-24T16:53:08.392865
Execution time: 333 ms
baseline add
To add a node (or multiple nodes) to the baseline topology, run the following command.
control.sh --baseline add <consistentId1,consistentId2,...> [--yes]
control.bat --baseline add <consistentId1,consistentId2,...> [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
consistentId1,consistentId2,… | A comma-separated list of consistent IDs of the nodes to add. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
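For example, to add a node with a hypothetical consistent ID to the baseline topology without interactive prompts:
control.sh --baseline add node2-consistent-id --yes
control.bat --baseline add node2-consistent-id --yes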
baseline remove
To remove a node from the baseline topology, use the following command.
control.sh --baseline remove <consistentId1,consistentId2,...> [--yes]
control.bat --baseline remove <consistentId1,consistentId2,...> [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
consistentId1,consistentId2,… | A comma-separated list of consistent IDs of the nodes to remove. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
baseline set
You can set the baseline topology by providing a list of nodes (consistent IDs).
The command syntax is as follows:
control.sh --baseline set <consistentId1,consistentId2,...> [--yes]
control.bat --baseline set <consistentId1,consistentId2,...> [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
consistentId1,consistentId2,… | A comma-separated list of consistent IDs of the nodes to include in the baseline topology. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
baseline version
To restore a specific version of the baseline topology, use the following command:
control.sh --baseline version <topologyVersion> [--yes]
control.bat --baseline version <topologyVersion> [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
topologyVersion | The version of the baseline topology to restore. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
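For example, to roll back to topology version 3 (the version reported in the --baseline output above):
control.sh --baseline version 3 --yes
control.bat --baseline version 3 --yes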
baseline autoadjust enable
Baseline topology autoadjustment is an automatic update of baseline topology after the topology has been stable for a specific amount of time.
For in-memory clusters, autoadjustment is enabled by default with the timeout set to 0. It means that baseline topology changes immediately after server nodes join or leave the cluster.
For clusters with persistence, the automatic baseline adjustment is disabled by default.
To enable autoadjust, use the following command:
control.sh --baseline auto_adjust enable timeout <value>
control.bat --baseline auto_adjust enable timeout <value>
The command arguments are as follows.
Argument | Description | Default Value |
---|---|---|
timeout | The autoadjustment timeout, in milliseconds. The baseline is set to the current topology when the given number of milliseconds has passed after the last JOIN/LEFT/FAIL event. Every new JOIN/LEFT/FAIL event restarts the timeout countdown. | |
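For example, to have the baseline adjust automatically one minute after the topology stabilizes:
control.sh --baseline auto_adjust enable timeout 60000
control.bat --baseline auto_adjust enable timeout 60000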
baseline autoadjust disable
To disable baseline autoadjustment, use the following command:
control.sh --baseline auto_adjust disable
control.bat --baseline auto_adjust disable
Transaction Management
The control script allows you to get information about the transactions that are executed in the cluster, as well as to cancel specific transactions.
tx --info
The following command returns a list of transactions that meet the filter conditions (or all transactions if no filter is defined):
control.sh --tx --info <transaction filter>
control.bat --tx --info <transaction filter>
The transaction filter parameters are as follows.
Parameter | Description |
---|---|
--xid XID | The transaction ID. |
--min-duration SECONDS | The minimum number of seconds a transaction has been executing. |
--min-size SIZE | The minimum transaction size. |
--label LABEL | The user label for transactions (regular expressions are supported). |
--servers|--clients | Limits the scope of the operation to either the server or client nodes. |
--nodes nodeId1,nodeId2… | A comma-separated list of consistent IDs of the nodes to get transactions from. |
--limit NUMBER | Limits the number of transactions returned. |
--order DURATION|SIZE|START_TIME | The parameter to sort the output by. |
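For example, a hypothetical filter that lists up to ten transactions that have been running for at least 60 seconds, sorted by duration:
control.sh --tx --info --min-duration 60 --limit 10 --order DURATION
control.bat --tx --info --min-duration 60 --limit 10 --order DURATION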
tx --kill
To cancel transactions, use the following command:
control.sh --tx <transaction filter> --kill
control.bat --tx <transaction filter> --kill
The transaction filter parameters are the same as for the tx --info command described above.
The command affects transactions in the following states:
- ACTIVE
- PREPARING
- PREPARED
For example, to cancel the transactions that have been running for more than 100 seconds, execute the following command:
control.sh --tx --min-duration 100 --kill
control.bat --tx --min-duration 100 --kill
cache contention
Use this command to detect situations where multiple transactions are contending for a lock on the same key. The command is useful if you have long-running or hanging transactions.
The command syntax is as follows:
control.sh --cache contention <arguments>
control.bat --cache contention <arguments>
The command arguments are as follows:
Argument | Description |
---|---|
min queue size | The minimum number of transactions waiting for a specific key for the contention to be detected. |
node id | Optional; the ID of the node to query for contentions. |
max lines | Optional; the number of lines to print in the output. |
Example:
# Reports all keys that are a point of contention for at least 5 transactions on all cluster nodes.
control.sh --cache contention 5
# Reports all keys that are a point of contention for at least 5 transactions on a specific server node.
control.sh --cache contention 5 f2ea-5f56-11e8-9c2d-fa7a
# Reports all keys that are a point of contention for at least 5 transactions on all cluster nodes.
control.bat --cache contention 5
# Reports all keys that are a point of contention for at least 5 transactions on a specific server node.
control.bat --cache contention 5 f2ea-5f56-11e8-9c2d-fa7a
If contended keys are detected, the command dumps extensive information including the keys, transactions, and nodes where the contention took place.
Example:
[node=TcpDiscoveryNode [id=d9620450-eefa-4ab6-a821-644098f00001, addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47501], discPort=47501, order=2, intOrder=2, lastExchangeTime=1527169443913, loc=false, ver=2.5.0#20180518-sha1:02c9b2de, isClient=false]]
// No contention on node d9620450-eefa-4ab6-a821-644098f00001.
[node=TcpDiscoveryNode [id=03379796-df31-4dbd-80e5-09cef5000000, addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1527169443913, loc=false, ver=2.5.0#20180518-sha1:02c9b2de, isClient=false]]
TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=CREATE, val=UserCacheObjectImpl [val=0, hasValBytes=false], tx=GridNearTxLocal[xid=e9754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439646, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1247], other=[]]
TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=8a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439656, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]
TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=6a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439654, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]
TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=7a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439655, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]
TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=4a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439652, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]
// Node 03379796-df31-4dbd-80e5-09cef5000000 is a point of contention for key KeyCacheObjectImpl [part=0, val=0, hasValBytes=false].
Cache Management
cache list
This command is used for cache monitoring. It retrieves a list of deployed caches, their affinity/distribution parameters, and their distribution within cache groups. There is also a command option for viewing existing atomic sequences.
Use the following command syntax:
control.sh --cache list <arguments>
control.bat --cache list <arguments>
The command arguments are as follows.
Argument | Description |
---|---|
. | Lists all caches. |
account-.* | Lists caches whose names start with "account-". |
. --groups | Displays info about cache group distribution for all caches. |
account-.* --groups | Displays info about cache group distribution for the caches whose names start with "account-". |
. --seq | Displays info about all atomic sequences. |
counter-.* --seq | Displays info about the atomic sequences whose names start with "counter-". |
cache reset_lost_partitions
This command resets lost partitions in the specified caches. Refer to Partition Loss Policy for details.
Use the following command syntax:
control.sh --cache reset_lost_partitions <cacheName1,cacheName2,...> | --all
control.bat --cache reset_lost_partitions <cacheName1,cacheName2,...> | --all
The command arguments are as follows.
Argument | Description |
---|---|
cacheName1,cacheName2,… | A comma-separated list of the caches to be affected by the command. |
--all | Makes the command check all caches for lost partitions, print a list of caches with lost partitions (if any are found), and reset lost partitions in all the caches it has found. |
cache idle_verify
This command verifies counters and hash sums of primary and backup partitions for the specified caches or cache groups on an idle cluster and prints out the differences, if any.
Use the following command syntax:
control.sh --cache idle_verify [--dump] [--skip-zeros] [--check-crc] [--exclude-caches <cacheName1,...,cacheNameN>] [--cache-filter ALL|USER|SYSTEM|PERSISTENT|NOT_PERSISTENT] [cacheName1,...,cacheNameN]
control.bat --cache idle_verify [--dump] [--skip-zeros] [--check-crc] [--exclude-caches <cacheName1,...,cacheNameN>] [--cache-filter ALL|USER|SYSTEM|PERSISTENT|NOT_PERSISTENT] [cacheName1,...,cacheNameN]
The command arguments are as follows.
Argument | Description |
---|---|
--dump | Writes the command response to a local file on each node (in addition to returning the response in the terminal window). |
--skip-zeros | Skips zero-sized partitions. |
--check-crc | Checks the CRC sum of pages stored on disk before verifying partition data consistency between primary and backup nodes. |
--exclude-caches | Excludes the caches provided as a comma-separated list. |
--cache-filter | Limits the caches affected by the command to only USER caches, only user PERSISTENT caches, only user NOT_PERSISTENT caches, only SYSTEM caches, or ALL of the above. |
cacheName1,…,cacheNameN | A comma-separated list of the caches to be affected by the command. |
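For example, to verify two hypothetical caches, skipping empty partitions and dumping the results to a file on each node:
control.sh --cache idle_verify --dump --skip-zeros myCache1,myCache2
control.bat --cache idle_verify --dump --skip-zeros myCache1,myCache2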
cache partition_reconciliation
Partition reconciliation is a consistency-checking process whose goal is to verify the internal data consistency invariants and fix the inconsistent entries. The main difference between idle_verify and partition_reconciliation is that the latter can work under load.
If the topology is changed while the script is running, or if the task execution fails, the command is automatically cancelled.
control.sh --cache partition_reconciliation <cache1,cache2,cache3...>
control.bat --cache partition_reconciliation <cache1,cache2,cache3...>
The command has the following optional arguments.
Argument | Description | Default Value |
---|---|---|
cache1,cache2,… | A comma-separated list of the caches to be affected by the command. If no caches are specified, the command is executed on all caches. | |
--repair {option} | If specified, fixes all inconsistent data. You can choose the repair algorithm for keys where the valid value is not obvious. | |
--fast-check | Checks and repairs only those partitions that did not pass validation during the last partition map exchange. If not specified, all partitions are checked and repaired. | |
--parallelism | The maximum number of threads that can be involved in the reconciliation activities. If not specified, the number of the node’s cores is used. | |
--batch-size | The number of keys to retrieve within one job. | 1000 |
--include-sensitive | Includes sensitive information (keys and values) in the printout. | |
--recheck-attempts | The number of recheck attempts for the potentially inconsistent keys. The recommended value is between 1 and 5. | 2 |
--recheck-delay | The recheck delay, in seconds. | 5 |
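For example, a hypothetical run against a single cache that checks only the partitions flagged during the last partition map exchange and retrieves 500 keys per job:
control.sh --cache partition_reconciliation myCache --fast-check --batch-size 500
control.bat --cache partition_reconciliation myCache --fast-check --batch-size 500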
cache partition_reconciliation_cancel
Use this command to safely stop the partition reconciliation process. All changes done before the cancellation are preserved.
control.sh --cache partition_reconciliation_cancel
control.bat --cache partition_reconciliation_cancel
cache destroy
Use this command to destroy caches.
The syntax is as follows:
control.sh --cache destroy --caches <cache1,...,cacheN>|--destroy-all-caches
control.bat --cache destroy --caches <cache1,...,cacheN>|--destroy-all-caches
The command uses one of the following two arguments:
Argument | Description |
---|---|
--caches | A comma-separated list of names of the caches to be destroyed. |
--destroy-all-caches | Destroys all user-created caches. |
cache clear
Use this command to clear caches.
The syntax is as follows:
control.sh --cache clear --caches <cache1,...,cacheN>
control.bat --cache clear --caches <cache1,...,cacheN>
The command uses the following argument:
Argument | Description |
---|---|
--caches | A comma-separated list of names of the caches to be cleared. |
Tracing Configuration Management
The control.sh|bat
script includes a set of commands that enable you to manage tracing configurations.
tracing-configuration
Use this command to print tracing configuration for all scopes and labels.
The syntax is as follows:
control.sh --tracing-configuration
control.bat --tracing-configuration
tracing-configuration get_all
Use this command to print the tracing configuration for all labels and, optionally, for the specified --scope only.
The syntax is as follows:
control.sh --tracing-configuration get_all [--scope <scope>]
control.bat --tracing-configuration get_all [--scope <scope>]
The command uses the following argument:
Argument | Description |
---|---|
--scope | The scope to print the tracing configuration for. |
tracing-configuration get
Use this command to print the tracing configuration for the specified --scope and, optionally, --label.
The syntax is as follows:
control.sh --tracing-configuration get --scope <scope> [--label <label>]
control.bat --tracing-configuration get --scope <scope> [--label <label>]
The command uses the following arguments:
Argument | Description |
---|---|
--scope | The scope to print the tracing configuration for. |
--label |
The label to print the tracing configuration for. |
tracing-configuration reset_all
Use this command to reset tracing configurations to default values, then print the reset configurations, optionally for the specified --scope only.
The syntax is as follows:
control.sh --tracing-configuration reset_all [--scope <scope>]
control.bat --tracing-configuration reset_all [--scope <scope>]
The command uses the following argument:
Argument | Description |
---|---|
--scope | Optional; if specified, the command applies only to the given scope. |
tracing-configuration reset
Use this command to reset tracing configurations to default values, then print the reset configurations. If both --scope and --label are specified, the command removes the current configuration. If only --scope is specified, it resets the current configuration to the default.
The syntax is as follows:
control.sh --tracing-configuration reset --scope <scope> [--label <label>]
control.bat --tracing-configuration reset --scope <scope> [--label <label>]
The command uses the following arguments:
Argument | Description |
---|---|
--scope | The command applies only to configurations for the specified scope. |
--label |
If specified, the command applies to configurations for the specified label. |
tracing-configuration set
Use this command to set a new tracing configuration, then print it.
The syntax is as follows:
control.sh --tracing-configuration set --scope <scope> [--label <label>] [--sampling-rate <samplingRate>] [--included-scopes <scope, ...>]
control.bat --tracing-configuration set --scope <scope> [--label <label>] [--sampling-rate <samplingRate>] [--included-scopes <scope, ...>]
The command uses the following arguments:
Argument | Description |
---|---|
--scope | The command applies only to configurations for the specified scope. |
--label | If specified, the command applies to configurations for the specified label. |
--sampling-rate | A decimal value between 0 and 1, where 0 means "never" and 1 means "always." Reflects the probability of sampling a specific trace. |
--included-scopes | A comma-separated list of scopes that defines the sub-traces to be included in a given trace. In other words, if a child span’s scope equals the parent’s scope, or if it belongs to the set of scopes included in the parent span, the child span is attached to the current trace. |
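For example, assuming the TX scope is available in your cluster, the following samples half of all transaction traces:
control.sh --tracing-configuration set --scope TX --sampling-rate 0.5
control.bat --tracing-configuration set --scope TX --sampling-rate 0.5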
Consistency Checks
The control.sh|bat
script includes a set of commands that enable you to verify the internal data consistency.
These commands can be used for:
- Debugging and troubleshooting, especially during active development
- Checking for data inconsistencies when there is a suspicion that a query (such as an SQL query) returns an incomplete or wrong result set
- Regular cluster health monitoring
cache idle_verify
This command compares the hash of the primary partition with that of the backup partitions and reports differences, if any. Such differences might result from a node failure or an incorrect shutdown during an update operation.
The command syntax is as follows:
control.sh --cache idle_verify [<cache1,cache2,cache3...>]
control.bat --cache idle_verify [<cache1,cache2,cache3...>]
The command has the following optional arguments.
Argument | Description |
---|---|
cache1,cache2,… | A comma-separated list of the caches to be affected by the command. |
A list of diverging partitions is printed out, as follows:
idle_verify check has finished, found 2 conflict partitions.
Conflict partition: PartitionKey [grpId=1544803905, grpName=default, partId=5]
Partition instances: [PartitionHashRecord [isPrimary=true, partHash=97506054, updateCntr=3, size=3, consistentId=bltTest1], PartitionHashRecord [isPrimary=false, partHash=65957380, updateCntr=3, size=2, consistentId=bltTest0]]
Conflict partition: PartitionKey [grpId=1544803905, grpName=default, partId=6]
Partition instances: [PartitionHashRecord [isPrimary=true, partHash=97595430, updateCntr=3, size=3, consistentId=bltTest1], PartitionHashRecord [isPrimary=false, partHash=66016964, updateCntr=3, size=2, consistentId=bltTest0]]
cache check_index_inline_sizes
This command verifies that index inline size is the same on all cluster nodes. Every entry in the SQL index has a constant size, which is calculated during the index creation. Having different sizes on different nodes in a cluster may lead to performance issues.
The command syntax is as follows:
control.sh --cache check_index_inline_sizes
control.bat --cache check_index_inline_sizes
The command returns a response in the command line.
cache validate_indexes
This command validates the indexes of the specified caches on the specified cluster nodes.
It verifies that:
- All key-value entries referenced from the primary index are reachable from the secondary SQL indexes.
- All key-value entries referenced from the primary index are reachable.
- All key-value entries referenced from the secondary SQL indexes are reachable from the primary index.
The command syntax is as follows:
control.sh --cache validate_indexes <optional arguments>
control.bat --cache validate_indexes <optional arguments>
The command can use one or both of the following optional arguments:
Argument | Description |
---|---|
cache1,…,cacheN | A comma-separated list of names of the caches to be validated. |
nodeId (e.g., f2ea-5f56-11e8-9c2d-fa7a) | The ID of the cluster node on which the caches should be validated. |
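For example, to validate a single cache (using the cache name from the sample output below) on one hypothetical node:
control.sh --cache validate_indexes persons-cache-vi f2ea-5f56-11e8-9c2d-fa7a
control.bat --cache validate_indexes persons-cache-vi f2ea-5f56-11e8-9c2d-fa7a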
If indexes refer to non-existing entries, or if some entries are not indexed, errors appear in the command output. Example:
PartitionKey [grpId=-528791027, grpName=persons-cache-vi, partId=0] ValidateIndexesPartitionResult [updateCntr=313, size=313, isPrimary=true, consistentId=bltTest0]
IndexValidationIssue [key=0, cacheName=persons-cache-vi, idxName=_key_PK], class org.apache.ignite.IgniteCheckedException: Key is present in CacheDataTree, but can't be found in SQL index.
IndexValidationIssue [key=0, cacheName=persons-cache-vi, idxName=PERSON_ORGID_ASC_IDX], class org.apache.ignite.IgniteCheckedException: Key is present in CacheDataTree, but can't be found in SQL index.
validate_indexes has finished with errors (listed above).
cache distribution
This command prints information about partition distribution.
The command syntax is as follows:
control.sh --cache distribution nodeId|null [cacheName1,...,cacheNameN] [--user-attributes attrName1,...,attrNameN]
control.bat --cache distribution nodeId|null [cacheName1,...,cacheNameN] [--user-attributes attrName1,...,attrNameN]
The command arguments are as follows.
Argument | Description |
---|---|
nodeId|null | The ID of the cluster node whose cache distribution must be queried (e.g., f2ea-5f56-11e8-9c2d-fa7a), or "null" to query all nodes. |
cacheName1,…,cacheNameN | A comma-separated list of names of the caches whose distribution must be queried. Providing no cache names means "all caches." |
--user-attributes attrName1,…,attrNameN | A comma-separated list of the names of node attributes to be included in the command response. User attributes are configurable. |
Cluster Properties
The control.sh|bat
script enables you to view and set cluster properties, such as those that control the SQL statistics functionality.
property list
Use this command to get the full list of the available cluster properties.
The command syntax is as follows:
control.sh --property list
control.bat --property list
property set
Use this command to set property values.
The command syntax is as follows:
control.sh --property set --name <property name> --val <property value>
control.bat --property set --name <property name> --val <property value>
For example, to enable SQL statistics in a cluster:
control.sh --property set --name 'statistics.usage.state' --val 'ON'
control.bat --property set --name 'statistics.usage.state' --val 'ON'
property get
Use this command to get a property value.
The command syntax is as follows:
control.sh --property get --name <name>
control.bat --property get --name <name>
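For example, to read back the SQL statistics property set in the previous example:
control.sh --property get --name 'statistics.usage.state'
control.bat --property get --name 'statistics.usage.state'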
checkpoint
When you upload a large volume of data to the cluster, you can trigger a manual checkpoint to make sure that the data is persisted successfully.
Use the following command to trigger a checkpoint on all server nodes simultaneously and wait until the checkpoint completes:
control.sh --checkpoint
control.bat --checkpoint
In a large distributed database, triggering multiple checkpoints at the same time can create a spike in traffic and, consequently, negatively affect cluster performance. To prevent this, you can set the checkpoint.deviation property to offset checkpoint times, so that they happen within the specified interval around checkpoint.frequency. For example, if your checkpoints trigger every 100 seconds and you set checkpoint.deviation to 10%, the checkpoints will trigger every 95-105 seconds.
The example below offsets the checkpoints by 10% to reduce traffic issues:
control.sh --property set --name checkpoint.deviation --val 10
control.bat --property set --name checkpoint.deviation --val 10
Cluster Diagnostics
The control.sh|bat
script provides diagnostics for your cluster.
diagnostic
Use this command to get diagnostic information for your cluster (page locks or connectivity):
control.sh --diagnostic pageLocks|connectivity dump|dump_log [--path <absolute path>] [--all]|[--nodes <nodeId1,...,nodeIdN>]
control.bat --diagnostic pageLocks|connectivity dump|dump_log [--path <absolute path>] [--all]|[--nodes <nodeId1,...,nodeIdN>]
The command arguments are as follows:
Argument | Description |
---|---|
pageLocks | Determines what pages are currently locked and outputs the page lock information to a file or log. |
connectivity | Detects inter-node communication issues and outputs the corresponding information to a file or log. |
dump | Dumps the page lock information to a file. |
dump_log | Dumps the page lock information to the log. |
--path <path> | Optional. If used, provides an absolute path to the folder to write the output to. If not used, the data is saved in the work folder. |
--all | Gets information on all nodes. |
--nodes | A comma-separated list of IDs of the nodes to get information on. |
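For example, to dump page lock information from all nodes to the default work folder:
control.sh --diagnostic pageLocks dump --all
control.bat --diagnostic pageLocks dump --all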
Example pageLock
output:
Thread=[name=main, id=1], state=RUNNABLE
Locked pages = [2[0000000000000002](r=1|w=0)]
Locked pages log: name=main time=(1673270864117, 2023-01-09 15:27:44.117)
L=1 -> Read lock pageId=2, structureId=null [pageIdHex=0000000000000002, partId=0, pageIdx=2, flags=00000000]
Example connectivity
output:
There is no connectivity between the following nodes:
SOURCE-NODE-ID SOURCE-CONSISTENT-ID SOURCE-NODE-TYPE DESTINATION-NODE-ID DESTINATION_CONSISTENT_ID DESTINATION-NODE-TYPE
ea0b75f5-8c31-4934-b076-edb911800001 gridCommandHandlerTest1 SERVER 3db4a4bb-f758-458c-8569-1ed3e3500003 gridCommandHandlerTest3 SERVER
99ac27d8-4e5e-43cd-9acc-1320e6500002 gridCommandHandlerTest2 SERVER 3db4a4bb-f758-458c-8569-1ed3e3500003 gridCommandHandlerTest3 SERVER
Cluster Encryption
You can use the control.sh|bat
script to manage cluster encryption parameters. For more information about encryption keys, see the Transparent Data Encryption page.
encryption get_master_key_name
Use this command to get the name of the master key used for cluster encryption:
control.sh --encryption get_master_key_name
control.bat --encryption get_master_key_name
encryption change_master_key
Use this command to set the master key used for cluster encryption:
control.sh --encryption change_master_key <newMasterKeyName>
control.bat --encryption change_master_key <newMasterKeyName>
encryption change_cache_key
Use this command to change cache-level encryption keys:
control.sh --encryption change_cache_key <cacheGroupName>
control.bat --encryption change_cache_key <cacheGroupName>
encryption cache_key_ids
Use this command to list IDs of the cache group encryption keys:
control.sh --encryption cache_key_ids <cacheGroupName>
control.bat --encryption cache_key_ids <cacheGroupName>
encryption reencryption_status
Use this command to monitor cluster reencryption:
control.sh --encryption reencryption_status <cacheGroupName>
control.bat --encryption reencryption_status <cacheGroupName>
encryption suspend_reencryption
Use this command to pause reencryption of your cluster:
control.sh --encryption suspend_reencryption <cacheGroupName>
control.bat --encryption suspend_reencryption <cacheGroupName>
encryption resume_reencryption
When re-encryption is initiated on the cluster, you can monitor it with the reencryption_status
subcommand.
Use this command to resume reencryption of your cluster that you had previously suspended (see encryption suspend_reencryption):
control.sh --encryption resume_reencryption <cacheGroupName>
control.bat --encryption resume_reencryption <cacheGroupName>
encryption reencryption_rate_limit
Encryption requires a large amount of system resources. One way to address this is to suspend reencryption to allow the system to handle the workload - see encryption suspend_reencryption. Alternatively, you can use this command to limit the reencryption rate.
control.sh --encryption reencryption_rate_limit <limit in MB/sec>
control.bat --encryption reencryption_rate_limit <limit in MB/sec>
Kill Queries
kill sql
This command cancels a specific SQL query.
control.sh --kill SQL <queryId>
control.bat --kill SQL <queryId>
Use the SQL_QUERIES system view to get the IDs of currently running queries.
kill continuous
This command cancels a specific continuous query.
control.sh --kill CONTINUOUS <NodeId> <routineId>
control.bat --kill CONTINUOUS <NodeId> <routineId>
Use the CONTINUOUS_QUERIES system view to get the origin node ID and routine ID for your queries.
Rolling Upgrades
You can manage rolling upgrades using the control.sh|bat
script.
rolling-upgrade start
Use this command to enable rolling upgrades:
control.sh --rolling-upgrade start [--yes]
control.bat --rolling-upgrade start [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
rolling-upgrade finish
Use this command to disable rolling upgrades:
control.sh --rolling-upgrade finish [--yes]
control.bat --rolling-upgrade finish [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
rolling-upgrade force
Use this command to force a rolling upgrade on your cluster.
control.sh --rolling-upgrade force [--yes]
control.bat --rolling-upgrade force [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
rolling-upgrade status
Use this command to check the current status of the rolling upgrade on your cluster:
control.sh --rolling-upgrade status
control.bat --rolling-upgrade status
Cluster IDs and Tags
You can set cluster ID and tag values using the control.sh|bat
script.
change-id
Use this command to change the cluster ID of in-memory clusters.
control.sh --change-id <newIdValue> [--yes]
control.bat --change-id <newIdValue> [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
newIdValue | The new ID for the cluster. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
change-tag
Use this command to change the cluster tag of in-memory clusters.
control.sh --change-tag <newTagValue> [--yes]
control.bat --change-tag <newTagValue> [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
newTagValue | The new tag for the cluster. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
Data Center Replication
You can use the control.sh|bat
script to track data center replication.
dr state
Use this command to get the current state of data center replication. With the --verbose option, you can get extended status information:
control.sh --dr state [--verbose]
control.bat --dr state [--verbose]
The command arguments are as follows.
Argument | Description |
---|---|
--verbose | Optional; if used, prints extended status information. |
dr topology
Use this command to print the cluster topology with the Data Center Replication details:
control.sh --dr topology [--sender-hubs] [--receiver-hubs] [--data-nodes] [--other-nodes]
control.bat --dr topology [--sender-hubs] [--receiver-hubs] [--data-nodes] [--other-nodes]
The command arguments are as follows.
Argument | Description |
---|---|
--sender-hubs | Optional; if used, displays information about sender nodes. |
--receiver-hubs | Optional; if used, displays information about receiver nodes. |
--data-nodes | Optional; if used, displays information about data nodes in the cluster. |
--other-nodes | Optional; if used, displays information about nodes that are not currently involved in the DR. |
dr pause
Use this command to pause data center replication on all sender nodes, or on a specific sender node.
control.sh --dr pause <remoteDataCenterId> [--yes]
control.bat --dr pause <remoteDataCenterId> [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
remoteDataCenterId | The ID of the remote data center. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
dr resume
Use this command to resume data center replication on all sender nodes, or on a specific sender node.
control.sh --dr resume <remoteDataCenterId> [--yes]
control.bat --dr resume <remoteDataCenterId> [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
remoteDataCenterId | The ID of the remote data center. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
dr full-state-transfer start
Use this command to execute a full state transfer on all caches in a cluster if the caches in the master cluster already have data:
control.sh --dr full-state-transfer start [--snapshot <snapshotId>] [--caches <cache1, ...>] [--sender-group <group1, ...>] [--data-centers <dcId, ...>] [--sync] [--yes]
control.bat --dr full-state-transfer start [--snapshot <snapshotId>] [--caches <cache1, ...>] [--sender-group <group1, ...>] [--data-centers <dcId, ...>] [--sync] [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
--snapshot | Optional; the ID of the snapshot to perform the state transfer on. |
--caches | Optional; the list of caches to transfer. |
--sender-group | Optional; the group of the sender caches. Possible values: <groupName>, ALL, DEFAULT, NONE. |
--data-centers | Optional; the IDs of the data centers involved in the transfer. |
--sync | Optional; if used, executes the full state transfer synchronously. Otherwise, the full state transfer is executed asynchronously. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
dr check-partition-counters
Use this command to check whether all partition counters are in order:
control.sh --dr check-partition-counters [--caches <cache1,...>]
control.bat --dr check-partition-counters [--caches <cache1,...>]
The command argument is as follows.
Argument | Description |
---|---|
--caches | Optional; the list of caches to check the partition counters for. |
dr rebuild-partition-tree
Use this command to schedule/run the maintenance task for rebuilding DR trees:
control.sh --dr rebuild-partition-tree [--caches <cacheName1,...,cacheNameN>] [--groups <groupName1,...,groupNameN>] [--yes]
control.bat --dr rebuild-partition-tree [--caches <cacheName1,...,cacheNameN>] [--groups <groupName1,...,groupNameN>] [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
--caches | A comma-separated list of caches to be affected by the command. |
--groups | A comma-separated list of cache groups to be affected by the command. |
--yes | If used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
dr repair-partition-counters
Use this command to repair partition counters during data center replication:
control.sh --dr repair-partition-counters [--caches <cache1,...>]
control.bat --dr repair-partition-counters [--caches <cache1,...>]
The command argument is as follows.
Argument | Description |
---|---|
--caches | Optional; the list of caches to repair the partition counters for. |
dr cleanup-partition-tree
Use this command to schedule and run a maintenance task to clean up DR trees:
control.sh --dr cleanup-partition-tree [--caches <cacheName1,...,cacheNameN>] [--groups <groupName1,...,groupNameN>] [--yes]
control.bat --dr cleanup-partition-tree [--caches <cacheName1,...,cacheNameN>] [--groups <groupName1,...,groupNameN>] [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
--caches | Optional; the list of caches to clean up. |
--groups | Optional; the list of cache groups to clean up. |
--yes | If used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
checkpointing force
Use this command to dump all data to persistent storage (for example, to make sure that all data is saved before deleting the source data):
control.sh --checkpointing force
control.bat --checkpointing force
dr full-state-transfer cancel
Use this command to cancel an active full state transfer at any time:
control.sh --dr full-state-transfer cancel <fullStateTransferUID> [--yes]
control.bat --dr full-state-transfer cancel <fullStateTransferUID> [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
fullStateTransferUID | The ID of the full state transfer to cancel. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
dr full-state-transfer list
Use this command to get a list of all full state transfers currently in progress.
control.sh --dr full-state-transfer list
control.bat --dr full-state-transfer list
dr node
Use this command to get information about a node’s status during data center replication:
control.sh --dr node <nodeId> [--config] [--metrics] [--clear-store] [--yes]
control.bat --dr node <nodeId> [--config] [--metrics] [--clear-store] [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
--config | Optional; displays the node configuration. |
--metrics | Optional; displays node metrics. |
--clear-store | Optional; cleans the store after the command execution. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
dr cache
Use this command to get specific cache details related to data center replication:
control.sh --dr cache <regExp> [--config] [--metrics] [--cache-filter ALL|SENDING|RECEIVING|PAUSED|ERROR] [--sender-group <groupName>|ALL|DEFAULT|NONE] [--action stop|start|full-state-transfer] [--yes]
control.bat --dr cache <regExp> [--config] [--metrics] [--cache-filter ALL|SENDING|RECEIVING|PAUSED|ERROR] [--sender-group <groupName>|ALL|DEFAULT|NONE] [--action stop|start|full-state-transfer] [--yes]
The command arguments are as follows.
Argument | Description |
---|---|
--config | Optional; displays the cache configuration. |
--metrics | Optional; displays node metrics. |
--cache-filter | Optional; the cache filter. Possible values: ALL, SENDING, RECEIVING, PAUSED, ERROR. |
--sender-group | Optional; the group of the sender caches. Possible values: <groupName>, ALL, DEFAULT, NONE. |
--action | Optional; the action to perform. Possible values: stop, start, full-state-transfer. |
--yes | Optional; if used, fast-forwards the command by automatically answering "yes" to all of the command’s prompts. |
Shutdown Policies
shutdown-policy
Use the shutdown-policy command to set the node shutdown policy to one of the following:
- GRACEFUL - to have nodes manage their ongoing tasks before they shut down
- IMMEDIATE - to have nodes shut down immediately
The command syntax is as follows:
control.sh --shutdown-policy [IMMEDIATE|GRACEFUL]
control.bat --shutdown-policy [IMMEDIATE|GRACEFUL]
Warmup Configuration
warm-up
Use the warm-up
command to disable cache warmup on the cluster:
control.sh --warm-up --stop
control.bat --warm-up --stop
Persistence Configuration
You can use the control.sh|bat
script to define the way the data files and caches are treated.
persistence
Use this command to print information about the potentially corrupted caches on a local node:
control.sh --persistence
control.bat --persistence
persistence clean
Use this command to clean persistent caches.
control.sh --persistence clean [corrupted]|[all]|[caches <cache1,cache2,...,cacheN>]
control.bat --persistence clean [corrupted]|[all]|[caches <cache1,cache2,...,cacheN>]
The command arguments are as follows.
Argument | Description |
---|---|
corrupted | Optional; cleans corrupted caches. |
all | Optional; cleans all caches. |
caches | Optional; cleans the caches included in a comma-delimited list. |
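For example, to clean only the corrupted persistent caches on the local node:
control.sh --persistence clean corrupted
control.bat --persistence clean corrupted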
persistence backup
Use this command to back up caches:
control.sh --persistence backup [corrupted]|[all]|[caches <cache1,cache2,...,cacheN>]
control.bat --persistence backup [corrupted]|[all]|[caches <cache1,cache2,...,cacheN>]
The command arguments are as follows.
Argument | Description |
---|---|
corrupted | Optional; backs up corrupted caches. |
all | Optional; backs up all caches. |
caches | Optional; backs up the caches included in a comma-delimited list. |
Defragmentation
When persistence is enabled on a disk, you may need to perform data defragmentation on that disk.
defragmentation schedule
Use this command to schedule the disk defragmentation:
control.sh --defragmentation schedule --nodes <consistentId1,...,consistentIdN> [--caches <cache1,cache2,...,cacheN>]
control.bat --defragmentation schedule --nodes <consistentId1,...,consistentIdN> [--caches <cache1,cache2,...,cacheN>]
The command arguments are as follows.
Argument | Description |
---|---|
--nodes | A comma-separated list of nodes to undergo defragmentation. |
--caches | Optional; a comma-separated list of caches to undergo defragmentation. |
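For example, to schedule defragmentation of a hypothetical cache on a single node:
control.sh --defragmentation schedule --nodes node1-consistent-id --caches myCache
control.bat --defragmentation schedule --nodes node1-consistent-id --caches myCache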
defragmentation cancel
Use this command to cancel a scheduled or active defragmentation if it interferes with other operations:
control.sh --defragmentation cancel
control.bat --defragmentation cancel
defragmentation status
Use this command to check the status of the current defragmentation process.
control.sh --defragmentation status
control.bat --defragmentation status
Binary Type Management
meta list
Use this command to list meta information:
control.sh --meta list
control.bat --meta list
meta details
Use this command to print information about the specified binary type:
- typeId=<ID>
- typeName=<name>
- fields=<fields_count>
- schemas=<schemas_count>
- isEnum=<bool>
The command syntax is as follows:
control.sh --meta details --typeId <ID>|--typeName <name>
control.bat --meta details --typeId <ID>|--typeName <name>
One of the following command arguments must be used.
Argument | Description |
---|---|
--typeId |
The ID of the binary type. |
--typeName |
The name of the binary type. |
Following is an example of the command output:
typeId=0x1FBFBC0C (532659212)
typeName=TypeName1
Fields:
name=fld3, type=long[], fieldId=0x2FFF95 (3145621)
name=fld2, type=double, fieldId=0x2FFF94 (3145620)
name=fld1, type=Date, fieldId=0x2FFF93 (3145619)
name=fld0, type=int, fieldId=0x2FFF92 (3145618)
Schemas:
schemaId=0x6C5CC179 (1818018169), fields=[fld0]
schemaId=0x70E46431 (1894016049), fields=[fld0, fld1, fld2, fld3]
meta remove
Use this command to remove metadata for the specified type from the cluster and save the removed metadata to a file.
The command syntax is as follows:
control.sh --meta remove --typeId <ID>|--typeName <name> [--out <file_name>]
control.bat --meta remove --typeId <ID>|--typeName <name> [--out <file_name>]
The command arguments are as follows.
Argument | Description |
---|---|
--typeId | The ID of the binary type. |
--typeName | The name of the binary type. |
--out | The name of the file to save the removed metadata to. If not specified, a default file name is used. |
meta update
Use this command to update cluster metadata from a specified file.
The command syntax is as follows:
control.sh --meta update --in <file_name>
control.bat --meta update --in <file_name>
The command arguments are as follows.
Argument | Description |
---|---|
--in | The name of the file to use as the source of the metadata. |
Index Rebuild
You can use the control.sh|bat
script to manage the process of rebuilding indexes for specified caches or cache groups.
cache indexes_force_rebuild
Use this command to schedule an index rebuild:
control.sh --cache indexes_force_rebuild --node-id <nodeId1,...,nodeIdN> --cache-names <cacheName1[index1,...,indexN],...,cacheNameN[index1]> --group-names <groupName1,...,groupNameN>
control.bat --cache indexes_force_rebuild --node-id <nodeId1,...,nodeIdN> --cache-names <cacheName1[index1,...,indexN],...,cacheNameN[index1]> --group-names <groupName1,...,groupNameN>
The command arguments are as follows:
Argument | Description |
---|---|
--node-id | A comma-separated list of nodes to rebuild indexes on. If not specified, the rebuild is scheduled on all nodes. |
--cache-names | A comma-separated list of cache names, optionally with indexes. If indexes are not specified, all indexes of the cache are scheduled for the rebuild operation. Can be used in addition to cache group names. |
--group-names | A comma-separated list of cache group names. Can be used in addition to cache names. |
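For example, to rebuild all indexes of a hypothetical cache on a single node:
control.sh --cache indexes_force_rebuild --node-id f2ea-5f56-11e8-9c2d-fa7a --cache-names myCache
control.bat --cache indexes_force_rebuild --node-id f2ea-5f56-11e8-9c2d-fa7a --cache-names myCache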