Chapter 21 Using MySQL AdminAPI

Table of Contents

21.1 MySQL AdminAPI
21.2 MySQL InnoDB Cluster
21.2.1 MySQL InnoDB Cluster Requirements
21.2.2 Deploying a Production InnoDB Cluster
21.2.3 Monitoring InnoDB Cluster
21.2.4 Working with Instances
21.2.5 Working with InnoDB Cluster
21.2.6 Configuring InnoDB Cluster
21.2.7 Troubleshooting InnoDB Cluster
21.2.8 Upgrading an InnoDB Cluster
21.2.9 Tagging the Metadata
21.2.10 InnoDB Cluster Tips
21.2.11 Known Limitations
21.3 MySQL InnoDB ReplicaSet
21.3.1 Introducing InnoDB ReplicaSet
21.3.2 Deploying InnoDB ReplicaSet
21.3.3 Adding Instances to a ReplicaSet
21.3.4 Adopting an Existing Replication Set Up
21.3.5 Working with InnoDB ReplicaSet
21.4 MySQL Router
21.4.1 Bootstrapping MySQL Router
21.4.2 Using AdminAPI and MySQL Router
21.5 AdminAPI MySQL Sandboxes

This chapter covers MySQL AdminAPI, provided with MySQL Shell, which enables you to administer MySQL instances, using them to create InnoDB Clusters and InnoDB ReplicaSets and to integrate MySQL Router.

21.1 MySQL AdminAPI

This section provides an overview of AdminAPI and what you need to know to get started.

MySQL Shell includes the AdminAPI, which is accessed through the dba global variable and its associated methods. The dba variable's methods provide operations which enable you to deploy, configure, and administer InnoDB Cluster and InnoDB ReplicaSet. For example, use the dba.createCluster() method to create an InnoDB Cluster. In addition, AdminAPI supports administration of some MySQL Router related tasks, such as creating and updating users that enable you to integrate your InnoDB Cluster and InnoDB ReplicaSet.

MySQL Shell provides two scripting language modes, JavaScript and Python, in addition to a native SQL mode. Throughout this guide MySQL Shell is used primarily in JavaScript mode. When MySQL Shell starts it is in JavaScript mode by default. Switch modes by issuing \js for JavaScript mode, or \py for Python mode. Before following the examples in this guide, ensure you are in JavaScript mode by issuing \js.
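
For example, a typical mode switch looks like the following (the exact status messages can vary between MySQL Shell versions):

mysql-js> \py
Switching to Python mode...
mysql-py> \js
Switching to JavaScript mode...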

Important

MySQL Shell enables you to connect to servers over a socket connection, but AdminAPI requires TCP connections to a server instance. Socket based connections are not supported in AdminAPI.

This section assumes familiarity with MySQL Shell, see MySQL Shell 8.0 (part of MySQL 8.0) for further information. MySQL Shell also provides online help for the AdminAPI. To list all available dba commands, use the dba.help() method. For online help on a specific method, use the general format object.help('methodname'). For example:

mysql-js> dba.help('getCluster')

Retrieves a cluster from the Metadata Store.

SYNTAX

  dba.getCluster([name][, options])

WHERE

  name: Parameter to specify the name of the cluster to be returned.
  options: Dictionary with additional options.

>trimmed for brevity<

In addition to this documentation, there is developer documentation for all AdminAPI methods in the MySQL Shell JavaScript API Reference or MySQL Shell Python API Reference, available from Connectors and APIs.

This section applies whether you are using InnoDB Cluster or InnoDB ReplicaSet, and covers the following topics:

Deployment Scenarios

AdminAPI supports the following deployment scenarios:

  • Production deployment: if you want to use a full production environment you need to configure the required number of machines and then deploy your server instances to the machines.

  • Sandbox deployment: if you want to test a deployment before committing to a full production deployment, the provided sandbox feature enables you to quickly set up a test environment on your local machine. Sandbox server instances are created with the required configuration and you can experiment to become familiar with the technologies employed.

    Important

    A sandbox deployment is not suitable for use in a full production environment.

Installing the Components

How you install the software components required by AdminAPI depends on the type of deployment you intend to use. For a production deployment, install the components on each machine. A production deployment uses multiple remote host machines running MySQL server instances, so you need to connect to each machine using a tool such as SSH or Windows remote desktop to carry out tasks such as installing components. For a sandbox deployment, install the components on a single machine. A sandbox deployment is local to a single machine, therefore the installation only needs to be done once, on the local machine. The following methods of installing are available:

Download and install the components (MySQL Server, MySQL Shell, and MySQL Router) using the installation documentation for each product.

Important

Always use the matching version of components, for example run MySQL Shell 8.0.24 to administer instances running MySQL 8.0.24 with MySQL Router 8.0.24.

Once you have installed the software required, choose to follow either Section 21.2, “MySQL InnoDB Cluster” or Section 21.3, “MySQL InnoDB ReplicaSet”.

Configuring Host Name

In a production deployment, the instances which you use run on separate machines, therefore each machine must have a unique host name and be able to resolve the host names of the other machines which run server instances. If this is not the case, you can:

  • configure each machine to map the IP of each other machine to a host name. See your operating system documentation for details. This is the recommended solution.

  • set up a DNS service

  • configure the report_host variable in the MySQL configuration of each instance to a suitable externally reachable address

AdminAPI supports using IP addresses instead of host names. From MySQL Shell 8.0.18, AdminAPI supports IPv6 addresses if the target MySQL Server version is higher than 8.0.13. When using MySQL Shell 8.0.18 or higher, if all cluster instances are running 8.0.14 or higher then you can use an IPv6 address, or a host name that resolves to an IPv6 address, in instance connection strings and with options such as localAddress, groupSeeds, and ipAllowlist. For more information on using IPv6 see Section 18.4.5, “Support For IPv6 And For Mixed IPv6 And IPv4 Groups”. Previous versions support IPv4 addresses only.

To verify whether the hostname of a MySQL server is correctly configured, execute the following query to see how the instance reports its own address to other servers and try to connect to that MySQL server from other hosts using the returned address:

SELECT coalesce(@@report_host, @@hostname);
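
If the returned address is not reachable from the other hosts, one of the options listed above is to set the report_host system variable in each instance's MySQL option file. A minimal sketch, assuming the instance should be reached by the other machines as ic-1.example.com:

[mysqld]
report_host=ic-1.example.com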

Specifying Instances

One of the core concepts of using AdminAPI is understanding connections to the MySQL instances which make up your InnoDB Cluster or InnoDB ReplicaSet. The requirements for connections to the instances when administering, and for the connections between the instances themselves, are:

  • only TCP/IP connections are supported; using Unix sockets or named pipes is not supported. InnoDB Cluster and InnoDB ReplicaSet are intended to be used in a local area network; running them over a wide area network is not recommended.

  • only classic MySQL protocol connections are supported; X Protocol is not supported.

    Tip

    Your applications can use X Protocol; this requirement applies only to administration operations performed using AdminAPI.

MySQL Shell enables you to work with various APIs, and supports specifying connections as explained in Section 4.2.5, “Connecting to the Server Using URI-Like Strings or Key-Value Pairs”. You can specify connections using either URI-like strings, or key-value pairs. The Additional Connection parameters are not supported in AdminAPI. This documentation demonstrates AdminAPI using URI-like connection strings. For example, to connect as the user myuser to the MySQL server instance at www.example.com, on port 3306 use the connection string:

myuser@www.example.com:3306

To use this connection string with an AdminAPI operation such as dba.configureInstance(), you need to ensure the connection string is interpreted as a string, for example by surrounding the connection string with either single (') or double (") quote marks. If you are using the JavaScript implementation of AdminAPI issue:

MySQL JS > dba.configureInstance('myuser@www.example.com:3306')
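
The same instance definition can also be supplied as key-value pairs, in the form of a JavaScript dictionary. For example, an equivalent call to the one above (the user name and host are the same illustrative values):

MySQL JS > dba.configureInstance({user: 'myuser', host: 'www.example.com', port: 3306})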

Assuming you are running MySQL Shell in the default interactive mode, you are prompted for your password. AdminAPI supports MySQL Shell's Pluggable Password Store, and once you store the password you used to connect to the instance you are no longer prompted for it.

Persisting Settings

The AdminAPI commands you use to work with an InnoDB Cluster, InnoDB ReplicaSet, and their server instances modify the configuration of the MySQL server on the instance. Depending on the way MySQL Shell is connected to an instance and the version of MySQL installed on the instance, these configuration changes can be persisted to the instance automatically. Persisting settings to the instance ensures that configuration changes are retained after the instance restarts; for background information see SET PERSIST. This is essential for reliable usage: for example, if settings are not persisted, an instance which has been added to a cluster does not rejoin the cluster after a restart, because its configuration changes are lost.

Instances which meet the following requirements support persisting configuration changes automatically:

  • the instance is running MySQL version 8.0.11 or later

  • persisted_globals_load is set to ON (the default)

  • the instance was not started with the --no-defaults option

Instances which do not meet these requirements do not support persisting configuration changes automatically, and when AdminAPI operations result in changes to the instance's settings to be persisted you receive warnings such as:

WARNING: On instance 'localhost:3320' membership change cannot be persisted since MySQL version 5.7.21
does not support the SET PERSIST command (MySQL version >= 8.0.5 required). Please use the
<Dba>.configureLocalInstance command locally to persist the changes.

When AdminAPI commands are issued against the MySQL instance which MySQL Shell is currently running on, in other words the local instance, MySQL Shell persists configuration changes directly to the instance. On local instances which support persisting configuration changes automatically, configuration changes are persisted to the instance's mysqld-auto.cnf file and the configuration change does not require any further steps. On local instances which do not support persisting configuration changes automatically, you need to make the changes locally, see Configuring Instances with dba.configureLocalInstance().

When run against a remote instance, in other words an instance other than the one which MySQL Shell is currently running on, if the instance supports persisting configuration changes automatically, the AdminAPI commands persist configuration changes to the instance's mysqld-auto.cnf option file. If a remote instance does not support persisting configuration changes automatically, the AdminAPI commands cannot automatically configure the instance's option file. This means that AdminAPI commands can read information from the instance, for example to display the current configuration, but changes to the configuration cannot be persisted to the instance's option file. In this case, you need to persist the changes locally, see Configuring Instances with dba.configureLocalInstance().
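
For example, a sketch of persisting the required settings locally on an instance that does not support SET PERSIST, run from a MySQL Shell session on that instance's own host (the account and option file path shown are assumptions for illustration):

mysql-js> dba.configureLocalInstance('icadmin@localhost:3306', {mycnfPath: '/etc/my.cnf'})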

Retrieving a Handler Object

When you are working with AdminAPI, you use a handler object which represents the InnoDB Cluster or InnoDB ReplicaSet. You assign this object to a variable, and then use the operations available to monitor and administer the InnoDB Cluster or InnoDB ReplicaSet. To be able to retrieve the handler object, you establish a connection to one of the instances which belong to the InnoDB Cluster or InnoDB ReplicaSet. For example, when you create a cluster using dba.createCluster(), the operation returns a Cluster object which can be assigned to a variable. You use this object to work with the cluster, for example to add instances or check the cluster's status. If you want to retrieve a cluster again at a later date, for example after restarting MySQL Shell, use the dba.getCluster([name],[options]) function. For example:

mysql-js> var cluster1 = dba.getCluster()

Similarly, use the dba.getReplicaSet() operation to retrieve an InnoDB ReplicaSet. For example:

mysql-js> var replicaset1 = dba.getReplicaSet()

If you do not specify a name then the default object is returned. By default MySQL Shell attempts to connect to the primary instance when you retrieve a handler. Set the connectToPrimary option to configure this behavior. If connectToPrimary is true and the active global MySQL Shell session is not to a primary instance, MySQL Shell queries for the primary instance. If there is no quorum in a cluster, the operation fails. If connectToPrimary is false, the retrieved object uses the active session, in other words the same instance as the MySQL Shell's current global session. If connectToPrimary is not specified, MySQL Shell treats connectToPrimary as true and attempts to connect to the primary, falling back to the behavior of connectToPrimary being false if the primary is not available.

To force connecting to a secondary, establish a connection to the secondary instance and use the connectToPrimary option by issuing:

mysql-js> shell.connect(secondary_member)
mysql-js> var cluster1 = dba.getCluster('testCluster', {connectToPrimary:false})
Tip

Remember that secondary instances have super_read_only=ON, so you cannot write changes to them.

Creating User Accounts for Administration

The user account used to administer an instance does not have to be the root account, however the user needs to be assigned full read and write privileges on the metadata tables in addition to full MySQL administrator privileges (SUPER, GRANT OPTION, CREATE, DROP and so on). In this procedure the user icadmin is shown in InnoDB Cluster examples, and rsadmin in InnoDB ReplicaSet examples.

Important

The user name and password of an administrator must be the same on all instances.

In version 8.0.20 and later, use the setupAdminAccount(user) operation to create or upgrade a MySQL user account with the necessary privileges to administer an InnoDB Cluster or InnoDB ReplicaSet. To use the setupAdminAccount() operation, you must be connected as a MySQL user with privileges to create users, for example as root. The setupAdminAccount(user) operation also enables you to upgrade an existing MySQL account with the necessary privileges before a dba.upgradeMetadata() operation.

The mandatory user argument is the name of the MySQL account you want to create or upgrade to administer the InnoDB Cluster or InnoDB ReplicaSet. The format of the user names accepted by the setupAdminAccount() operation follows the standard MySQL account name format, see Section 6.2.4, “Specifying Account Names”. The user argument format is username[@host], where host is optional; if it is not provided it defaults to the % wildcard character.

For example, to create a user named icadmin to administer an InnoDB Cluster assigned to the variable myCluster, issue:

mysql-js> myCluster.setupAdminAccount('icadmin')

Missing the password for new account icadmin@%. Please provide one.
Password for new account: ********
Confirm password: ********

Creating user icadmin@%.
Setting user password.
Account icadmin@% was successfully created.

If you already have an administration user, for example created with a version prior to 8.0.20, use the update option with the setupAdminAccount() operation to upgrade the privileges of the existing user. This is relevant during an upgrade, to ensure that the administration user is compatible. For example, to upgrade the user named icadmin issue:

mysql-js> myCluster.setupAdminAccount('icadmin', {'update':1})
Updating user icadmin@%.
Account icadmin@% was successfully updated.

In versions prior to 8.0.20, the preferred method to create users for administration is using the clusterAdmin option with the dba.configureInstance() operation. The clusterAdmin option must be used with a MySQL Shell connection based on a user which has the privileges to create users with suitable privileges; in this example the root user is used. For example:

mysql-js> dba.configureInstance('root@ic-1:3306', {clusterAdmin: "'icadmin'@'ic-1%'"});

The format of the user names accepted by the setupAdminAccount() operation and the clusterAdmin option follows the standard MySQL account name format, see Section 6.2.4, “Specifying Account Names”.

If only read operations are needed (such as for monitoring purposes), an account with more restricted privileges can be used. See Configuring Users for AdminAPI.

Verbose Logging

When working with a production deployment it can be useful to configure verbose logging for MySQL Shell. For example, the information in the log can help you to find and resolve any issues that might occur when you are preparing server instances to work as part of InnoDB Cluster. To start MySQL Shell with a verbose logging level, use the --log-level option:

shell> mysqlsh --log-level=DEBUG3

The DEBUG3 level is recommended, see --log-level for more information. When DEBUG3 is set the MySQL Shell log file contains lines such as Debug: execute_sql( ... ) which contain the SQL queries that are executed as part of each AdminAPI call. The log file generated by MySQL Shell is located in ~/.mysqlsh/mysqlsh.log for Unix-based systems; on Microsoft Windows systems it is located in %APPDATA%\MySQL\mysqlsh\mysqlsh.log. See MySQL Shell Logging and Debug for more information.

In addition to enabling the MySQL Shell log level, you can configure the amount of output AdminAPI provides in MySQL Shell after issuing each command. To set the AdminAPI output to its most verbose level, in MySQL Shell issue:

mysql-js> dba.verbose=2

This enables the maximum output from AdminAPI calls. The available levels of output are:

  • 0 or OFF is the default. This provides minimal output and is the recommended level when not troubleshooting.

  • 1 or ON adds verbose output from each call to the AdminAPI.

  • 2 adds debug output to the verbose output providing full information about what each call to AdminAPI executes.

MySQL Shell can optionally log the SQL statements used by AdminAPI operations (with the exception of sandbox operations), and can also display them in the terminal as they are executed. To configure MySQL Shell to do this, see Logging AdminAPI Operations.
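
For example, assuming your MySQL Shell version provides the dba.logSql shell option and the corresponding --dba-log-sql command-line option described in Logging AdminAPI Operations (treat these option names as assumptions if your version differs), SQL logging could be enabled with either of the following:

shell> mysqlsh --dba-log-sql=2
mysql-js> \option dba.logSql=2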

Finding the Primary

When you are working with a single-primary InnoDB Cluster or an InnoDB ReplicaSet, you need to connect to the primary instance for administration tasks so that configuration changes can be written to the metadata. To find the current primary you can:

  • use the --redirect-primary option at MySQL Shell start up to ensure that the target server is part of an InnoDB Cluster or InnoDB ReplicaSet. If the target instance is not the primary, MySQL Shell finds the primary and connects to it.

  • use the shell.connectToPrimary([instance, password]) operation (added in version 8.0.20), which checks whether the target instance belongs to a cluster or ReplicaSet. If so, MySQL Shell opens a new session to the primary, sets the active global MySQL Shell session to the established session and returns it.

    If an instance is not provided, the operation attempts to use the active global MySQL Shell session. If an instance is not provided and there is no active global MySQL Shell session, an exception is thrown. If the target instance does not belong to a cluster or ReplicaSet the operation fails with an error.

  • use the status operation, find the primary in the result, and manually connect to that instance.
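
For example, a minimal sketch using the shell.connectToPrimary() operation described above and then retrieving the cluster handler (the instance address is illustrative):

mysql-js> shell.connectToPrimary('icadmin@ic-2:3306')
mysql-js> var cluster = dba.getCluster()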

Scripting AdminAPI

You can automate cluster configuration with scripts, which can be run using MySQL Shell. For example:

shell> mysqlsh -f setup-innodb-cluster.js
Note

Any command line options specified after the script file name are passed to the script and not to MySQL Shell. You can access those options using the os.argv array in JavaScript, or the sys.argv array in Python. In both cases, the first option picked up in the array is the script name.

The contents of an example script file is shown here:

print('InnoDB Cluster sandbox set up\n');
print('==================================\n');
print('Setting up a MySQL InnoDB cluster with 3 MySQL Server sandbox instances,\n');
print('installed in ~/mysql-sandboxes, running on ports 3310, 3320 and 3330.\n\n');

var dbPass = shell.prompt('Please enter a password for the MySQL root account: ', {type:"password"});

try {
   print('\nDeploying the sandbox instances.');
   dba.deploySandboxInstance(3310, {password: dbPass});
   print('.');
   dba.deploySandboxInstance(3320, {password: dbPass});
   print('.');
   dba.deploySandboxInstance(3330, {password: dbPass});
   print('.\nSandbox instances deployed successfully.\n\n');

   print('Setting up InnoDB cluster...\n');
   shell.connect('root@localhost:3310', dbPass);

   var cluster = dba.createCluster("prodCluster");

   print('Adding instances to the cluster.');
   cluster.addInstance({user: "root", host: "localhost", port: 3320, password: dbPass});
   print('.');
   cluster.addInstance({user: "root", host: "localhost", port: 3330, password: dbPass});
   print('.\nInstances successfully added to the cluster.');

   print('\nInnoDB cluster deployed successfully.\n');
} catch(e) {
   print('\nThe InnoDB cluster could not be created.\n\nError: ' +
         e.message + '\n');
}

21.2 MySQL InnoDB Cluster

MySQL InnoDB Cluster provides a complete high availability solution for MySQL. By using the AdminAPI included with MySQL Shell you can easily configure and administer a group of at least three MySQL server instances to function as an InnoDB Cluster. In this procedure the host name ic-number is used in examples.

Each MySQL server instance runs MySQL Group Replication, which provides the mechanism to replicate data within InnoDB Clusters, with built-in failover. AdminAPI removes the need to work directly with Group Replication in InnoDB Clusters, but for more information see Chapter 18, Group Replication, which explains the details. MySQL Router can automatically configure itself based on the cluster you deploy, connecting client applications transparently to the server instances. In the event of an unexpected failure of a server instance the cluster reconfigures automatically.

In the default single-primary mode, an InnoDB Cluster has a single read-write server instance, the primary. Multiple secondary server instances are replicas of the primary. If the primary fails, a secondary is automatically promoted to the role of primary. MySQL Router detects this and forwards client applications to the new primary. Advanced users can also configure a cluster to have multiple primaries.

Important

InnoDB Cluster does not provide support for MySQL NDB Cluster. NDB Cluster depends on the NDB storage engine as well as a number of programs specific to NDB Cluster which are not furnished with MySQL Server 8.0; NDB is available only as part of the MySQL NDB Cluster distribution. In addition, the MySQL server binary (mysqld) that is supplied with MySQL Server 8.0 cannot be used with NDB Cluster. For more information about MySQL NDB Cluster, see Chapter 22, MySQL NDB Cluster 8.0. Section 22.1.6, “MySQL Server Using InnoDB Compared with NDB Cluster”, provides information about the differences between the InnoDB and NDB storage engines.

The following diagram shows an overview of how these technologies work together:

Figure 21.1 InnoDB Cluster overview

Three MySQL servers are grouped together as a high availability cluster. One of the servers is the read/write primary instance, and the other two are read-only secondary instances. Group Replication is used to replicate data from the primary instance to the secondary instances. MySQL Router connects client applications (in this example, a MySQL Connector) to the primary instance.

21.2.1 MySQL InnoDB Cluster Requirements

Before installing a production deployment of InnoDB Cluster, ensure that the server instances you intend to use meet the following requirements.

  • InnoDB Cluster uses Group Replication and therefore your server instances must meet the same requirements. See Section 18.9.1, “Group Replication Requirements”. AdminAPI provides the dba.checkInstanceConfiguration() method to verify that an instance meets the Group Replication requirements, and the dba.configureInstance() method to configure an instance to meet the requirements.

    Note

    When using a sandbox deployment the instances are configured to meet these requirements automatically.

  • Group Replication members can contain tables using a storage engine other than InnoDB, for example MyISAM. Such tables cannot be written to by Group Replication, and therefore cannot be written to when using InnoDB Cluster. To be able to write to such tables with InnoDB Cluster, convert them to InnoDB before using the instance in an InnoDB Cluster, as shown in the sketch following this list.

  • The Performance Schema must be enabled on any instance which you want to use with InnoDB Cluster.

  • The provisioning scripts that MySQL Shell uses to configure servers for use in InnoDB Cluster require access to Python. On Windows MySQL Shell includes Python and no user configuration is required. On Unix Python must be found as part of the shell environment. To check that your system has Python configured correctly issue:

    $ /usr/bin/env python

    If a Python interpreter starts, no further action is required. If the previous command fails, create a soft link between /usr/bin/python and your chosen Python binary. For more information, see Supported Languages.

  • From version 8.0.17, instances must use a unique server_id within an InnoDB Cluster. When you use the Cluster.addInstance(instance) operation, if the server_id of instance is already used by an instance in the cluster then the operation fails with an error.

  • During the process of configuring an instance for InnoDB Cluster, the majority of the system variables required for using an instance are configured. But AdminAPI does not configure the transaction_isolation system variable, which means that it defaults to REPEATABLE READ. This does not impact a single-primary cluster, but if you are using a multi-primary cluster then unless you rely on REPEATABLE READ semantics in your applications, we recommend using the READ COMMITTED isolation level. See Section 18.9.2, “Group Replication Limitations”.
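
As a sketch of the table conversion mentioned earlier in this list, a non-InnoDB table could be converted from MySQL Shell's JavaScript mode before the instance joins a cluster (mydb.t1 is a hypothetical table name):

mysql-js> session.runSql("ALTER TABLE mydb.t1 ENGINE=InnoDB")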

21.2.2 Deploying a Production InnoDB Cluster

When working in a production environment, the MySQL server instances which make up an InnoDB Cluster run on multiple host machines as part of a network rather than on a single machine as described in Section 21.5, “AdminAPI MySQL Sandboxes”. Before proceeding with these instructions you must install the required software on each machine that you intend to add as a server instance to your cluster, see Installing the Components.

The following diagram illustrates the scenario you work with in this section:

Figure 21.2 Production Deployment

Three MySQL servers are grouped together as a production InnoDB cluster. One of the servers is the primary instance, and the other two are secondary instances. The IP address for the primary server is 139.59.177.10, and the IP addresses for the two secondary instances are 139.59.177.11 and 139.59.177.12. MySQL Router connects a client application to the primary instance. The admin capability in MySQL Shell interacts directly with the production InnoDB cluster.

Important

Unlike a sandbox deployment, where all instances are deployed locally to one machine which AdminAPI has local file access to and can persist configuration changes, for a production deployment you must persist any configuration changes on the instance. How you do this depends on the version of MySQL running on the instance, see Persisting Settings.

To pass a server's connection information to AdminAPI, use URI-like connection strings or a data dictionary; see Section 4.2.5, “Connecting to the Server Using URI-Like Strings or Key-Value Pairs”. In this documentation, URI-like strings are shown.

This section assumes that you have:

21.2.2.1 Deploying a New Production InnoDB Cluster

The following sections describe how to deploy a new production InnoDB Cluster.

Configuring Production Instances

AdminAPI provides the dba.configureInstance() function that checks if an instance is suitably configured for InnoDB Cluster usage, and configures the instance if it finds any settings which are not compatible with InnoDB Cluster. You run the dba.configureInstance() command against an instance and it checks all of the settings required to enable the instance to be used for InnoDB Cluster usage. If the instance does not require configuration changes, there is no need to modify the configuration of the instance, and the dba.configureInstance() command output confirms that the instance is ready for InnoDB Cluster usage. If any changes are required to make the instance compatible with InnoDB Cluster, a report of the incompatible settings is displayed, and you can choose to let the command make the changes to the instance's option file. Depending on the way MySQL Shell is connected to the instance, and the version of MySQL running on the instance, you can make these changes permanent by persisting them to a remote instance's option file, see Persisting Settings. Instances which do not support persisting configuration changes automatically require that you configure the instance locally, see Configuring Instances with dba.configureLocalInstance(). Alternatively you can make the changes to the instance's option file manually, see Section 4.2.2.2, “Using Option Files” for more information. Regardless of the way you make the configuration changes, you might have to restart MySQL to ensure the configuration changes are detected.

The syntax of the dba.configureInstance() command is:

dba.configureInstance([instance][, options])

where instance is an instance definition, and options is a data dictionary with additional options to configure the operation. The command returns a descriptive text message about the operation's result.

The instance definition is the connection data for the instance, see Section 4.2.5, “Connecting to the Server Using URI-Like Strings or Key-Value Pairs”. If the target instance already belongs to an InnoDB Cluster an error is generated and the process fails.

The options dictionary can contain the following:

  • mycnfPath - the path to the MySQL option file of the instance.

  • outputMycnfPath - alternative output path to write the MySQL option file of the instance.

  • password - the password to be used by the connection.

  • clusterAdmin - the name of an InnoDB Cluster administrator user to be created. The supported format is the standard MySQL account name format. Both identifiers and strings are supported for the user name and host name. By default, if unquoted, the input is assumed to be a string. See Creating User Accounts for Administration.

  • clusterAdminPassword - the password for the InnoDB Cluster administrator account being created using clusterAdmin. Although you can specify the password using this option, doing so is a potential security risk. If you do not specify this option, but do specify the clusterAdmin option, you are prompted for the password at the interactive prompt.

  • clearReadOnly - a boolean value used to confirm that super_read_only should be set to off, see Super Read-only and Instances. This option is deprecated and scheduled for removal in a future version.

  • interactive - a boolean value used to disable the interactive wizards in the command execution, so that prompts are not provided to the user and confirmation prompts are not shown.

  • restart - a boolean value used to indicate that a remote restart of the target instance should be performed to finalize the operation.

Although the connection password can be contained in the instance definition, this is insecure and not recommended. Use the MySQL Shell Pluggable Password Store to store instance passwords securely.
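
For example, a sketch that configures a remote instance, creates a dedicated administration account, and allows the operation to restart the instance if a restart is required to finalize the changes (the account and host names are illustrative):

mysql-js> dba.configureInstance('root@ic-1:3306', {clusterAdmin: 'icadmin', restart: true})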

Once dba.configureInstance() is issued against an instance, the command checks if the instance's settings are suitable for InnoDB Cluster usage. A report is displayed which shows the settings required by InnoDB Cluster. If the instance does not require any changes to its settings you can use it in an InnoDB Cluster, and can proceed to Creating the Cluster. If the instance's settings are not valid for InnoDB Cluster usage the dba.configureInstance() command displays the settings which require modification. Before configuring the instance you are prompted to confirm the changes shown in a table with the following information:

  • Variable - the invalid configuration variable.

  • Current Value - the current value for the invalid configuration variable.

  • Required Value - the required value for the configuration variable.

How you proceed depends on whether the instance supports persisting settings, see Persisting Settings. When dba.configureInstance() is issued against the MySQL instance which MySQL Shell is currently running on, in other words the local instance, it attempts to automatically configure the instance. When dba.configureInstance() is issued against a remote instance, if the instance supports persisting configuration changes automatically, you can choose to do this. If a remote instance does not support persisting the changes to configure it for InnoDB Cluster usage, you have to configure the instance locally. See Configuring Instances with dba.configureLocalInstance().

In general, a restart of the instance is not required after dba.configureInstance() configures the option file, but for some specific settings a restart might be required. This information is shown in the report generated after issuing dba.configureInstance(). If the instance supports the RESTART statement, MySQL Shell can shutdown and then start the instance. This ensures that the changes made to the instance's option file are detected by mysqld. For more information see RESTART.

Note

After executing a RESTART statement, the current connection to the instance is lost. If auto-reconnect is enabled, the connection is reestablished after the server restarts. Otherwise, the connection must be reestablished manually.

The dba.configureInstance() method verifies that a suitable user is available for cluster usage, which is used for connections between members of the cluster, see Creating User Accounts for Administration.

If you do not specify a user to administer the cluster, in interactive mode a wizard enables you to choose one of the following options:

  • enable remote connections for the root user, not recommended in a production environment

  • create a new user

  • no automatic configuration, in which case you need to manually create the user

Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.

Creating the Cluster

Once you have prepared your instances, use the dba.createCluster() function to create the cluster, using the instance which MySQL Shell is connected to as the seed instance for the cluster. The seed instance is replicated to the other instances that you add to the cluster, making them replicas of the seed instance. In this procedure the ic-1 instance is used as the seed. When you issue dba.createCluster(name) MySQL Shell creates a classic MySQL protocol session to the server instance connected to the MySQL Shell's current global session. For example, to create a cluster called testCluster and assign the returned cluster to a variable called cluster:

mysql-js> var cluster = dba.createCluster('testCluster')
Validating instance at icadmin@ic-1:3306...
This instance reports its own address as ic-1
Instance configuration is suitable.
Creating InnoDB cluster 'testCluster' on 'icadmin@ic-1:3306'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

This pattern of assigning the returned cluster to a variable enables you to then execute further operations against the cluster using the Cluster object's methods. The returned Cluster object uses a new session, independent from the MySQL Shell's global session. This ensures that if you change the MySQL Shell global session, the Cluster object maintains its session to the instance.

To be able to administer a cluster, you must ensure that you have a suitable user which has the required privileges. The recommended approach is to create an administration user. If you did not create an administration user when configuring your instances, use the Cluster.setupAdminAccount() operation. For example to create a user named icadmin that can administer the InnoDB Cluster assigned to the variable cluster, issue:

mysql-js> cluster.setupAdminAccount('icadmin')

See Configuring Users for AdminAPI for more information on cluster administration users.

The dba.createCluster() operation supports MySQL Shell's interactive option. When interactive is on, prompts appear in the following situations:

  • when run on an instance that belongs to a cluster and the adoptFromGr option is false, you are asked if you want to adopt an existing cluster

  • when the force option is not used (not set to true), you are asked to confirm the creation of a multi-primary cluster

Note

If you encounter an error related to metadata being inaccessible you might have the loopback network interface configured. For correct InnoDB Cluster usage disable the loopback interface.

To check the cluster has been created, use the cluster instance's status() function. See Checking a cluster's Status with Cluster.status().

Tip

Once server instances belong to a cluster it is important to only administer them using MySQL Shell and AdminAPI. Attempting to manually change the configuration of Group Replication on an instance once it has been added to a cluster is not supported. Similarly, modifying server variables critical to InnoDB Cluster, such as server_uuid, after an instance is configured using AdminAPI is not supported.

When you create a cluster using MySQL Shell 8.0.14 and later, you can set the timeout before instances are expelled from the cluster, for example when they become unreachable. Pass the expelTimeout option to the dba.createCluster() operation, which configures the group_replication_member_expel_timeout variable on the seed instance. The expelTimeout option can take an integer value in the range of 0 to 3600. All instances running MySQL server 8.0.13 and later which are added to a cluster with expelTimeout configured are automatically configured to have the same expelTimeout value as configured on the seed instance.
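
For example, a sketch creating a cluster where members that are unreachable for 30 seconds are expelled (the cluster name and timeout value are illustrative):

mysql-js> var cluster = dba.createCluster('testCluster', {expelTimeout: 30})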

For information on the other options which you can pass to dba.createCluster(), see Section 21.2.5, “Working with InnoDB Cluster”.

Adding Instances to a Cluster

Use the Cluster.addInstance(instance) function to add more instances to the cluster, where instance is the connection information for a configured instance, see Configuring Production Instances. From version 8.0.17, Group Replication implements compatibility policies which consider the patch version of the instances, and the Cluster.addInstance() operation detects this and in the event of an incompatibility the operation terminates with an error. See Checking the MySQL Version on Instances and Section 18.7.1, “Combining Different Member Versions in a Group”.

You need a minimum of three instances in the cluster to make it tolerant to the failure of one instance. Adding further instances increases the tolerance to failure of an instance. To add an instance to the cluster issue:

mysql-js> cluster.addInstance('icadmin@ic-2:3306')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Please provide the password for 'icadmin@ic-2:3306': ********
Adding instance to the cluster ...
Validating instance at ic-2:3306...
This instance reports its own address as ic-2
Instance configuration is suitable.
The instance 'icadmin@ic-2:3306' was successfully added to the cluster.

When a new instance is added to the cluster, the local address for this instance is automatically added to the group_replication_group_seeds variable on all online cluster instances in order to allow them to use the new instance to rejoin the group, if needed.

Note

The instances listed in group_replication_group_seeds are used according to the order in which they appear in the list. This ensures user specified settings are used first and preferred. See Customizing InnoDB Clusters for more information.

If you are using MySQL 8.0.17 or later you can choose how the instance recovers the transactions it requires to synchronize with the cluster. Only when the joining instance has recovered all of the transactions previously processed by the cluster can it then join as an online instance and begin processing transactions. For more information, see Section 21.2.2.2, “Using MySQL Clone with InnoDB Cluster”.

Also in 8.0.17 and later, you can configure how Cluster.addInstance() behaves, letting recovery operations proceed in the background or monitoring different levels of progress in MySQL Shell.

Depending on which option you chose to recover the instance from the cluster, you see different output in MySQL Shell. Suppose that you are adding the instance ic-2 to the cluster, and ic-1 is the seed or donor.

  • When you use MySQL Clone to recover an instance from the cluster, the output looks like:

    Validating instance at ic-2:3306...
    This instance reports its own address as ic-2:3306
    Instance configuration is suitable.
    A new instance will be added to the InnoDB cluster. Depending on the amount of
    data on the cluster this might take from a few seconds to several hours.
    Adding instance to the cluster...
    Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
    Clone based state recovery is now in progress.
    NOTE: A server restart is expected to happen as part of the clone process. If the
    server does not support the RESTART command or does not come back after a
    while, you may need to manually start it back.
    * Waiting for clone to finish...
    NOTE: ic-2:3306 is being cloned from ic-1:3306
    ** Stage DROP DATA: Completed
    ** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
    NOTE: ic-2:3306 is shutting down...
    * Waiting for server restart... ready
    * ic-2:3306 has restarted, waiting for clone to finish...
    ** Stage RESTART: Completed
    * Clone process has finished: 2.18 GB transferred in 7 sec (311.26 MB/s)
    State recovery already finished for 'ic-2:3306'
    The instance 'ic-2:3306' was successfully added to the cluster.
    

    The warnings about server restart should be observed; you might have to manually restart an instance. See Section 13.7.8.8, “RESTART Statement”.

  • When you use incremental recovery to recover an instance from the cluster, the output looks like:

    Incremental distributed state recovery is now in progress.
    * Waiting for incremental recovery to finish...
    NOTE: 'ic-2:3306' is being recovered from 'ic-1:3306'
    * Distributed recovery has finished
    

To cancel the monitoring of the recovery phase, issue CONTROL+C. This stops the monitoring but the recovery process continues in the background. The waitRecovery integer option can be used with the Cluster.addInstance() operation to control the behavior of the command regarding the recovery phase. The following values are accepted:

  • 0: do not wait and let the recovery process finish in the background;

  • 1: wait for the recovery process to finish;

  • 2: wait for the recovery process to finish and show detailed static progress information;

  • 3: wait for the recovery process to finish and show detailed dynamic progress information (progress bars).

If the standard output on which MySQL Shell is running refers to a terminal, the waitRecovery option defaults to 3; otherwise, it defaults to 2. See Monitoring Recovery Operations.
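
For example, to add an instance and let the recovery process finish in the background without monitoring it (the instance address is illustrative):

mysql-js> cluster.addInstance('icadmin@ic-3:3306', {waitRecovery: 0})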

To verify the instance has been added, use the cluster instance's status() function. For example this is the status output of a sandbox cluster after adding a second instance:

mysql-js> cluster.status()
{
    "clusterName": "testCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "ic-1:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "ic-1:3306": {
                "address": "ic-1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "ic-2:3306": {
                "address": "ic-2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://icadmin@ic-1:3306"
}

How you proceed depends on whether the instance is local or remote to the instance MySQL Shell is running on, and whether the instance supports persisting configuration changes automatically, see Persisting Settings. If the instance supports persisting configuration changes automatically, you do not need to persist the settings manually and can either add more instances or continue to the next step. If the instance does not support persisting configuration changes automatically, you have to configure the instance locally. See Configuring Instances with dba.configureLocalInstance(). This is essential to ensure that instances rejoin the cluster in the event of leaving the cluster.

Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.

Once you have your cluster deployed you can configure MySQL Router to provide high availability, see Section 21.4, “MySQL Router”.

User Accounts Created by InnoDB Cluster

As part of using Group Replication, InnoDB Cluster creates internal recovery users which enable connections between the servers in the cluster. These users are internal to the cluster, and the user name of the generated users follows a naming scheme of mysql_innodb_cluster_server_id@%, where server_id is unique to the instance. In versions earlier than 8.0.17 the user name of the generated users followed a naming scheme of mysql_innodb_cluster_r[10_numbers]. The hostname used for the internal users depends on whether the ipAllowlist option has been configured. If ipAllowlist is not configured, it defaults to AUTOMATIC and the internal users are created using both the wildcard % character and localhost for the hostname value. When ipAllowlist has been configured, for each address in the ipAllowlist list an internal user is created. For more information, see Creating an Allowlist of Servers.

Each internal user has a randomly generated password. From version 8.0.18, AdminAPI enables you to change the generated password for internal users. See Resetting Recovery Account Passwords. The randomly generated users are given the following grants:

GRANT REPLICATION SLAVE ON *.* to internal_user;

The internal user accounts are created on the seed instance and then replicated to the other instances in the cluster. The internal users are:

  • generated when creating a new cluster by issuing dba.createCluster()

  • generated when adding a new instance to the cluster by issuing Cluster.addInstance().

In addition, the Cluster.rejoinInstance() operation can also result in a new internal user being generated when the ipAllowlist option is used to specify a hostname. For example by issuing:

Cluster.rejoinInstance({ipAllowlist: "192.168.1.1/22"});

all previously existing internal users are removed and a new internal user is created, taking into account the ipAllowlist value used.

For more information on the internal users required by Group Replication, see Section 18.2.1.3, “User Credentials For Distributed Recovery”.

Configuring InnoDB Cluster Ports

Instances that belong to a cluster use different ports for different types of communication. In addition to the default port at 3306, which is used for client connections over classic MySQL protocol, and the mysqlx_port, which defaults to 33060 and is used for X Protocol client connections, there is also a port for internal connections between the instances in the cluster which is not used for client connections. This port is configured by the localAddress option, which configures the group_replication_local_address system variable, and this port must be open so that the instances in the cluster can communicate with each other. For example, if your firewall is blocking this port then the instances cannot communicate with each other, and the cluster cannot function. Similarly, if your instances are using SELinux, you need to ensure that all of the required ports used by InnoDB Cluster are open so that the instances can communicate with each other. See Section 6.7.5.2, “Setting the TCP Port Context for MySQL Features” and MySQL Shell Ports.

When you create a cluster or add instances to a cluster, by default the localAddress port is calculated by multiplying the target instance's port value by 10 and then adding one to the result. For example, when the port of the target instance is the default value of 3306, the calculated localAddress port is 33061. You should ensure that port numbers used by your cluster instances are compatible with the way localAddress is calculated. For example, if the server instance being used to create a cluster has a port number higher than 6553, the dba.createCluster() operation fails because the calculated localAddress port number exceeds the maximum valid port which is 65535. To avoid this situation either use a lower port value on the instances you use for InnoDB Cluster, or manually assign the localAddress value, for example:

mysql-js> dba.createCluster('testCluster', {'localAddress':'ic-1:33061'})

21.2.2.2 Using MySQL Clone with InnoDB Cluster

In MySQL 8.0.17, InnoDB Cluster integrates the MySQL Clone plugin to provide automatic provisioning of joining instances. The process of retrieving the cluster's data so that the instance can synchronize with the cluster is called distributed recovery. When an instance needs to recover a cluster's transactions we distinguish between the donor, which is the cluster instance that provides the data, and the receiver, which is the instance that receives the data from the donor. In previous versions, Group Replication provided only asynchronous replication to recover the transactions required for the joining instance to synchronize with the cluster so that it could join the cluster. For a cluster with a large amount of previously processed transactions it could take a long time for the new instance to recover all of the transactions before being able to join the cluster. Or a cluster which had purged GTIDs, for example as part of regular maintenance, could be missing some of the transactions required to recover the new instance. In such cases the only alternative was to manually provision the instance using tools such as MySQL Enterprise Backup, as shown in Section 18.4.6, “Using MySQL Enterprise Backup with Group Replication”.

MySQL Clone provides an alternative way for an instance to recover the transactions required to synchronize with a cluster. Instead of relying on asynchronous replication to recover the transactions, MySQL Clone takes a snapshot of the data on the donor instance and then transfers the snapshot to the receiver.

Warning

All previous data in the receiver is destroyed during a clone operation. However, all MySQL settings that are not stored in tables are maintained.

Once a clone operation has transferred the snapshot to the receiver, if the cluster has processed transactions while the snapshot was being transferred, asynchronous replication is used to recover any required data for the receiver to be synchronized with the cluster. This can be much more efficient than the instance recovering all of the transactions using asynchronous replication, and avoids any issues caused by purged GTIDs, enabling you to quickly provision new instances for InnoDB Cluster. For more information, see Section 5.6.7, “The Clone Plugin” and Section 18.4.3.2, “Cloning for Distributed Recovery”.

In contrast to using MySQL Clone, incremental recovery is the process where an instance joining a cluster uses only asynchronous replication to recover the required transactions from the cluster. When an InnoDB Cluster is configured to use MySQL Clone, instances which join the cluster use either MySQL Clone or incremental recovery to recover the cluster's transactions. By default, the cluster automatically chooses the most suitable method, but you can optionally configure this behavior, for example to force cloning, which replaces any transactions already processed by the joining instance. When you are using MySQL Shell in interactive mode (the default), if the cluster cannot determine that it is safe to proceed with recovery, it provides an interactive prompt. This section describes the different options you are offered, and the different scenarios which influence which of the options you can choose.

In addition, the output of Cluster.status() for members in RECOVERING state includes recovery progress information to enable you to easily monitor recovery operations, whether they are using MySQL Clone or incremental recovery. InnoDB Cluster provides additional information about instances using MySQL Clone in the output of Cluster.status().

21.2.2.2.1 Working with a Cluster that uses MySQL Clone

An InnoDB Cluster that uses MySQL Clone provides the following additional behavior.

dba.createCluster() and MySQL Clone

From version 8.0.17, by default, when a new cluster is created on an instance where the MySQL Clone plugin is available, the plugin is automatically installed and the cluster is configured to support cloning. The InnoDB Cluster recovery accounts are created with the required BACKUP_ADMIN privilege.

Set the disableClone Boolean option to true to disable MySQL Clone for the cluster. In this case a metadata entry is added for this configuration and the MySQL Clone plugin is uninstalled if it is installed. You can set the disableClone option when you issue dba.createCluster(), or at any time when the cluster is running using Cluster.setOption().
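
For example, a sketch that disables MySQL Clone on a running cluster using Cluster.setOption():

mysql-js> cluster.setOption('disableClone', true)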

Cluster.addInstance(instance) and MySQL Clone

MySQL Clone can be used for a joining instance if the new instance is running MySQL 8.0.17 or later, and there is at least one donor in the cluster (included in the group_replication_group_seeds list) running MySQL 8.0.17 or later. A cluster using MySQL Clone follows the behavior documented at Adding Instances to a Cluster, with the addition of a possible choice of how to transfer the data required to recover the instance from the cluster. How Cluster.addInstance(instance) behaves depends on the following factors:

  • Whether MySQL Clone is supported.

  • Whether incremental recovery is possible or not, which depends on the availability of binary logs. For example, if a donor instance has all binary logs required (GTID_PURGED is empty) then incremental recovery is possible. If no cluster instance has all binary logs required then incremental recovery is not possible.

  • Whether incremental recovery is appropriate or not. Even though incremental recovery might be possible, because it has the potential to clash with data already on the instance, the GTID sets on the donor and receiver are checked to make sure that incremental recovery is appropriate. The following are possible results of the comparison:

    • New: the receiver has an empty GTID_EXECUTED GTID set

    • Identical: the receiver has a GTID set identical to the donor’s GTID set

    • Recoverable: the receiver has a GTID set that is missing transactions but these can be recovered from the donor

    • Irrecoverable: the donor has a GTID set that is missing transactions, possibly they have been purged

    • Diverged: the GTID sets of the donor and receiver have diverged

    When the result of the comparison is determined to be Identical or Recoverable, incremental recovery is considered appropriate. When the result of the comparison is determined to be Irrecoverable or Diverged, incremental recovery is not considered appropriate.

    For an instance considered New, incremental recovery cannot be considered appropriate because it is impossible to determine if the binary logs have been purged, or even if the GTID_PURGED and GTID_EXECUTED variables were reset. Alternatively, it could be that the server had already processed transactions before binary logs and GTIDs were enabled. Therefore in interactive mode, you have to confirm that you want to use incremental recovery.

  • The state of the gtidSetIsComplete option. If you are sure a cluster has been created with a complete GTID set, and therefore instances with empty GTID sets can be added to it without extra confirmations, set the cluster level gtidSetIsComplete Boolean option to true (a short sketch is shown below).

    Warning

    Setting the gtidSetIsComplete option to true means that joining servers are recovered regardless of any data they contain, use with caution. If you try to add an instance which has applied transactions you risk data corruption.
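
    As a minimal sketch, assuming a hypothetical cluster name, the option can be set when the cluster is created:

    // Declare that the cluster's GTID set is complete, so empty instances
    // can be added without extra confirmation prompts
    var cluster = dba.createCluster('testCluster', {gtidSetIsComplete: true});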

The combination of these factors influence how instances join the cluster when you issue Cluster.addInstance(). The recoveryMethod option is set to auto by default, which means that in MySQL Shell's interactive mode, the cluster selects the best way to recover the instance from the cluster, and the prompts advise you how to proceed. In other words the cluster recommends using MySQL Clone or incremental recovery based on the best approach and what the server supports. If you are not using interactive mode and are scripting MySQL Shell, you must set recoveryMethod to the type of recovery you want to use - either clone or incremental. This section explains the different possible scenarios.
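
For example, when scripting MySQL Shell you might select the recovery method explicitly; the instance addresses below are placeholders:

// Overwrite the joining instance with a snapshot from a donor (clone-based recovery)
cluster.addInstance('icadmin@ic-4:3306', {recoveryMethod: 'clone'});

// Recover the joining instance through asynchronous replication instead
cluster.addInstance('icadmin@ic-5:3306', {recoveryMethod: 'incremental'});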

When you are using MySQL Shell in interactive mode, the main prompt with all of the possible options for adding the instance is:

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):

Depending on the factors mentioned, you might not be offered all of these options. The scenarios described later in this section explain which options you are offered. The options offered by this prompt are:

  • Clone: choose this option to clone the donor to the instance which you are adding to the cluster, deleting any transactions the instance contains. The MySQL Clone plugin is automatically installed. The InnoDB Cluster recovery accounts are created with the required BACKUP_ADMIN privilege. Assuming you are adding an instance which is either empty (has not processed any transactions) or which contains transactions you do not want to retain, select the Clone option. The cluster then uses MySQL Clone to completely overwrite the joining instance with a snapshot from a donor cluster member. To use this method by default and disable this prompt, set the cluster's recoveryMethod option to clone.

  • Incremental recovery: choose this option to use incremental recovery to recover all transactions processed by the cluster to the joining instance using asynchronous replication. Incremental recovery is appropriate if you are sure all updates ever processed by the cluster were done with GTIDs enabled, there are no purged transactions, and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the recoveryMethod option to incremental.

The combination of factors mentioned influences which of these options is available at the prompt as follows:

Note

If the group_replication_clone_threshold system variable has been manually changed outside of AdminAPI, then the cluster might decide to use Clone recovery instead of following these scenarios.

  • In a scenario where

    • incremental recovery is possible

    • incremental recovery is not appropriate

    • Clone is supported

    you can choose between any of the options. It is recommended that you use MySQL Clone, the default.

  • In a scenario where

    • incremental recovery is possible

    • incremental recovery is appropriate

    you are not provided with the prompt, and incremental recovery is used.

  • In a scenario where

    • incremental recovery is possible

    • incremental recovery is not appropriate

    • Clone is not supported or is disabled

    you cannot use MySQL Clone to add the instance to the cluster. You are provided with the prompt, and the recommended option is to proceed with incremental recovery.

  • In a scenario where

    • incremental recovery is not possible

    • Clone is not supported or is disabled

    you cannot add the instance to the cluster, and the following error is shown: ERROR: The target instance must be either cloned or fully provisioned before it can be added to the target cluster. Cluster.addInstance: Instance provisioning required (RuntimeError). This could be the result of binary logs being purged from all cluster instances. It is recommended to use MySQL Clone, either by upgrading the cluster or by setting the disableClone option to false.

  • In a scenario where

    • incremental recovery is not possible

    • Clone is supported

    you can only use MySQL Clone to add the instance to the cluster. This could be the result of the cluster missing binary logs, for example when they have been purged.

Once you select an option from the prompt, by default the progress of the instance recovering the transactions from the cluster is displayed. This monitoring enables you to check the recovery phase is working and also how long it should take for the instance to join the cluster and come online. To cancel the monitoring of the recovery phase, issue CONTROL+C.

Cluster.checkInstanceState() and MySQL Clone

When the Cluster.checkInstanceState() operation is run to verify an instance against a cluster that is using MySQL Clone, and the instance does not have the required binary logs (for example because they were purged) but Clone is available and not disabled (disableClone is false), the operation provides a warning that Clone can be used. For example:

The cluster transactions cannot be recovered on the instance, however,
Clone is available and can be used when adding it to a cluster.

{
"reason": "all_purged",
"state": "warning"
}

Similarly, on an instance where Clone is either not available or has been disabled and the binary logs are not available, for example because they were purged, then the output includes:

The cluster transactions cannot be recovered on the instance.

{
"reason": "all_purged",
"state": "warning"
}

dba.checkInstanceConfiguration() and MySQL Clone

When the dba.checkInstanceConfiguration() operation is run against an instance that has MySQL Clone available but it is disabled, a warning is displayed.

21.2.2.3 Adopting a Group Replication Deployment

If you have an existing deployment of Group Replication and you want to use it to create a cluster, pass the adoptFromGR option to the dba.createCluster() function. The created InnoDB Cluster matches the mode in which the replication group is running, whether single-primary or multi-primary.

To adopt an existing Group Replication group, connect to a group member using MySQL Shell. In the following example a single-primary group is adopted. We connect to gr-member-2, a secondary instance, while gr-member-1 is functioning as the group's primary. Create a cluster using dba.createCluster(), passing in the adoptFromGR option. For example:

mysql-js> var cluster = dba.createCluster('prodCluster', {adoptFromGR: true});

A new InnoDB cluster will be created on instance 'root@gr-member-2:3306'.

Creating InnoDB cluster 'prodCluster' on 'root@gr-member-2:3306'...
Adding Seed Instance...

Cluster successfully created. Use cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.

The new cluster matches the mode of the group. If the adopted group was running in single-primary mode then a single-primary cluster is created. If the adopted group was running in multi-primary mode then a multi-primary cluster is created.

21.2.3 Monitoring InnoDB Cluster

This section describes how to use AdminAPI to monitor an InnoDB Cluster.

Using Cluster.describe()

To get information about the structure of the InnoDB Cluster itself, use the Cluster.describe() function:

mysql-js> cluster.describe();
{
    "clusterName": "testCluster",
    "defaultReplicaSet": {
        "name": "default",
        "topology": [
            {
                "address": "ic-1:3306",
                "label": "ic-1:3306",
                "role": "HA"
            },
            {
                "address": "ic-2:3306",
                "label": "ic-2:3306",
                "role": "HA"
            },
            {
                "address": "ic-3:3306",
                "label": "ic-3:3306",
                "role": "HA"
            }
        ]
    }
}

The output from this function shows the structure of the InnoDB Cluster, including all of its configuration information. The address, label, and role values match those described at Checking a cluster's Status with Cluster.status().

Checking a cluster's Status with Cluster.status()

Cluster objects provide the status() method that enables you to check how a cluster is running. Before you can check the status of the InnoDB Cluster, you need to get a reference to the InnoDB Cluster object by connecting to any of its instances. However, if you want to make changes to the configuration of the cluster, you must connect to a "R/W" instance. Issuing status() retrieves the status of the cluster based on the view of the cluster which the server instance you are connected to is aware of and outputs a status report.

Important

The instance's state in the cluster directly influences the information provided in the status report. Therefore ensure the instance you are connected to has a status of ONLINE.

For information about how the InnoDB Cluster is running, use the cluster's status() method:

mysql-js> var cluster = dba.getCluster()
mysql-js> cluster.status()
{
    "clusterName": "testcluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "ic-1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "ic-1:3306": {
                "address": "ic-1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "ic-2:3306": {
                "address": "ic-2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "ic-3:3306": {
                "address": "ic-3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://icadmin@ic-1:3306"
}

The output of Cluster.status() provides the following information:

  • clusterName: name assigned to this cluster during dba.createCluster().

  • defaultReplicaSet: the server instances which belong to an InnoDB Cluster and contain the data set.

  • primary: displayed when the cluster is operating in single-primary mode only. Shows the address of the current primary instance. If this field is not displayed, the cluster is operating in multi-primary mode.

  • ssl: whether secure connections are used by the cluster or not. Shows values of REQUIRED or DISABLED, depending on how the memberSslMode option was configured during either createCluster() or addInstance(). The value returned by this parameter corresponds to the value of the group_replication_ssl_mode server variable on the instance. See Securing your Cluster.

  • status: The status of this element of the cluster. For the overall cluster this describes the high availability provided by this cluster. The status is one of the following:

    • ONLINE: The instance is online and participating in the cluster.

    • OFFLINE: The instance has lost connection to the other instances.

    • RECOVERING: The instance is attempting to synchronize with the cluster by retrieving transactions it needs before it can become an ONLINE member.

    • UNREACHABLE: The instance has lost communication with the cluster.

    • ERROR: The instance has encountered an error during the recovery phase or while applying a transaction.

      Important

      Once an instance enters ERROR state, the super_read_only option is set to ON. To leave the ERROR state you must manually configure the instance with super_read_only=OFF.

    • (MISSING): The state of an instance which is part of the configured cluster, but is currently unavailable.

      Note

      The MISSING state is specific to InnoDB Cluster; it is not a state generated by Group Replication. MySQL Shell uses this state to indicate instances that are registered in the metadata, but cannot be found in the live cluster view.

  • topology: The instances which have been added to the cluster.

  • Host name of instance: The host name of an instance, for example localhost:3310.

  • role: what function this instance provides in the cluster. Currently only HA, for high availability.

  • mode: whether the server is read-write ("R/W") or read-only ("R/O"). From version 8.0.17, this is derived from the current state of the super_read_only variable on the instance, and whether the cluster has quorum. In previous versions the value of mode was derived from whether the instance was serving as a primary or secondary instance. Usually if the instance is a primary, then the mode is "R/W", and if the instance is a secondary the mode is "R/O". Any instances in a cluster that have no visible quorum are marked as "R/O", regardless of the state of the super_read_only variable.

  • groupInformationSourceMember: the internal connection used to get information about the cluster, shown as a URI-like connection string. Usually the connection initially used to create the cluster.

To display more information about the cluster use the extended option. From version 8.0.17, the extended option supports integer or Boolean values. To configure the additional information that Cluster.status({'extended':value}) provides, use the following values:

  • 0: disables the additional information, the default

  • 1: includes information about the Group Replication Protocol Version, Group name, cluster member UUIDs, cluster member roles and states as reported by Group Replication, and the list of fenced system variables

  • 2: includes information about transactions processed by connection and applier

  • 3: includes more detailed statistics about the replication performed by each cluster member.

Setting extended using Boolean values is the equivalent of setting the integer values 0 and 1. In versions prior to 8.0.17, the extended option was only Boolean. Similarly prior versions used the queryMembers Boolean option to provide more information about the instances in the cluster, which is the equivalent of setting extended to 3. The queryMembers option is deprecated and scheduled to be removed in a future release.
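
For example, assuming a cluster object obtained with dba.getCluster(), the following sketch requests progressively more detail:

var cluster = dba.getCluster();

// Group name, protocol version, member UUIDs, roles, states, fenced system variables
cluster.status({extended: 1});

// Detailed per-member replication statistics
cluster.status({extended: 3});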

When you issue Cluster.status({'extended':1}), or the extended option is set to true, the output includes:

  • the following additional attributes for the defaultReplicaSet object:

    • GRProtocolVersion is the Group Replication Protocol Version being used in the cluster.

      Tip

      InnoDB Cluster manages the Group Replication Protocol version being used automatically, see InnoDB Cluster and Group Replication Protocol for more information.

    • groupName is the group's name, a UUID.

  • the following additional attributes for each object of the topology object:

    • fenceSysVars a list containing the name of the fenced system variables which are enabled. Currently the fenced system variables considered are read_only, super_read_only and offline_mode.

    • memberId Each cluster member UUID.

    • memberRole the Member Role as reported by the Group Replication plugin, see the MEMBER_ROLE column of the replication_group_members table.

    • memberState the Member State as reported by the Group Replication plugin, see the MEMBER_STATE column of the replication_group_members table.

To see information about recovery and regular transaction I/O, applier worker thread statistics and any lags, applier coordinator statistics (if parallel apply is enabled), and errors and other information from the I/O and applier threads, use the values 2 and 3. A value of 3 is the equivalent of setting the deprecated queryMembers option to true. When you use these values, a connection to each instance in the cluster is opened so that additional instance specific statistics can be queried. The exact statistics that are included in the output depend on the state and configuration of the instance and the server version. This information matches that shown in the replication_group_member_stats table; see the descriptions of the matching columns for more information. Instances which are ONLINE have a transactions section included in the output. Instances which are RECOVERING have a recovery section included in the output. When you set extended to 2, in either case, these sections can contain the following:

  • appliedCount: see COUNT_TRANSACTIONS_REMOTE_APPLIED

  • checkedCount: see COUNT_TRANSACTIONS_CHECKED

  • committedAllMembers: see TRANSACTIONS_COMMITTED_ALL_MEMBERS

  • conflictsDetectedCount: see COUNT_CONFLICTS_DETECTED

  • inApplierQueueCount: see COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE

  • inQueueCount: see COUNT_TRANSACTIONS_IN_QUEUE

  • lastConflictFree: see LAST_CONFLICT_FREE_TRANSACTION

  • proposedCount: see COUNT_TRANSACTIONS_LOCAL_PROPOSED

  • rollbackCount: see COUNT_TRANSACTIONS_LOCAL_ROLLBACK

When you set extended to 3, the connection section shows information from the replication_connection_status table. The connection section can contain the following:

The currentlyQueueing section has information about the transactions currently queued:

  • immediateCommitTimestamp: see QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP

  • immediateCommitToNowTime: see QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus NOW()

  • originalCommitTimestamp: see QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP

  • originalCommitToNowTime: see QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus NOW()

  • startTimestamp: see QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP

  • transaction: see QUEUEING_TRANSACTION

  • lastHeartbeatTimestamp: see LAST_HEARTBEAT_TIMESTAMP

The lastQueued section has information about the most recently queued transaction:

  • endTimestamp: see LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP

  • immediateCommitTimestamp: see LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP

  • immediateCommitToEndTime: LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus NOW()

  • originalCommitTimestamp: see LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP

  • originalCommitToEndTime: LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus NOW()

  • queueTime: LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP minus LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP

  • startTimestamp: see LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP

  • transaction: see LAST_QUEUED_TRANSACTION

  • receivedHeartbeats: see COUNT_RECEIVED_HEARTBEATS

  • receivedTransactionSet: see RECEIVED_TRANSACTION_SET

  • threadId: see THREAD_ID

Instances which are using a multithreaded replica have a workers section which contains information about the worker threads, and matches the information shown by the replication_applier_status_by_worker table.

The lastApplied section shows the following information about the last transaction applied by the worker:

  • applyTime: see LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP minus LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP

  • endTimestamp: see LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP

  • immediateCommitTimestamp: see LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP

  • immediateCommitToEndTime: see LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus NOW()

  • originalCommitTimestamp: see LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP

  • originalCommitToEndTime: see LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus NOW()

  • startTimestamp: see LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP

  • transaction: see LAST_APPLIED_TRANSACTION

The currentlyApplying section shows the following information about the transaction currently being applied by the worker:

  • immediateCommitTimestamp: see APPLYING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP

  • immediateCommitToNowTime: see APPLYING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus NOW()

  • originalCommitTimestamp: see APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP

  • originalCommitToNowTime: see APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus NOW()

  • startTimestamp: see APPLYING_TRANSACTION_START_APPLY_TIMESTAMP

  • transaction: see APPLYING_TRANSACTION

The lastProcessed section has the following information about the last transaction processed by the worker:

  • bufferTime: LAST_PROCESSED_TRANSACTION_END_BUFFER_TIMESTAMP minus LAST_PROCESSED_TRANSACTION_START_BUFFER_TIMESTAMP

  • endTimestamp: see LAST_PROCESSED_TRANSACTION_END_BUFFER_TIMESTAMP

  • immediateCommitTimestamp: see LAST_PROCESSED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP

  • immediateCommitToEndTime: LAST_PROCESSED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus LAST_PROCESSED_TRANSACTION_END_BUFFER_TIMESTAMP

  • originalCommitTimestamp: see LAST_PROCESSED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP

  • originalCommitToEndTime: LAST_PROCESSED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus LAST_PROCESSED_TRANSACTION_END_BUFFER_TIMESTAMP

  • startTimestamp: see LAST_PROCESSED_TRANSACTION_START_BUFFER_TIMESTAMP

  • transaction: see LAST_PROCESSED_TRANSACTION

If parallel applier workers are enabled, then the number of objects in the workers array in transactions or recovery matches the number of configured workers and an additional coordinator object is included. The information shown matches the information in the replication_applier_status_by_coordinator table. The object can contain:

The currentlyProcessing section has the following information about the transaction being processed by the worker:

  • immediateCommitTimestamp: see PROCESSING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP

  • immediateCommitToNowTime: PROCESSING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus NOW()

  • originalCommitTimestamp: see PROCESSING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP

  • originalCommitToNowTime: PROCESSING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus NOW()

  • startTimestamp: see PROCESSING_TRANSACTION_START_BUFFER_TIMESTAMP

  • transaction: see PROCESSING_TRANSACTION

worker objects have the following information if an error was detected in the replication_applier_status_by_worker table:

  • lastErrno: see LAST_ERROR_NUMBER

  • lastError: see LAST_ERROR_MESSAGE

  • lastErrorTimestamp: see LAST_ERROR_TIMESTAMP

connection objects have the following information if an error was detected in the replication_connection_status table:

  • lastErrno: see LAST_ERROR_NUMBER

  • lastError: see LAST_ERROR_MESSAGE

  • lastErrorTimestamp: see LAST_ERROR_TIMESTAMP

coordinator objects have the following information if an error was detected in the replication_applier_status_by_coordinator table:

  • lastErrno: see LAST_ERROR_NUMBER

  • lastError: see LAST_ERROR_MESSAGE

  • lastErrorTimestamp: see LAST_ERROR_TIMESTAMP

Monitoring Recovery Operations

The output of Cluster.status() shows information about the progress of recovery operations for instances in RECOVERING state. Information is shown for instances recovering using either MySQL Clone, or incremental recovery. Monitor these fields:

  • The recoveryStatusText field includes information about the type of recovery being used. When MySQL Clone is working the field shows Cloning in progress. When incremental recovery is working the field shows Distributed recovery in progress.

  • When MySQL Clone is being used, the recovery field includes a dictionary with the following fields:

    • cloneStartTime: The timestamp of the start of the clone process

    • cloneState: The state of the clone progress

    • currentStage: The current stage which the clone process has reached

    • currentStageProgress: The current stage progress as a percentage of completion

    • currentStageState: The current stage state

    Example Cluster.status() output, trimmed for brevity:

    ...
    "recovery": {
        "cloneStartTime": "2019-07-15 12:50:22.730",
        "cloneState": "In Progress",
        "currentStage": "FILE COPY",
        "currentStageProgress": 61.726837675213865,
        "currentStageState": "In Progress"
    },
    "recoveryStatusText": "Cloning in progress",
    ...
    
  • When incremental recovery is being used, the recovery field includes a dictionary with the following field:

    • state: The state of the group_replication_recovery channel

    Example Cluster.status() output, trimmed for brevity:

    ...
    "recovery": {
        "state": "ON"
    },
    ...
    

InnoDB Cluster and Group Replication Protocol

From MySQL 8.0.16, Group Replication has the concept of a communication protocol for the group, see Section 18.4.1.4, “Setting a Group's Communication Protocol Version” for background information. The Group Replication communication protocol version usually has to be managed explicitly, and set to accommodate the oldest MySQL Server version that you want the group to support. However, InnoDB Cluster automatically and transparently manages the communication protocol versions of its members, whenever the cluster topology is changed using AdminAPI operations. A cluster always uses the most recent communication protocol version that is supported by all the instances that are currently part of the cluster or joining it.

  • When an instance is added to, removed from, or rejoins the cluster, or a rescan or reboot operation is carried out on the cluster, the communication protocol version is automatically set to a version supported by the instance that is now at the earliest MySQL Server version.

  • When you carry out a rolling upgrade by removing instances from the cluster, upgrading them, and adding them back into the cluster, the communication protocol version is automatically upgraded when the last remaining instance at the old MySQL Server version is removed from the cluster prior to its upgrade.

To see the communication protocol version being used in a cluster, use the Cluster.status() function with the extended option enabled. The communication protocol version is returned in the GRProtocolVersion field, provided that the cluster has quorum and no cluster members are unreachable.
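
For example, the following sketch shows where the field appears; the version shown is illustrative output only:

mysql-js> cluster.status({extended: 1})
...
    "GRProtocolVersion": "8.0.16",
...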

Checking the MySQL Version on Instances

The following operations can report information about the MySQL Server version running on the instance:

  • Cluster.status()

  • Cluster.describe()

  • Cluster.rescan()

The behavior varies depending on the MySQL Server version of the Cluster object session.

  • Cluster.status()

    If either of the following requirements is met, a version string attribute is returned for each instance JSON object of the topology object:

    • The Cluster object's current session is version 8.0.11 or later.

    • The Cluster object's current session is running a version earlier than version 8.0.11 but the extended option is set to 3 (or the deprecated queryMembers is true).

    For example on an instance running version 8.0.16:

    "topology": {
        "ic-1:3306": {
            "address": "ic-1:3306",
            "mode": "R/W",
            "readReplicas": {},
            "role": "HA",
            "status": "ONLINE",
            "version": "8.0.16"
    }
    

    For example on an instance running version 5.7.24:

    "topology": {
        "ic-1:3306": {
            "address": "ic-1:3306",
            "mode": "R/W",
            "readReplicas": {},
            "role": "HA",
            "status": "ONLINE",
            "version": "5.7.24"
    }
    
  • Cluster.describe()

    If the Cluster object's current session is version 8.0.11 or later, a version string attribute is returned for each instance JSON object of the topology object.

    For example on an instance running version 8.0.16:

    "topology": [
        {
            "address": "ic-1:3306",
            "label": "ic-1:3306",
            "role": "HA",
            "version": "8.0.16"
        }
    ]
    
  • Cluster.rescan()

    If the Cluster object's current session is version 8.0.11 or later, and the Cluster.rescan() operation detects instances which do not belong to the cluster, a version string attribute is returned for each instance JSON object of the newlyDiscoveredInstance object.

    For example on an instance running version 8.0.16:

    "newlyDiscoveredInstances": [
        {
            "host": "ic-4:3306",
            "member_id": "82a67a06-2ba3-11e9-8cfc-3c6aa7197deb",
            "name": null,
            "version": "8.0.16"
        }
    ]
    

21.2.4 Working with Instances

This section describes the AdminAPI operations which apply to instances. You can configure instances before using them with InnoDB Cluster, check the state of an instance, and so on.

Using dba.checkInstanceConfiguration()

Before creating a production deployment from server instances you need to check that MySQL on each instance is correctly configured. In addition to dba.configureInstance(), which checks the configuration as part of configuring an instance, you can use the dba.checkInstanceConfiguration(instance) function. This checks that the instance satisfies the requirements described in Section 21.2.1, “MySQL InnoDB Cluster Requirements”, without changing any configuration on the instance. This does not check any data that is on the instance; see Checking Instance State for more information.

The user which you use to connect to the instance must have suitable privileges, for example as configured at Configuring Users for AdminAPI. The following demonstrates issuing this in a running MySQL Shell:

mysql-js> dba.checkInstanceConfiguration('icadmin@ic-1:3306')
Please provide the password for 'icadmin@ic-1:3306': ***
Validating MySQL instance at ic-1:3306 for use in an InnoDB cluster...

This instance reports its own address as ic-1
Clients and other cluster members will communicate with it through this address by default.
If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...

Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             |                | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Please use the dba.configureInstance() command to repair these issues.

{
    "config_errors": [
        {
            "action": "restart",
            "current": "OFF",
            "option": "enforce_gtid_consistency",
            "required": "ON"
        },
        {
            "action": "restart",
            "current": "OFF",
            "option": "gtid_mode",
            "required": "ON"
        },
        {
            "action": "restart",
            "current": "1",
            "option": "server_id",
            "required": ""
        }
    ],
    "status": "error"
}

Repeat this process for each server instance that you plan to use as part of your cluster. The report generated after running dba.checkInstanceConfiguration() provides information about any configuration changes required before you can proceed. The action field in the config_errors section of the report tells you whether MySQL on the instance requires a restart to detect any change made to the configuration file.

Configuring Instances with dba.configureLocalInstance()

Instances which do not support persisting configuration changes automatically (see Persisting Settings) require you to connect to the server, run MySQL Shell, connect to the instance locally and issue dba.configureLocalInstance(). This enables MySQL Shell to modify the instance's option file after running the following commands against a remote instance:

  • dba.configureInstance()

  • dba.createCluster()

  • Cluster.addInstance()

  • Cluster.removeInstance()

  • Cluster.rejoinInstance()

Important

Failing to persist configuration changes to an instance's option file can result in the instance not rejoining the cluster after the next restart.

The recommended method is to log in to the remote machine, for example using SSH, run MySQL Shell as the root user and then connect to the local MySQL server. For example, use the --uri option to connect to the local instance:

shell> sudo -i mysqlsh --uri=instance

Alternatively use the \connect command to log in to the local instance. Then issue dba.configureLocalInstance(instance), where instance is the connection information to the local instance, to persist any changes made to the local instance's option file.

mysql-js> dba.configureLocalInstance('icadmin@ic-2:3306')

Repeat this process for each instance in the cluster which does not support persisting configuration changes automatically. For example if you add 2 instances to a cluster which do not support persisting configuration changes automatically, you must connect to each server and persist the configuration changes required for InnoDB Cluster before the instance restarts. Similarly if you modify the cluster structure, for example changing the number of instances, you need to repeat this process for each server instance to update the InnoDB Cluster metadata accordingly for each instance in the cluster.

Checking Instance State

The cluster.checkInstanceState() function can be used to verify the existing data on an instance does not prevent it from joining a cluster. This process works by validating the instance's global transaction identifier (GTID) state compared to the GTIDs already processed by the cluster. For more information on GTIDs see Section 17.1.3.1, “GTID Format and Storage”. This check enables you to determine if an instance which has processed transactions can be added to the cluster.

The following demonstrates issuing this in a running MySQL Shell:

mysql-js> cluster.checkInstanceState('icadmin@ic-4:3306')

The output of this function can be one of the following:

  • OK new: the instance has not executed any GTID transactions, therefore it cannot conflict with the GTIDs executed by the cluster

  • OK recoverable: the instance has executed GTIDs which do not conflict with the executed GTIDs of the cluster seed instances

  • ERROR diverged: the instance has executed GTIDs which diverge with the executed GTIDs of the cluster seed instances

  • ERROR lost_transactions: the instance has more executed GTIDs than the executed GTIDs of the cluster seed instances

Instances with an OK status can be added to the cluster because any data on the instance is consistent with the cluster. In other words the instance being checked has not executed any transactions which conflict with the GTIDs executed by the cluster, and can be recovered to the same state as the rest of the cluster instances.

21.2.5 Working with InnoDB Cluster

This section explains how to work with InnoDB Cluster, and how to handle common administration tasks.

Removing Instances from the InnoDB Cluster

You can remove an instance from a cluster at any time should you wish to do so. This can be done with the Cluster.removeInstance(instance) method, as in the following example:

mysql-js> cluster.removeInstance('root@localhost:3310')

The instance will be removed from the InnoDB cluster. Depending on the instance
being the Seed or not, the Metadata session might become invalid. If so, please
start a new session to the Metadata Storage R/W instance.

Attempting to leave from the Group Replication group...

The instance 'localhost:3310' was successfully removed from the cluster.

You can optionally pass in the interactive option to control whether you are prompted to confirm the removal of the instance from the cluster. In interactive mode, you are prompted to continue with the removal of the instance (or not) in case it is not reachable. The cluster.removeInstance() operation ensures that the instance is removed from the metadata of all the cluster members which are ONLINE, and the instance itself.

When the instance being removed has transactions which still need to be applied, AdminAPI waits for up to the number of seconds configured by the MySQL Shell dba.gtidWaitTimeout option for transactions (GTIDs) to be applied. The MySQL Shell dba.gtidWaitTimeout option has a default value of 60 seconds, see Configuring MySQL Shell Options for information on changing the default. If the timeout value defined by dba.gtidWaitTimeout is reached when waiting for transactions to be applied and the force option is false (or not defined) then an error is issued and the remove operation is aborted. If the timeout value defined by dba.gtidWaitTimeout is reached when waiting for transactions to be applied and the force option is set to true then the operation continues without an error and removes the instance from the cluster.
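
As a minimal sketch, where the timeout value and instance address are placeholders, the timeout can be raised for the current MySQL Shell session before removing an instance:

// Allow up to 120 seconds for outstanding transactions to be applied
shell.options['dba.gtidWaitTimeout'] = 120
cluster.removeInstance('icadmin@ic-2:3306')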

Important

The force option should only be used with Cluster.removeInstance(instance) when you want to ignore any errors, for example unprocessed transactions or an instance being UNREACHABLE, and do not plan to reuse the instance with the cluster. Ignoring errors when removing an instance from the cluster could result in an instance which is not in synchrony with the cluster, preventing it from rejoining the cluster at a later time. Only use the force option when you plan to no longer use the instance with the cluster, in all other cases you should always try to recover the instance and only remove it when it is available and healthy, in other words with the status ONLINE.

Dissolving an InnoDB Cluster

To dissolve an InnoDB Cluster you connect to a read-write instance, for example the primary in a single-primary cluster, and use the Cluster.dissolve() command. This removes all metadata and configuration associated with the cluster, and disables Group Replication on the instances. Any data that was replicated between the instances is not removed.

Important

There is no way to undo the dissolving of a cluster. To create it again use dba.createCluster().

The Cluster.dissolve() operation can only configure instances which are ONLINE or reachable. If members of a cluster cannot be reached by the member where you issued the Cluster.dissolve() command, you have to decide how the dissolve operation should proceed. If there is any chance you want to rejoin any instances that are identified as missing from the cluster, it is strongly recommended to cancel the dissolve operation and bring the missing instances back online before dissolving the cluster. This ensures that all instances can have their metadata updated correctly, and that there is no chance of a split-brain situation. However, if the instances which cannot be reached have permanently left the cluster, there may be no choice but to force the dissolve operation, which means that the missing instances are ignored and only online instances are affected by the operation.

Warning

Forcing the dissolve operation to ignore cluster instances can result in instances which could not be reached during the dissolve operation continuing to operate, creating the risk of a split-brain situation. Only ever force a dissolve operation to ignore missing instances if you are sure there is no chance of the instance coming online again.

In interactive mode, if members of a cluster are not reachable during a dissolve operation then an interactive prompt is displayed, for example:

mysql-js> Cluster.dissolve()
The cluster still has the following registered instances:
{
    "clusterName": "testCluster",
    "defaultReplicaSet": {
        "name": "default",
        "topology": [
            {
                "address": "ic-1:3306",
                "label": "ic-1:3306",
                "role": "HA"
            },
            {
                "address": "ic-2:3306",
                "label": "ic-2:3306",
                "role": "HA"
            },
            {
                "address": "ic-3:3306",
                "label": "ic-3:3306",
                "role": "HA"
            }
        ]
    }
}
WARNING: You are about to dissolve the whole cluster and lose the high
availability features provided by it. This operation cannot be reverted. All
members will be removed from the cluster and replication will be stopped,
internal recovery user accounts and the cluster metadata will be dropped. User
data will be maintained intact in all instances.

Are you sure you want to dissolve the cluster? [y/N]: y

ERROR: The instance 'ic-2:3306' cannot be removed because it is on a '(MISSING)'
state. Please bring the instance back ONLINE and try to dissolve the cluster
again. If the instance is permanently not reachable, then you can choose to
proceed with the operation and only remove the instance from the Cluster
Metadata.

Do you want to continue anyway (only the instance metadata will be removed)?
[y/N]: y

Instance 'ic-3:3306' is attempting to leave the cluster...  Instance 'ic-1:3306'
is attempting to leave the cluster...

WARNING: The cluster was successfully dissolved, but the following instance was
skipped: 'ic-2:3306'. Please make sure this instance is permanently unavailable
or take any necessary manual action to ensure the cluster is fully dissolved.

In this example, the cluster consisted of three instances, one of which was offline when dissolve was issued. The error is caught, and you are given the choice how to proceed. In this case the missing ic-2 instance is ignored and the reachable members have their metadata updated.

When MySQL Shell is running in non-interactive mode, for example when running a batch file, you can configure the behavior of the Cluster.dissolve() operation using the force option. To force the dissolve operation to ignore any instances which are unreachable, issue:

mysql-js> Cluster.dissolve({force: true})

Any instances which can be reached are removed from the cluster, and any unreachable instances are ignored. The warnings in this section about forcing the removal of missing instances from a cluster apply equally to this technique of forcing the dissolve operation.

You can also use the interactive option with the Cluster.dissolve() operation to override the mode which MySQL Shell is running in, for example to make the interactive prompt appear when running a batch script. For example:

mysql-js> Cluster.dissolve({interactive: true})

The dba.gtidWaitTimeout MySQL Shell option configures how long the Cluster.dissolve() operation waits for cluster transactions to be applied before removing a target instance from the cluster, but only if the target instance is ONLINE. An error is issued if the timeout is reached when waiting for cluster transactions to be applied on any of the instances being removed, except if force: true is used, which skips the error in that case.

Note

After issuing cluster.dissolve(), any variable assigned to the Cluster object is no longer valid.

Changing a Cluster's Topology

By default, an InnoDB Cluster runs in single-primary mode, where the cluster has one primary server that accepts read and write queries (R/W), and all of the remaining instances in the cluster accept only read queries (R/O). When you configure a cluster to run in multi-primary mode, all of the instances in the cluster are primaries, which means that they accept both read and write queries (R/W). If a cluster has all of its instances running MySQL server version 8.0.15 or later, you can make changes to the topology of the cluster while the cluster is online. In previous versions it was necessary to completely dissolve and re-create the cluster to make the configuration changes. This uses the group action coordinator exposed through the UDFs described at Section 18.4.1, “Configuring an Online Group”, and as such you should observe the rules for configuring online groups.

Note

Multi-primary mode is considered an advanced mode.

Usually a single-primary cluster elects a new primary when the current primary leaves the cluster unexpectedly, for example due to an unexpected halt. The election process is normally used to choose which of the current secondaries becomes the new primary. To override the election process and force a specific server to become the new primary, use the Cluster.setPrimaryInstance(instance) function, where instance specifies the connection to the instance which should become the new primary. This enables you to configure the underlying Group Replication group to choose a specific instance as the new primary, bypassing the election process.
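
For example, a minimal sketch, where the instance address is a placeholder:

// Make ic-2 the new primary, bypassing the normal election process
cluster.setPrimaryInstance('icadmin@ic-2:3306');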

You can change the mode (sometimes described as the topology) in which a cluster is running, between single-primary and multi-primary, using the following operations (a short example follows this list):

  • Cluster.switchToMultiPrimaryMode(), which switches the cluster to multi-primary mode. All instances become primaries.

  • Cluster.switchToSinglePrimaryMode([instance]), which switches the cluster to single-primary mode. If instance is specified, it becomes the primary and all the other instances become secondaries. If instance is not specified, the new primary is the instance with the highest member weight (and the lowest UUID in case of a tie on member weight).
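
A minimal sketch of switching between the two modes, where the instance address is a placeholder:

// Make every instance a primary
cluster.switchToMultiPrimaryMode();

// Return to single-primary mode, making ic-1 the primary
cluster.switchToSinglePrimaryMode('icadmin@ic-1:3306');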

21.2.6 Configuring InnoDB Cluster

This section describes how to use AdminAPI to configure an InnoDB Cluster.

Setting Options for InnoDB Cluster

You can check and modify the settings in place for an InnoDB Cluster while the instances are online. To check the current settings of a cluster, use the following operation:

  • Cluster.options(), which lists the configuration options for the cluster and its instances. A boolean option all can also be specified to include information about all Group Replication system variables in the output.

You can configure the options of an InnoDB Cluster at a cluster level or instance level, while instances remain online. This avoids the need to remove, reconfigure, and then re-add the instance to change InnoDB Cluster options. Use the following operations (a short example follows this list):

  • Cluster.setOption(option, value) to change the settings of all cluster instances globally or cluster global settings such as clusterName.

  • Cluster.setInstanceOption(instance, option, value) to change the settings of individual cluster instances
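
For example, a minimal sketch, where the instance address, option values, and label are placeholders:

// Inspect the current configuration, including all Group Replication system variables
cluster.options({all: true});

// Change a setting for all instances in the cluster
cluster.setOption('memberWeight', 25);

// Change a setting on one instance only
cluster.setInstanceOption('icadmin@ic-2:3306', 'label', 'secondary2');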

The way which you use InnoDB Cluster options with the operations listed depends on whether the option can be changed to be the same on all instances or not. These options are changeable at both the cluster (all instances) and per instance level:

  • exitStateAction

  • memberWeight

This option is changeable at the per instance level only:

  • label

These options are changeable at the cluster level only:

  • consistency

  • expelTimeout

  • clusterName

Customizing InnoDB Clusters

When you create a cluster and add instances to it, values such as the group name, the local address, and the seed instances are configured automatically by AdminAPI. These default values are recommended for most deployments, but advanced users can override the defaults by passing the following options to the dba.createCluster() and Cluster.addInstance() operations.

To customize the name of the replication group created by InnoDB Cluster, pass the groupName option to the dba.createCluster() command. This sets the group_replication_group_name system variable. The name must be a valid UUID.

To customize the address which an instance provides for connections from other instances, pass the localAddress option to the dba.createCluster() and cluster.addInstance() commands. Specify the address in the format host:port. This sets the group_replication_local_address system variable on the instance. The address must be accessible to all instances in the cluster, and must be reserved for internal cluster communication only. In other words do not use this address for communication with the instance.

To customize the instances used as seeds when an instance joins the cluster, pass the groupSeeds option to the dba.createCluster() and Cluster.addInstance() operations. Seed instances are contacted when a new instance joins a cluster and are used to provide data to the new instance. The addresses of the seed instances are specified as a comma separated list such as host1:port1,host2:port2. This configures the group_replication_group_seeds system variable. When a new instance is added to a cluster, the local address of this instance is automatically appended to the list of group seeds of all online cluster members in order to allow them to use the new instance to rejoin the group if necessary.

Note

The instances in the seed list are used according to the order in which they appear in the list. This means that a user-specified seed is used first and preferred over automatically added instances.

For more information see the documentation of the system variables configured by these AdminAPI options.
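
A minimal sketch that overrides these defaults, where the cluster name, UUID, host names, and ports are all placeholders:

// Create the cluster with a custom group name and local address for the seed instance
var cluster = dba.createCluster('testCluster', {
    groupName: 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',
    localAddress: 'ic-1:33061'
});

// Add an instance with a custom local address and an explicit seed list
cluster.addInstance('icadmin@ic-2:3306', {
    localAddress: 'ic-2:33061',
    groupSeeds: 'ic-1:33061,ic-2:33061'
});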

Configuring the Election Process

You can optionally configure how a single-primary cluster elects a new primary, for example to prefer one instance as the new primary to fail over to. Use the memberWeight option and pass it to the dba.createCluster() and Cluster.addInstance() methods when creating your cluster. The memberWeight option accepts an integer value between 0 and 100, which is a percentage weight for automatic primary election on failover. When an instance has a higher percentage number set by memberWeight, it is more likely to be elected as primary in a single-primary cluster. When a primary election takes place, if multiple instances have the same memberWeight value, the instances are prioritized by their server UUID in lexicographical order, and the one with the lowest UUID is chosen.

Setting the value of memberWeight configures the group_replication_member_weight system variable on the instance. Group Replication limits the value range from 0 to 100, automatically adjusting it if a higher or lower value is provided. Group Replication uses a default value of 50 if no value is provided. See Section 18.1.3.1, “Single-Primary Mode” for more information.

For example to configure a cluster where ic-3 is the preferred instance to fail over to in the event that ic-1, the current primary, leaves the cluster unexpectedly use memberWeight as follows:

dba.createCluster('cluster1', {memberWeight:35})
var mycluster = dba.getCluster()
mycluster.addInstance('icadmin@ic2', {memberWeight:25})
mycluster.addInstance('icadmin@ic3', {memberWeight:50})

Configuring Failover Consistency

Group Replication provides the ability to specify the failover guarantees (eventual or read your writes) if a primary failover happens in single-primary mode (see Section 18.4.2.2, “Configuring Transaction Consistency Guarantees”). You can configure the failover guarantees of an InnoDB Cluster at creation by passing the consistency option (prior to version 8.0.16 this option was the failoverConsistency option, which is now deprecated) to the dba.createCluster() operation, which configures the group_replication_consistency system variable on the seed instance. This option defines the behavior of a new fencing mechanism used when a new primary is elected in a single-primary group. The fencing restricts connections from writing and reading from the new primary until it has applied any pending backlog of changes that came from the old primary (sometimes referred to as read your writes). While the fencing mechanism is in place, applications effectively do not see time going backward for a short period of time while any backlog is applied. This ensures that applications do not read stale information from the newly elected primary.

The consistency option is only supported if the target MySQL server version is 8.0.14 or later, and instances added to a cluster which has been configured with the consistency option are automatically configured to have the same group_replication_consistency value on all cluster members that support the option. The default value of the variable is controlled by Group Replication and is EVENTUAL; change the consistency option to BEFORE_ON_PRIMARY_FAILOVER to enable the fencing mechanism. Alternatively use consistency=0 for EVENTUAL and consistency=1 for BEFORE_ON_PRIMARY_FAILOVER.
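
For example, a minimal sketch, where the cluster name is a placeholder:

// Enable the read-your-writes fencing behavior on primary failover
dba.createCluster('testCluster', {consistency: 'BEFORE_ON_PRIMARY_FAILOVER'});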

Note

Using the consistency option on a multi-primary InnoDB Cluster has no effect but is allowed because the cluster can later be changed into single-primary mode with the Cluster.switchToSinglePrimaryMode() operation.

Configuring Automatic Rejoin of Instances

Instances running MySQL 8.0.16 and later support the Group Replication automatic rejoin functionality, which enables you to configure instances to automatically rejoin the cluster after being expelled. See Section 18.6.6, “Responses to Failure Detection and Network Partitioning” for background information. AdminAPI provides the autoRejoinTries option to configure the number of tries instances make to rejoin the cluster after being expelled. By default instances do not automatically rejoin the cluster. You can configure the autoRejoinTries option at either the cluster level or for an individual instance using the following commands:

  • dba.createCluster()

  • Cluster.addInstance()

  • Cluster.setOption()

  • Cluster.setInstanceOption()

The autoRejoinTries option accepts integer values between 0 and 2016 and the default value is 0, which means that instances do not try to automatically rejoin. When you are using the automatic rejoin functionality, your cluster is more tolerant of faults, especially temporary ones such as unreliable networks. But if quorum has been lost, you should not expect members to automatically rejoin the cluster, because a majority is required to rejoin instances.

Instances running MySQL version 8.0.12 and later have the group_replication_exit_state_action variable, which you can configure using the AdminAPI exitStateAction option. This controls what instances do in the event of leaving the cluster unexpectedly. By default the exitStateAction option is READ_ONLY, which means that instances which leave the cluster unexpectedly become read-only. If exitStateAction is set to OFFLINE_MODE (available from MySQL 8.0.18), instances which leave the cluster unexpectedly become read-only and also enter offline mode, where they disconnect existing clients and do not accept new connections (except from clients with administrator privileges). If exitStateAction is set to ABORT_SERVER then in the event of leaving the cluster unexpectedly, the instance shuts down MySQL, and it has to be started again before it can rejoin the cluster. Note that when you are using the automatic rejoin functionality, the action configured by the exitStateAction option only happens in the event that all attempts to rejoin the cluster fail.
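
For example, a minimal sketch that sets both options at the cluster level (the values shown are placeholders):

// Let expelled instances attempt to rejoin the cluster automatically, up to three times
cluster.setOption('autoRejoinTries', 3);

// If all rejoin attempts fail, place the instance in offline mode
cluster.setOption('exitStateAction', 'OFFLINE_MODE');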

There is a chance you might connect to an instance and try to configure it using the AdminAPI, but at that moment the instance could be rejoining the cluster. This could happen whenever you use any of these operations:

  • Cluster.status()

  • dba.getCluster()

  • Cluster.rejoinInstance()

  • Cluster.addInstance()

  • Cluster.removeInstance()

  • Cluster.rescan()

  • Cluster.checkInstanceState()

These operations might provide extra information while the instance is automatically rejoining the cluster. In addition, when you are using Cluster.removeInstance(), if the target instance is automatically rejoining the cluster the operation aborts unless you pass in force:true.

Securing your Cluster

Server instances can be configured to use secure connections. For general information on using secure connections with MySQL see Section 6.3, “Using Encrypted Connections”. This section explains how to configure a cluster to use encrypted connections. An additional security possibility is to configure which servers can access the cluster, see Creating an Allowlist of Servers.

Important

Once you have configured a cluster to use encrypted connections you must add the servers to the ipAllowlist.

When using dba.createCluster() to set up a cluster, if the server instance provides encryption then it is automatically enabled on the seed instance. Pass the memberSslMode option to the dba.createCluster() method to specify a different SSL mode. The SSL mode of a cluster can only be set at the time of creation. The memberSslMode option is a string that configures the SSL mode to be used; it defaults to AUTO. The permitted values are DISABLED, REQUIRED, and AUTO. These modes are defined as:

  • Setting createCluster({memberSslMode:'DISABLED'}) ensures SSL encryption is disabled for the seed instance in the cluster.

  • Setting createCluster({memberSslMode:'REQUIRED'}) ensures SSL encryption is enabled for the seed instance in the cluster. If it cannot be enabled, an error is raised.

  • Setting createCluster({memberSslMode:'AUTO'}) (the default) means SSL encryption is automatically enabled if the server instance supports it, or disabled if the server does not support it.
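
For example, to create a cluster that fails with an error if SSL encryption cannot be enabled on the seed instance, you could issue something like the following (the cluster name is illustrative):

mysql-js> var cluster = dba.createCluster("testCluster", {memberSslMode: "REQUIRED"})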

Note

When using the commercial version of MySQL, SSL is enabled by default and you might need to configure the allowlist for all instances. See Creating an Allowlist of Servers.

When you issue the cluster.addInstance() and cluster.rejoinInstance() commands, SSL encryption on the instance is enabled or disabled based on the setting found for the seed instance.

When using createCluster() with the adoptFromGR option to adopt an existing Group Replication group, no SSL settings are changed on the adopted cluster:

  • memberSslMode cannot be used with adoptFromGR.

  • If the SSL settings of the adopted cluster are different from the ones supported by MySQL Shell, in other words SSL for Group Replication recovery and Group Communication, neither setting is modified. This means you are not able to add new instances to the cluster unless you change the settings manually for the adopted cluster.

MySQL Shell always enables or disables SSL for the cluster for both Group Replication recovery and Group Communication, see Section 18.5.2, “Securing Group Communication Connections with Secure Socket Layer (SSL)”. A verification is performed and an error issued in case those settings are different for the seed instance (for example as the result of a dba.createCluster() using adoptFromGR) when adding a new instance to the cluster. SSL encryption must be enabled or disabled for all instances in the cluster. Verifications are performed to ensure that this invariant holds when adding a new instance to the cluster.

The dba.deploySandboxInstance() command attempts to deploy sandbox instances with SSL encryption support by default. If it is not possible, the server instance is deployed without SSL support. Use the ignoreSslError option set to false to ensure that sandbox instances are deployed with SSL support, issuing an error if SSL support cannot be provided. When ignoreSslError is true, which is the default, no error is issued during the operation if the SSL support cannot be provided and the server instance is deployed without SSL support.
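
For example, to ensure a sandbox instance is only deployed if SSL support can be provided, you could issue something like the following (the port number is illustrative):

mysql-js> dba.deploySandboxInstance(3310, {ignoreSslError: false})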

Creating an Allowlist of Servers

When using a cluster's createCluster(), addInstance(), and rejoinInstance() methods you can optionally specify a list of approved servers that belong to the cluster, referred to as an allowlist. By specifying the allowlist explicitly in this way you can increase the security of your cluster because only servers in the allowlist can connect to the cluster. Using the ipAllowlist option (previously ipWhitelist, now deprecated) configures the group_replication_ip_allowlist system variable on the instance. By default, if not specified explicitly, the allowlist is automatically set to the private network addresses that the server has network interfaces on. To configure the allowlist, specify the servers to add with the ipAllowlist option when using the method. IP addresses must be specified in IPv4 format. Pass the servers as a comma separated list, surrounded by quotes. For example:

mysql-js> cluster.addInstance("icadmin@ic-3:3306", {ipAllowlist: "203.0.113.0/24, 198.51.100.110"})

This configures the instance to only accept connections from servers at addresses 203.0.113.0/24 and 198.51.100.110. The allowlist can also include host names, which are resolved only when a connection request is made by another server.

Warning

Host names are inherently less secure than IP addresses in an allowlist. MySQL carries out FCrDNS verification, which provides a good level of protection, but can be compromised by certain types of attack. Specify host names in your allowlist only when strictly necessary, and ensure that all components used for name resolution, such as DNS servers, are maintained under your control. You can also implement name resolution locally using the hosts file, to avoid the use of external components.

21.2.7 Troubleshooting InnoDB Cluster

This section describes how to troubleshoot an InnoDB Cluster.

Rejoining an Instance to a Cluster

If an instance leaves the cluster, for example because it lost connection, and for some reason it could not automatically rejoin the cluster, it might be necessary to rejoin it to the cluster at a later stage. To rejoin an instance to a cluster issue Cluster.rejoinInstance(instance).
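
For example, assuming the Cluster object is assigned to the cluster variable and ic-2:3306 is the illustrative address of the instance to rejoin, issue:

mysql-js> cluster.rejoinInstance("icadmin@ic-2:3306")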

Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.

In the case where an instance has not had its configuration persisted (see Persisting Settings), upon restart the instance does not rejoin the cluster automatically. The solution is to issue cluster.rejoinInstance() so that the instance is added to the cluster again and ensure the changes are persisted. Once the InnoDB Cluster configuration is persisted to the instance's option file it rejoins the cluster automatically.

If you are rejoining an instance which has changed in some way then you might have to modify the instance to make the rejoin process work correctly. For example, when you restore a MySQL Enterprise Backup backup, the server_uuid changes. Attempting to rejoin such an instance fails because InnoDB Cluster instances are identified by the server_uuid variable. In such a situation, information about the instance's old server_uuid must be removed from the InnoDB Cluster metadata and then a Cluster.rescan() must be executed to add the instance to the metadata using its new server_uuid. For example:

cluster.removeInstance("root@instanceWithOldUUID:3306", {force: true})

cluster.rescan()

In this case you must pass the force option to the Cluster.removeInstance() method because the instance is unreachable from the cluster's perspective and we want to remove it from the InnoDB Cluster metadata anyway.

Restoring a Cluster from Quorum Loss

If an instance (or instances) fail, then a cluster can lose its quorum, which is the ability to vote in a new primary. This can happen when there is a failure of enough instances that there is no longer a majority of the instances which make up the cluster to vote on Group Replication operations. See Section 18.1.4.3, “Fault-tolerance”. When a cluster loses quorum you can no longer process write transactions with the cluster, or change the cluster's topology, for example by adding, rejoining, or removing instances. However if you have an instance online which contains the InnoDB Cluster metadata, it is possible to restore a cluster with quorum. This assumes you can connect to an instance that contains the InnoDB Cluster metadata, and that instance can contact the other instances you want to use to restore the cluster.

Important

This operation is potentially dangerous because it can create a split-brain scenario if incorrectly used and should be considered a last resort. Make absolutely sure that there are no partitions of this group that are still operating somewhere in the network, but not accessible from your location.

Connect to an instance which contains the cluster's metadata, then use the Cluster.forceQuorumUsingPartitionOf(instance) operation, which restores the cluster based on the metadata on instance, and then all the instances that are ONLINE from the point of view of the given instance definition are added to the restored cluster.

mysql-js> cluster.forceQuorumUsingPartitionOf("icadmin@ic-1:3306")

  Restoring replicaset 'default' from loss of quorum, by using the partition composed of [icadmin@ic-1:3306]

  Please provide the password for 'icadmin@ic-1:3306': ******
  Restoring the InnoDB cluster ...

  The InnoDB cluster was successfully restored using the partition from the instance 'icadmin@ic-1:3306'.

  WARNING: To avoid a split-brain scenario, ensure that all other members of the replicaset
  are removed or joined back to the group that was restored.

In the event that an instance is not automatically added to the cluster, for example if its settings were not persisted, use Cluster.rejoinInstance() to manually add the instance back to the cluster.

The restored cluster might not, and does not have to, consist of all of the original instances which made up the cluster. For example, if the original cluster consisted of the following five instances:

  • ic-1

  • ic-2

  • ic-3

  • ic-4

  • ic-5

and the cluster experiences a split-brain scenario, with ic-1, ic-2, and ic-3 forming one partition while ic-4 and ic-5 form another partition. If you connect to ic-1 and issue Cluster.forceQuorumUsingPartitionOf('icadmin@ic-1:3306') to restore the cluster, the resulting cluster would consist of these three instances:

  • ic-1

  • ic-2

  • ic-3

because ic-1 sees ic-2 and ic-3 as ONLINE and does not see ic-4 and ic-5.

Rebooting a Cluster from a Major Outage

If your cluster suffers from a complete outage, you can ensure it is reconfigured correctly using dba.rebootClusterFromCompleteOutage(). This operation takes the instance which MySQL Shell is currently connected to and uses its metadata to recover the cluster. In the event that a cluster's instances have completely stopped, the instances must be started and only then can the cluster be started. For example if the machine a sandbox cluster was running on has been restarted, and the instances were at ports 3310, 3320 and 3330, issue:

mysql-js> dba.startSandboxInstance(3310)
mysql-js> dba.startSandboxInstance(3320)
mysql-js> dba.startSandboxInstance(3330)
    

This ensures the sandbox instances are running. In the case of a production deployment you would have to start the instances outside of MySQL Shell. Once the instances have started, you need to connect to an instance with the GTID superset, which means the instance which had applied the most transactions before the outage. If you are unsure which instance contains the GTID superset, connect to any instance and follow the interactive messages from the dba.rebootClusterFromCompleteOutage() operation, which detects if the instance you are connected to contains the GTID superset. Reboot the cluster by issuing:

mysql-js> var cluster = dba.rebootClusterFromCompleteOutage();

The dba.rebootClusterFromCompleteOutage() operation then follows these steps to ensure the cluster is correctly reconfigured:

  • The InnoDB Cluster metadata found on the instance which MySQL Shell is currently connected to is checked to see if it contains the GTID superset, in other words the transactions applied by the cluster. If the currently connected instance does not contain the GTID superset, the operation aborts with that information. See the subsequent paragraphs for more information.

  • If the instance contains the GTID superset, the cluster is recovered based on the metadata of the instance.

  • Assuming you are running MySQL Shell in interactive mode, a wizard is run that checks which instances of the cluster are currently reachable and asks if you want to rejoin any discovered instances to the rebooted cluster.

  • Similarly, in interactive mode the wizard also detects instances which are currently not reachable and asks if you would like to remove such instances from the rebooted cluster.

If you are not using MySQL Shell's interactive mode, you can use the rejoinInstances and removeInstances options to manually configure instances which should be joined or removed during the reboot of the cluster.
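
For example, a non-interactive reboot that rejoins one reachable instance and removes one unreachable instance might look like the following sketch (the cluster name and instance addresses are illustrative):

mysql-js> var cluster = dba.rebootClusterFromCompleteOutage("myCluster", {rejoinInstances: ["ic-2:3306"], removeInstances: ["ic-3:3306"]})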

If you encounter an error such as The active session instance isn't the most updated in comparison with the ONLINE instances of the Cluster's metadata. then the instance you are connected to does not have the GTID superset of transactions applied by the cluster. In this situation, connect MySQL Shell to the instance suggested in the error message and issue dba.rebootClusterFromCompleteOutage() from that instance.

Tip

To manually detect which instance has the GTID superset rather than using the interactive wizard, check the gtid_executed variable on each instance. For example issue:

mysql-sql> SHOW VARIABLES LIKE 'gtid_executed';

The instance which has applied the largest GTID set of transactions contains the GTID superset.

If this process fails, and the cluster metadata has become badly corrupted, you might need to drop the metadata and create the cluster again from scratch. You can drop the cluster metadata using dba.dropMetadataSchema().

Warning

The dba.dropMetadataSchema() method should only be used as a last resort, when it is not possible to restore the cluster. It cannot be undone.

If you are using MySQL Router with the cluster, when you drop the metadata, all current connections are dropped and new connections are forbidden. This causes a full outage.

Rescanning a Cluster

If you make configuration changes to a cluster outside of the AdminAPI commands, for example by changing an instance's configuration manually to resolve configuration issues or after the loss of an instance, you need to update the InnoDB Cluster metadata so that it matches the current configuration of instances. In these cases, use the Cluster.rescan() operation, which enables you to update the InnoDB Cluster metadata either manually or using an interactive wizard. The Cluster.rescan() operation can detect new active instances that are not registered in the metadata and add them, or obsolete instances (no longer active) still registered in the metadata, and remove them. You can automatically update the metadata depending on the instances found by the command, or you can specify a list of instance addresses to either add to the metadata or remove from the metadata. You can also update the topology mode stored in the metadata, for example after changing from single-primary mode to multi-primary mode outside of AdminAPI.

The syntax of the command is Cluster.rescan([options]). The options dictionary supports the following:

  • interactive: boolean value used to disable or enable the wizards in the command execution. Controls whether prompts and confirmations are provided. The default value is equal to MySQL Shell wizard mode, specified by shell.options.useWizards.

  • addInstances: list with the connection data of the new active instances to add to the metadata, or auto to automatically add missing instances to the metadata. The value auto is case-insensitive.

    • Instances specified in the list are added to the metadata, without prompting for confirmation

    • In interactive mode, you are prompted to confirm the addition of newly discovered instances that are not included in the addInstances option

    • In non-interactive mode, newly discovered instances that are not included in the addInstances option are reported in the output, but you are not prompted to add them

  • removeInstances: list with the connection data of the obsolete instances to remove from the metadata, or auto to automatically remove obsolete instances from the metadata.

    • Instances specified in the list are removed from the metadata, without prompting for confirmation

    • In interactive mode, you are prompted to confirm the removal of obsolete instances that are not included in the removeInstances option

    • In non-interactive mode, obsolete instances that are not included in the removeInstances option are reported in the output but you are not prompted to remove them

  • updateTopologyMode: boolean value used to indicate if the topology mode (single-primary or multi-primary) in the metadata should be updated (true) or not (false) to match the one being used by the cluster. By default, the metadata is not updated (false).

    • If the value is true then the InnoDB Cluster metadata is compared to the current mode being used by Group Replication, and the metadata is updated if necessary. Use this option to update the metadata after making changes to the topology mode of your cluster outside of AdminAPI.

    • If the value is false then InnoDB Cluster metadata about the cluster's topology mode is not updated even if it is different from the topology used by the cluster's Group Replication group

    • If the option is not specified and the topology mode in the metadata is different from the topology used by the cluster's Group Replication group, then:

      • In interactive mode, you are prompted to confirm the update of the topology mode in the metadata

      • In non-interactive mode, if there is a difference between the topology used by the cluster's Group Replication group and the InnoDB Cluster metadata, it is reported and no changes are made to the metadata

    • When the metadata topology mode is updated to match the Group Replication mode, the auto-increment settings on all instances are updated as described at InnoDB Cluster and Auto-increment.
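
For example, to update the metadata automatically, adding any newly discovered instances and removing any obsolete ones without prompting, you could issue something like the following (the cluster variable is illustrative):

mysql-js> cluster.rescan({addInstances: "auto", removeInstances: "auto", interactive: false})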

21.2.8 Upgrading an InnoDB Cluster

This section explains how to upgrade your cluster. Much of the process of upgrading an InnoDB Cluster consists of upgrading the instances in the same way as documented at Section 18.7, “Upgrading Group Replication”. This section focuses on the additional considerations for upgrading InnoDB Cluster. Before starting an upgrade, you can use the MySQL Shell Upgrade Checker Utility to verify instances are ready for the upgrade.

From version 8.0.19, if you try to bootstrap MySQL Router against a cluster and it discovers that the metadata version is 0.0.0, this indicates that a metadata upgrade is in progress, and the bootstrap fails. Wait for the metadata upgrade to complete before you try to bootstrap again. When MySQL Router is operating normally (not bootstrapping), if it discovers the metadata version is 0.0.0 (upgrade in progress) it does not proceed with the metadata refresh it was about to begin. Instead, MySQL Router continues using the last metadata it has cached. All the existing user connections are maintained, and any new connections are routed according to the cached metadata. The metadata refresh restarts when the metadata version is no longer 0.0.0. In the regular (not bootstrapping) mode, MySQL Router works with both version 1.x.x and 2.x.x metadata, and the version can change between TTL refreshes. This ensures routing continues while the cluster is upgraded.

Although it is possible to have instances in a cluster which run multiple MySQL versions, for example version 5.7 and 8.0, such a mix is not recommended for prolonged use. For example, in a cluster using a mix of versions, if an instance running version 5.7 leaves the cluster and then MySQL Clone is used for a recovery operation, the instance running the lower version can no longer join the cluster because the BACKUP_ADMIN privilege becomes a requirement. Running a cluster with multiple versions is intended as a temporary situation to aid in migration from one version to another, and should not be relied on for long term use.

21.2.8.1 Rolling Upgrades

When upgrading the metadata schema of clusters deployed by MySQL Shell versions before 8.0.19, a rolling upgrade of existing MySQL Router instances is required. This process minimizes disruption to applications during the upgrade. The rolling upgrade process must be performed in the following order:

  1. Run the latest MySQL Shell version, connect the global session to the cluster and issue dba.upgradeMetadata(). This step ensures that any MySQL Router accounts configured for the cluster have their privileges modified to be compatible with the new version. The upgrade function stops if an outdated MySQL Router instance is detected, at which point you can stop the upgrade process in MySQL Shell to resume later.

  2. Upgrade any detected out of date MySQL Router instances to the latest version. It is recommended to use the same MySQL Router version as MySQL Shell version.

  3. Continue or restart the dba.upgradeMetadata() operation to complete the metadata upgrade.

21.2.8.2 Upgrading InnoDB Cluster Metadata

As AdminAPI evolves, some releases might require you to upgrade the metadata of existing clusters to ensure they are compatible with newer versions of MySQL Shell. For example, the addition of InnoDB ReplicaSet in version 8.0.19 means that the metadata schema has been upgraded to version 2.0. Regardless of whether you plan to use InnoDB ReplicaSet or not, to use MySQL Shell 8.0.19 or later with a cluster deployed using an earlier version of MySQL Shell, you must upgrade the metadata of your cluster.

Warning

Without upgrading the metadata you cannot use MySQL Shell 8.0.19 to change the configuration of a cluster created with earlier versions. You can only perform read operations against the cluster, such as Cluster.status(), Cluster.describe(), and Cluster.options().

The dba.upgradeMetadata() operation compares the version of the metadata schema found on the cluster MySQL Shell is currently connected to with the version of the metadata schema supported by this MySQL Shell version. If the installed metadata version is lower, an upgrade process is started. The dba.upgradeMetadata() operation then upgrades any automatically created MySQL Router users to have the correct privileges. Manually created MySQL Router users with a name not starting with mysql_router_ are not automatically upgraded. This is an important step in upgrading your cluster; only then can the MySQL Router application be upgraded. To get information on which of the MySQL Router instances registered with a cluster require the metadata upgrade, issue:

cluster.listRouters({'onlyUpgradeRequired':'true'})
{
    "clusterName": "mycluster",
    "routers": {
        "example.com::": {
            "hostname": "example.com",
            "lastCheckIn": "2019-11-26 10:10:37",
            "roPort": 6447,
            "roXPort": 64470,
            "rwPort": 6446,
            "rwXPort": 64460,
            "version": "8.0.18"
        }
    }
}
Warning

A cluster which is using the new metadata cannot be administered by earlier MySQL Shell versions, for example once you upgrade to version 8.0.19 you can no longer use version 8.0.18 or earlier to administer the cluster.

To upgrade a cluster's metadata, connect MySQL Shell's global session to your cluster and use the dba.upgradeMetadata() operation to upgrade the cluster's metadata to the new metadata. For example:

mysql-js> \connect user@example.com:3306

mysql-js> dba.upgradeMetadata()
InnoDB Cluster Metadata Upgrade

The cluster you are connected to is using an outdated metadata schema version
1.0.1 and needs to be upgraded to 2.0.0.

Without doing this upgrade, no AdminAPI calls except read only operations will
be allowed.

The grants for the MySQL Router accounts that were created automatically when
bootstrapping need to be updated to match the new metadata version's
requirements.
Updating router accounts...
NOTE: 2 router accounts have been updated.

Upgrading metadata at 'example.com:3306' from version 1.0.1 to version 2.0.0.
Creating backup of the metadata schema...
Step 1 of 1: upgrading from 1.0.1 to 2.0.0...
Removing metadata backup...
Upgrade process successfully finished, metadata schema is now on version 2.0.0

If you encounter an error related to the cluster administration user missing privileges, use the Cluster.setupAdminAccount() operation with the update option to grant the user the correct privileges. See Configuring Users for AdminAPI.
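
For example, to update the privileges of an existing administration account named icadmin (an illustrative user name), you could issue something like:

mysql-js> cluster.setupAdminAccount("icadmin@%", {update: true})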

21.2.8.3 Troubleshooting InnoDB Cluster Upgrades

This section covers troubleshooting the upgrade process.

Handling Host Name Changes

MySQL Shell uses the host value of the provided connection parameters as the target hostname used for AdminAPI operations, namely to register the instance in the metadata (for the dba.createCluster() and Cluster.addInstance() operations). However, the actual host used for the connection parameters might not match the hostname that is used or reported by Group Replication, which uses the value of the report_host system variable when it is defined (in other words it is not NULL), otherwise the value of hostname is used. Therefore, AdminAPI now follows the same logic to register the target instance in the metadata and as the default value for the group_replication_local_address variable on instances, instead of using the host value from the instance connection parameters. When the report_host variable is set to empty, Group Replication uses an empty value for the host but AdminAPI (for example in commands such as dba.checkInstanceConfiguration(), dba.configureInstance(), dba.createCluster(), and so on) reports the hostname as the value used which is inconsistent with the value reported by Group Replication. If an empty value is set for the report_host system variable, an error is generated. (Bug #28285389)

For a cluster created using a MySQL Shell version earlier than 8.0.16, an attempt to reboot the cluster from a complete outage performed using version 8.0.16 or higher results in this error. This is caused by a mismatch of the Metadata values with the report_host or hostname values reported by the instances. The workaround is to:

  1. Identify which of the instances is the seed, in other words the one with the most recent GTID set. The dba.rebootClusterFromCompleteOutage() operation detects whether the instance is a seed and the operation generates an error if the current session is not connected to the most up-to-date instance.

  2. Set the report_host system variable to the value that is stored in the Metadata schema for the target instance. This value is the hostname:port pair used in the instance definition upon cluster creation. The value can be consulted by querying the mysql_innodb_cluster_metadata.instances table.

    For example, suppose a cluster was created using the following sequence of commands:

    mysql-js> \c clusterAdmin@localhost:3306
    mysql-js> dba.createCluster("myCluster")
    

    Therefore the hostname value stored in the metadata is localhost and for that reason, report_host must be set to localhost on the seed.

  3. Reboot the cluster using only the seed instance. At the interactive prompts do not add the remaining instances to the cluster.

  4. Use Cluster.rescan() to add the other instances back to the cluster.

  5. Remove the seed instance from the cluster.

  6. Stop mysqld on the seed instance and either remove the forced report_host setting (from step 2), or replace it with the value previously stored in the Metadata.

  7. Restart the seed instance and add it back to the cluster using Cluster.addInstance().

This allows a smooth and complete upgrade of the cluster to the latest MySQL Shell version. Another possibility, depending on the use case, is to simply set the value of report_host on all cluster members to match what was registered in the Metadata schema upon cluster creation.

21.2.9 Tagging the Metadata

From version 8.0.21, a configurable tag framework is available to allow the metadata of a Cluster or ReplicaSet to be marked with additional information. Tags make it possible to associate custom key-value pairs with a Cluster, ReplicaSet, or instance. Some tags are reserved for MySQL Router usage; they enable a compatible MySQL Router to support hiding instances from applications. The following tags are reserved for this purpose:

  • _hidden instructs MySQL Router to exclude the instance from the list of possible destinations for client applications

  • _disconnect_existing_sessions_when_hidden instructs the router to disconnect existing connections from instances that are marked to be hidden

For more information, see Removing Instances from Routing.

In addition, the tags framework is user configurable. The tag namespace serves as a dictionary of custom key-value pairs that can be associated with Clusters, ReplicaSets, or their specific instances. Tag names are plain ASCII identifiers, and tag values may be any JSON value. This enables you to add your own attributes on top of the metadata.

Showing Tags

The Cluster.options() operation shows information about the tags assigned to individual cluster instances as well as to the cluster itself. For example, the InnoDB Cluster assigned to myCluster could show:

mysql-js> myCluster.options()
{
    "cluster": {
        "name": "test1",
        "tags": {
            "ic-1:3306": [
                {
                    "option": "_disconnect_existing_sessions_when_hidden",
                    "value": true
                },
                {
                    "option": "_hidden",
                    "value": false
                }
            ],
            "ic-2:3306": [],
            "ic-3:3306": [],
            "global": [
                {
                    "option": "location:",
                    "value": "US East"
                }
            ]
        }
    }
}

This cluster has a global tag named location which has the value US East, and instance ic-1 has been tagged.

Setting Tags on a Cluster Instance

You can set tags at the instance level, which enables you for example to mark an instance as not available, so that applications and router treat it as offline. Use the Cluster.setInstanceOption(instance, option, value) operation to set the value of a tag for the instance. The instance argument is a connection string to the target instance. The option argument must be a string with the format namespace:option. The value parameter is the value that should be assigned to option in the specified namespace. If the value is null, the option is removed from the specified namespace. For instances which belong to a cluster, the setInstanceOption() operation only accepts the tag namespace. Any other namespace results in an ArgumentError.

For example, to set the tag test to true on the myCluster instance ic-1, issue:

mysql-js> myCluster.setInstanceOption("icadmin@ic-1:3306", "tag:test", true);

Removing Instances from Routing

When AdminAPI and MySQL Router are working together, they support specific tags that enable you to mark instances as hidden and remove them from routing. MySQL Router then excludes such tagged instances from the routing destination candidates list. This enables you to safely take a server instance offline, so that applications and MySQL Router ignore it, for example while you perform maintenance tasks, such as server upgrade or configuration changes.

When the _hidden tag is set to true, this instructs MySQL Router to exclude the instance from the list of possible destinations for client applications. The instance remains online, but is not routed to for new incoming connections. The _disconnect_existing_sessions_when_hidden tag controls how existing connections to the instance are closed. This tag is assumed to be true, and it instructs any MySQL Routers bootstrapped against the InnoDB Cluster or InnoDB ReplicaSet to disconnect any existing connections from the instance when the _hidden tag is true. When _disconnect_existing_sessions_when_hidden is false, any existing client connections to the instance are not closed if _hidden is true.
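
For example, to keep existing client connections open when an instance is later hidden, you could set the tag to false for that instance, as sketched below (the instance address is illustrative):

mysql-js> myCluster.setInstanceOption("icadmin@ic-1:3306", "tag:_disconnect_existing_sessions_when_hidden", false);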

Warning

When the use_gr_notifications MySQL Router option is enabled, it defaults to 60 seconds. This means that when you set tags, it takes up to 60 seconds for MySQL Router to detect the change. To reduce the waiting time, change use_gr_notifications to a lower value.

For example, suppose you want to remove the ic-1 instance, which is part of an InnoDB Cluster assigned to myCluster, from the routing destinations. Use the setInstanceOption() operation to enable the _hidden and _disconnect_existing_sessions_when_hidden tags:

mysql-js> myCluster.setInstanceOption("icadmin@ic-1:3306", "tag:_hidden", true);

You can verify the change in the metadata by checking the options. For example the change made to ic-1 would show in the options as:

mysql-js> myCluster.options()
{
    "cluster": {
        "name": "test1",
        "tags": {
            "ic-1:3306": [
                {
                    "option": "_disconnect_existing_sessions_when_hidden",
                    "value": true
                },
                {
                    "option": "_hidden",
                    "value": true
                }
            ],
            "ic-2:3306": [],
            "ic-3:3306": [],
            "global": []
        }
    }
}

You can verify that MySQL Router has detected the change in the metadata by viewing the log file. A MySQL Router that has detected the change made to ic-1 would show a change such as:

2020-07-03 16:32:16 metadata_cache INFO [7fa9d164c700] Potential changes detected in cluster 'testCluster' after metadata refresh
2020-07-03 16:32:16 metadata_cache INFO [7fa9d164c700] view_id = 4, (3 members)
2020-07-03 16:32:16 metadata_cache INFO [7fa9d164c700]     ic-1:3306 / 33060 - mode=RW hidden=yes disconnect_when_hidden=yes
2020-07-03 16:32:16 metadata_cache INFO [7fa9d164c700]     ic-2:3306 / 33060 - mode=RO
2020-07-03 16:32:16 metadata_cache INFO [7fa9d164c700]     ic-3:3306 / 33060 - mode=RO
2020-07-03 16:32:16 routing INFO [7fa9d164c700] Routing routing:testCluster_x_ro listening on 64470 got request to disconnect invalid connections: metadata change
2020-07-03 16:32:16 routing INFO [7fa9d164c700] Routing routing:testCluster_x_rw listening on 64460 got request to disconnect invalid connections: metadata change
2020-07-03 16:32:16 routing INFO [7fa9d164c700] Routing routing:testCluster_rw listening on 6446 got request to disconnect invalid connections: metadata change
2020-07-03 16:32:16 routing INFO [7fa9d164c700] Routing routing:testCluster_ro listening on 6447 got request to disconnect invalid connections: metadata change

To bring the instance back online, use the setInstanceOption() operation to set the _hidden tag to false. MySQL Router then automatically adds the instance back to the routing destinations, and it becomes available to applications again. For example:

mysql-js> myCluster.setInstanceOption("icadmin@ic-1:3306", "tag:_hidden", false);

Verify the change in the metadata by checking the options again:

mysql-js> myCluster.options()
{
    "cluster": {
        "name": "test1",
        "tags": {
            "ic-1:3306": [
                {
                    "option": "_disconnect_existing_sessions_when_hidden",
                    "value": true
                },
                {
                    "option": "_hidden",
                    "value": false
                }
            ],
            "ic-2:3306": [],
            "ic-3:3306": [],
            "global": []
        }
    }
}

Setting Tags on a Cluster

The Cluster.setOption(option, value) operation enables you to change the value of a namespace option for the whole cluster. The option argument must be a string with the format namespace:option. The value parameter is the value to be assigned to option in the specified namespace. If the value is null, the option is removed from the specified namespace. For Clusters, the setOption() operation accepts the tag namespace. Any other namespace results in an ArgumentError.

There is no requirement for all the instances to be online, only that the cluster has quorum. To tag the InnoDB Cluster assigned to myCluster with the location tag set to US East, issue:

mysql-js> myCluster.setOption("tag:location", "US East")
mysql-js> myCluster.options()
{
    "cluster": {
        "name": "test1",
        "tags": {
            "ic-1:3306": [],
            "ic-2:3306": [],
            "ic-3:3306": [],
            "global": [
                {
                    "option": "location:",
                    "value": "US East"
                }
            ]
        }
    }
}

User Defined Tagging

AdminAPI supports the tag namespace, where you can store information in key-value pairs associated with a given Cluster, ReplicaSet, or instance. The options under the tag namespace are not constrained, meaning you can tag with whatever information you choose, as long as the tag name is a valid identifier: an underscore or a letter, followed by alphanumeric and _ characters.

The namespace option is a colon separated string with the format namespace:option, where namespace is the name of the namespace and option is the actual option name. You can set and remove tags at the instance level, or at the Cluster or ReplicaSet level.

A tag name can be any string that starts with a letter or underscore, optionally followed by alphanumeric and _ characters, that is, it must match the expression ^[a-zA-Z_][0-9a-zA-Z_]*. Only built-in tags are allowed to start with the underscore (_) character.

How you use custom tags is up to you. You could set a custom tag on a Cluster to mark the region which it is operating in. For example, you could set a custom tag named location, with a value of EMEA on the cluster.
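
For example, to set such an illustrative location tag on the cluster and a custom owner tag on one of its instances, you could issue the following (the tag names, values, and instance address are illustrative):

mysql-js> myCluster.setOption("tag:location", "EMEA")
mysql-js> myCluster.setInstanceOption("icadmin@ic-2:3306", "tag:owner", "dba-team")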

21.2.10 InnoDB Cluster Tips

This section describes some information which is good to know when using InnoDB Cluster.

Super Read-only and Instances

Whenever Group Replication stops, the super_read_only variable is set to ON to ensure no writes are made to the instance. When you try to use such an instance with the following AdminAPI commands you are given the choice to set super_read_only=OFF on the instance:

  • dba.configureInstance()

  • dba.configureLocalInstance()

  • dba.dropMetadataSchema()

When AdminAPI encounters an instance which has super_read_only=ON, in interactive mode you are given the choice to set super_read_only=OFF. For example:

mysql-js> var myCluster = dba.dropMetadataSchema()
Are you sure you want to remove the Metadata? [y/N]: y
The MySQL instance at 'localhost:3310' currently has the super_read_only system
variable set to protect it from inadvertent updates from applications. You must
first unset it to be able to perform any changes to this instance.
For more information see:
https://dev.mysql.com/doc/refman/en/server-system-variables.html#sysvar_super_read_only.

Do you want to disable super_read_only and continue? [y/N]: y

Metadata Schema successfully removed.

The number of current active sessions to the instance is shown. You must ensure that no applications can write to the instance inadvertently. By answering y you confirm that AdminAPI can write to the instance. If there is more than one open session to the instance listed, exercise caution before permitting AdminAPI to set super_read_only=OFF.

Configuring Users for AdminAPI

Working with instances that belong to an InnoDB Cluster or InnoDB ReplicaSet requires that you connect to the instances with a user that has the correct privileges to administer the instances. AdminAPI provides the following ways to administer suitable users:

  • In version 8.0.20 and later, use the setupAdminAccount(user) operation, which creates or upgrades a MySQL user account with the necessary privileges to administer an InnoDB Cluster or InnoDB ReplicaSet.

  • In versions prior to 8.0.20, the preferred method to create users for administration is using the clusterAdmin option with the dba.configureInstance() operation.

For more information, see Creating User Accounts for Administration. If you want to manually configure an administration user, that user requires the following privileges, all with GRANT OPTION:

Note

This list of privileges is based on the current version of MySQL Shell. The privileges are subject to change between releases. Therefore the recommended way to administer accounts is using the operations described at Creating User Accounts for Administration

If only read operations are needed, for example to create a user for monitoring purposes, an account with more restricted privileges can be used. To give the user your_user the privileges needed to monitor InnoDB Cluster issue:

GRANT SELECT ON mysql_innodb_cluster_metadata.* TO your_user@'%';
GRANT SELECT ON performance_schema.global_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_configuration TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status_by_coordinator TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status_by_worker TO your_user@'%';
GRANT SELECT ON performance_schema.replication_connection_configuration TO your_user@'%';
GRANT SELECT ON performance_schema.replication_connection_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_group_member_stats TO your_user@'%';
GRANT SELECT ON performance_schema.replication_group_members TO your_user@'%';
GRANT SELECT ON performance_schema.threads TO your_user@'%' WITH GRANT OPTION;

For more information, see Section 13.7.1, “Account Management Statements”.

InnoDB Cluster and Auto-increment

When you are using an instance as part of an InnoDB Cluster, the auto_increment_increment and auto_increment_offset variables are configured to avoid the possibility of auto increment collisions for multi-primary clusters up to a size of 9 (the maximum supported size of a Group Replication group). The logic used to configure these variables can be summarized as:

  • If the group is running in single-primary mode, then set auto_increment_increment to 1 and auto_increment_offset to 2.

  • If the group is running in multi-primary mode, then when the cluster has 7 instances or less, set auto_increment_increment to 7 and auto_increment_offset to 1 + server_id % 7. If a multi-primary cluster has 8 or more instances, set auto_increment_increment to the number of instances and auto_increment_offset to 1 + server_id % the number of instances.

InnoDB Cluster and Binary Log Purging

In MySQL 8, the binary log is automatically purged (as defined by binlog_expire_logs_seconds). This means that a cluster which has been running for a longer time than binlog_expire_logs_seconds could eventually not contain an instance with a complete binary log that contains all of the transactions applied by the instances. This could result in instances needing to be provisioned automatically, for example using MySQL Enterprise Backup, before they could join the cluster. Instances running 8.0.17 and later support the MySQL Clone plugin, which resolves this issue by providing an automatic provisioning solution which does not rely on incremental recovery, see Section 21.2.2.2, “Using MySQL Clone with InnoDB Cluster”. Instances running a version earlier than 8.0.17 only support incremental recovery, and the result is that, depending on which version of MySQL the instance is running, instances might have to be provisioned automatically. Otherwise operations which rely on distributed recovery, such as Cluster.addInstance() and so on might fail.

On instances running earlier versions of MySQL the following rules are used for binary log purging:

  • Instances running a version earlier than 8.0.1 have no automatic binary log purging because the default value of expire_logs_days is 0.

  • Instances running a version later than 8.0.1 but earlier than 8.0.4 purge the binary log after 30 days because the default value of expire_logs_days is 30.

  • Instances running a version later than 8.0.10 purge the binary log after 30 days because the default value of binlog_expire_logs_seconds is 2592000 and the default value of expire_logs_days is 0.

Thus, depending on how long the cluster has been running binary logs could have been purged and you might have to provision instances manually. Similarly, if you manually purged binary logs you could encounter the same situation. Therefore you are strongly advised to upgrade to a version of MySQL later than 8.0.17 to take full advantage of the automatic provisioning provided by MySQL Clone for distributed recovery, and to minimize downtime while provisioning instances for your InnoDB Cluster.

Resetting Recovery Account Passwords

From version 8.0.18, you can use the Cluster.resetRecoveryAccountsPassword() operation to reset the passwords for the internal recovery accounts created by InnoDB Cluster, for example to follow a custom password lifetime policy. Use the Cluster.resetRecoveryAccountsPassword() operation to reset the passwords for all internal recovery accounts used by the cluster. The operation sets a new random password for the internal recovery account on each instance which is online. If an instance cannot be reached, the operation fails. You can use the force option to ignore such instances, but this is not recommended, and it is safer to bring the instance back online before using this operation. This operation only applies to the passwords created by InnoDB cluster and cannot be used to update manually created passwords.
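
For example, assuming the Cluster object is assigned to the cluster variable and all instances are online, issue:

mysql-js> cluster.resetRecoveryAccountsPassword()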

Note

The user which executes this operation must have all the required administrative privileges, in particular CREATE USER, in order to ensure that the password of recovery accounts can be changed regardless of the password verification-required policy, in other words, independent of whether the password_require_current system variable is enabled or not.

21.2.11 Known Limitations

This section describes the known limitations of InnoDB Cluster. As InnoDB Cluster uses Group Replication, you should also be aware of its limitations, see Section 18.9.2, “Group Replication Limitations”.

  • If a session type is not specified when creating the global session, MySQL Shell provides automatic protocol detection which attempts to first create a NodeSession and if that fails it tries to create a ClassicSession. With an InnoDB cluster that consists of three server instances, where there is one read-write port and two read-only ports, this can cause MySQL Shell to only connect to one of the read-only instances. Therefore it is recommended to always specify the session type when creating the global session.

  • When adding non-sandbox server instances (instances which you have configured manually rather than using dba.deploySandboxInstance()) to a cluster, MySQL Shell is not able to persist any configuration changes in the instance's configuration file. This leads to one or both of the following scenarios:

    1. The Group Replication configuration is not persisted in the instance's configuration file and upon restart the instance does not rejoin the cluster.

    2. The instance is not valid for cluster usage. Although the instance can be verified with dba.checkInstanceConfiguration(), and MySQL Shell makes the required configuration changes in order to make the instance ready for cluster usage, those changes are not persisted in the configuration file and so are lost once a restart happens.

    If only the first scenario happens, the instance does not rejoin the cluster after a restart.

    If the second scenario also happens, and you observe that the instance did not rejoin the cluster after a restart, you cannot use the recommended dba.rebootClusterFromCompleteOutage() in this situation to get the cluster back online. This is because the instance loses any configuration changes made by MySQL Shell, and because they were not persisted, the instance reverts to the previous state before being configured for the cluster. This causes Group Replication to stop responding, and eventually the command times out.

    To avoid this problem it is strongly recommended to use dba.configureInstance() before adding instances to a cluster in order to persist the configuration changes.

  • The use of the --defaults-extra-file option to specify an option file is not supported by InnoDB Cluster server instances. InnoDB Cluster only supports a single option file on instances and no extra option files are supported. Therefore for any operation working with the instance's option file the main one should be specified. If you want to use multiple option files you have to configure the files manually and make sure they are updated correctly considering the precedence rules of the use of multiple option files and ensuring that the desired settings are not incorrectly overwritten by options in an extra unrecognized option file.

  • Attempting to use instances with a host name that resolves to an IP address which does not match a real network interface fails with an error that This instance reports its own address as the hostname, because this is not supported by the Group Replication communication layer. On Debian based instances this means instances cannot use addresses such as user@localhost because localhost resolves to a non-existent IP (such as 127.0.1.1). This impacts using a sandbox deployment, which usually uses local instances on a single machine.

    A workaround is to configure the report_host system variable on each instance to use the actual IP address of your machine. Retrieve the IP of your machine and add report_host=IP of your machine to the my.cnf file of each instance. You then need to restart the instances for the change to take effect.
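
    For example, assuming the machine's actual IP address is 198.51.100.20 (an illustrative value), each instance's my.cnf could contain:

    [mysqld]
    report_host=198.51.100.20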

  • When executing dba.createCluster() or adding an instance to an existing InnoDB Cluster by running Cluster.addInstance(), the following errors are logged to MySQL error log:

    2020-02-10T10:53:43.727246Z 12 [ERROR] [MY-011685] [Repl] Plugin
    group_replication reported: 'The group name option is mandatory'
    2020-02-10T10:53:43.727292Z 12 [ERROR] [MY-011660] [Repl] Plugin
    group_replication reported: 'Unable to start Group Replication on boot'
    

    These messages are harmless and relate to the way AdminAPI starts Group Replication.

  • When using a sandbox deployment, each sandbox instance uses a copy of the mysqld binary found in the $PATH in the local mysql-sandboxes directory. If the version of mysqld changes, for example after an upgrade, sandboxes based on the previous version fail to start. This is because the sandbox binary is outdated compared to the dependencies found under the basedir. Sandbox instances are not designed for production, therefore they are considered transient and are not supported for upgrade.

    A workaround for this issue is to manually copy the upgraded mysqld binary into the bin directory of each sandbox. Then start the sandbox by issuing dba.startSandboxInstance(). The operation fails with a timeout, and the error log contains:

    2020-03-26T11:43:12.969131Z 5 [System] [MY-013381] [Server] Server upgrade
    from '80019' to '80020' started.
    2020-03-26T11:44:03.543082Z 5 [System] [MY-013381] [Server] Server upgrade
    from '80019' to '80020' completed.
    

    Although the operation seems to fail with a timeout, the sandbox has started successfully.

21.3 MySQL InnoDB ReplicaSet

This section documents InnoDB ReplicaSet, added in version 8.0.19.

21.3.1 Introducing InnoDB ReplicaSet

The AdminAPI includes support for InnoDB ReplicaSet, which enables you to administer a set of MySQL instances running asynchronous GTID-based replication in a similar way to InnoDB Cluster. An InnoDB ReplicaSet consists of a single primary and multiple secondaries (traditionally referred to as the MySQL replication source and replicas). You administer your ReplicaSets using a ReplicaSet object and the AdminAPI operations, for example to check the status of the InnoDB ReplicaSet, and to manually fail over to a new primary in the event of a failure. Similar to InnoDB Cluster, MySQL Router supports bootstrapping against InnoDB ReplicaSet, which means you can automatically configure MySQL Router to use your InnoDB ReplicaSet without having to manually configure it. This makes InnoDB ReplicaSet a quick and easy way to get MySQL replication and MySQL Router up and running. It is well suited to scaling out reads, and provides manual failover capabilities in use cases that do not require the high availability offered by InnoDB Cluster.

In addition to deploying an InnoDB ReplicaSet using AdminAPI, you can adopt an existing replication setup. AdminAPI configures the InnoDB ReplicaSet based on the topology of the replication setup. Once the replication setup has been adopted, you administer it in the same way as an InnoDB ReplicaSet deployed from scratch. This enables you to take advantage of AdminAPI and MySQL Router without the need to create a new ReplicaSet. For more information see Section 21.3.4, “Adopting an Existing Replication Set Up”.

InnoDB ReplicaSet Limitations

An InnoDB ReplicaSet has several limitations compared to an InnoDB Cluster and thus, it is recommended that you deploy InnoDB Cluster wherever possible. Generally, an InnoDB ReplicaSet on its own does not provide high availability. Among the limitations of InnoDB ReplicaSet are:

  • No automatic failover. In events where the primary becomes unavailable, a failover needs to be triggered manually using AdminAPI before any changes are possible again. However, secondary instances remain available for reads.

  • No protection from partial data loss due to an unexpected halt or unavailability. Transactions that have not yet been applied by the time of the halt could become lost.

  • No protection against inconsistencies after an unexpected exit or unavailability. If a failover promotes a secondary while the former primary is still available (for example due to a network partition), inconsistencies could be introduced because of the split-brain.

21.3.2 Deploying InnoDB ReplicaSet

You deploy InnoDB ReplicaSet in a similar way to InnoDB Cluster. First you configure some MySQL server instances (a minimum of two instances, see Section 21.1, “MySQL AdminAPI”). One functions as the primary, in this tutorial rs-1; the other instance functions as the secondary, in this tutorial rs-2, which replicates the transactions applied by the primary. This is the equivalent of the source and replica known from asynchronous MySQL replication. Then you connect to one of the instances using MySQL Shell, and create a ReplicaSet. Once the ReplicaSet has been created, you can add instances to it.

InnoDB ReplicaSet is compatible with sandbox instances, which you can use to deploy locally, for example for testing purposes. See Deploying Sandbox Instances for instructions. However, this tutorial assumes you are deploying a production InnoDB ReplicaSet, where each instance is running on a different host.

InnoDB ReplicaSet Prerequisites

To use InnoDB ReplicaSet you should be aware of the following prerequisites:

  • Only instances running MySQL version 8.0 and later are supported

  • Only GTID-based replication is supported, binary log file position replication is not compatible with InnoDB ReplicaSet

  • Only Row Based Replication (RBR) is supported, Statement Based Replication (SBR) is unsupported

  • Replication filters are not supported

  • Unmanaged replication channels are not allowed on any instance

  • A ReplicaSet consists of at most one primary instance, and supports one or more secondaries. Although there is no limit to the number of secondaries you can add to a ReplicaSet, each MySQL Router connected to a ReplicaSet has to monitor each instance. Therefore, the more instances that are added to a ReplicaSet, the more monitoring has to be done.

  • The ReplicaSet must be entirely managed by MySQL Shell. For example, the replication account is created and managed by MySQL Shell. Making configuration changes to the instance outside of MySQL Shell, for example using SQL statements directly to change the primary, is not supported. Always use MySQL Shell to work with InnoDB ReplicaSet.

AdminAPI and InnoDB ReplicaSet enable you to work with MySQL replication without a deep understanding of the underlying concepts. However, for background information see Chapter 17, Replication.

Configuring InnoDB ReplicaSet Instances

Use dba.configureReplicaSetInstance(instance) to configure each instance you want to use in your replica set. MySQL Shell can either connect to an instance and then configure it, or you can pass in instance to configure a specific remote instance. To use an instance in a ReplicaSet, it must support persisting settings. See Persisting Settings.

When you connect to the instance for administration tasks you require a user with suitable privileges. The preferred method to create users to administer a ReplicaSet is using the setupAdminAccount() operation. See Creating User Accounts for Administration. Alternatively, the dba.configureReplicaSetInstance() operation can optionally create an administrator account, if the clusterAdmin option is provided. The account is created with the correct set of privileges required to manage InnoDB ReplicaSet.

Tip

The administrator account must have the same user name and password across all instances of the same cluster or replica set.

To configure the instance at rs-1:3306, with a cluster administrator named rsadmin issue:

mysql-js> dba.configureReplicaSetInstance('root@rs-1:3306', {clusterAdmin: "'rsadmin'@'rs-1%'"});

The interactive prompt requests the password required by the specified user. To configure the instance MySQL Shell is currently connected to, you can specify an empty instance definition. For example, issue:

mysql-js> dba.configureReplicaSetInstance('', {clusterAdmin: "'rsadmin'@'rs-1%'"});

The interactive prompt requests the password required by the specified user. This operation checks that the instance MySQL Shell is currently connected to is valid for use in an InnoDB ReplicaSet. Settings that are not compatible with InnoDB ReplicaSet are reconfigured where possible. The cluster administrator account is created with the privileges required for InnoDB ReplicaSet.

Creating an InnoDB ReplicaSet

Once you have configured your instances, connect to an instance and use dba.createReplicaSet() to create a managed ReplicaSet that uses MySQL asynchronous replication, as opposed to MySQL Group Replication used by InnoDB Cluster. The MySQL instance which MySQL Shell is currently connected to is used as the initial primary of the ReplicaSet.

The dba.createReplicaSet() operation performs several checks to ensure that the instance state and configuration are compatible with a managed ReplicaSet, and if they are, a metadata schema is initialized on the instance. If you want to check the operation without actually making any changes to the instances, use the dryRun option, which checks and shows what actions MySQL Shell would take to create the ReplicaSet. If the ReplicaSet is created successfully, a ReplicaSet object is returned, so it is best practice to assign the returned ReplicaSet to a variable. This enables you to work with the ReplicaSet, for example by calling the ReplicaSet.status() operation. To create a ReplicaSet named example on instance rs-1 and assign it to the rs variable, issue:

mysql-js> \connect root@rs-1:3306
...
mysql-js> var rs = dba.createReplicaSet("example")
A new replicaset with instance 'rs-1:3306' will be created.

* Checking MySQL instance at rs-1:3306

This instance reports its own address as rs-1:3306
rs-1:3306: Instance configuration is suitable.

* Updating metadata...

ReplicaSet object successfully created for rs-1:3306.
Use rs.add_instance() to add more asynchronously replicated instances to this replicaset
and rs.status() to check its status.

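If you want to preview what dba.createReplicaSet() would do without making any changes, add the dryRun option described above. A minimal sketch, assuming the same connection:

mysql-js> dba.createReplicaSet("example", {dryRun: true})
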
To verify that the operation was successful, work with the returned ReplicaSet object, which provides, for example, the ReplicaSet.status() operation to display information about the ReplicaSet. We already assigned the returned ReplicaSet to the variable rs, so issue:

mysql-js> rs.status()
{
    "replicaSet": {
        "name": "example",
        "primary": "rs-1:3306",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "rs-1:3306": {
                "address": "rs-1:3306",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}

This output shows that the ReplicaSet named example has been created, and that the primary is rs-1. Currently there is only one instance, and the next task is to add more instances to the ReplicaSet.

21.3.3 Adding Instances to a ReplicaSet

When you have created a ReplicaSet you can use the ReplicaSet.addInstance() operation to add an instance as a read-only secondary replica of the current primary of the ReplicaSet. The primary of the ReplicaSet must be reachable and available during this operation. MySQL Replication is configured between the added instance and the primary, using an automatically created MySQL account with a random password. Before the instance can be an operational secondary, it must be in synchrony with the primary. This process is called recovery, and InnoDB ReplicaSet supports different methods which you configure with the recoveryMethod option.

For an instance to be able to join a ReplicaSet, various prerequisites must be satisfied. They are automatically checked by ReplicaSet.addInstance(), and the operation fails if any issues are found. Use dba.configureReplicaSetInstance() to validate and configure binary log and replication related options before adding an instance. MySQL Shell connects to the target instance using the same user name and password used to obtain the ReplicaSet handle object. All instances of the ReplicaSet are expected to have the same administrator account with the same grants and passwords. A custom administrator account with the required grants can be created while an instance is configured with dba.configureReplicaSetInstance(). See Configuring InnoDB ReplicaSet Instances.

Recovery Methods for InnoDB ReplicaSet

When new instances are added to an InnoDB ReplicaSet, they need to be provisioned with the data the ReplicaSet already contains. This can be done automatically using one of the following methods:

  • MySQL Clone, which takes a snapshot from an online instance and then replaces any data on the new instance with the snapshot. MySQL Clone is well suited for joining a new blank instance to an InnoDB ReplicaSet. It does not rely on there being a complete binary log of all transactions applied by the InnoDB ReplicaSet.

    Warning

    All previous data on the instance being added is destroyed during a clone operation. All MySQL settings not stored in tables are however maintained.

  • Incremental recovery, which relies on MySQL Replication to apply all missing transactions on the new instance. If the number of transactions missing on the new instance is small, this can be the fastest method. However, this method is only usable if at least one online instance in the InnoDB ReplicaSet has a complete binary log containing the entire transaction history of the InnoDB ReplicaSet. It cannot be used if the binary logs have been purged from all members, or if the binary log was only enabled after databases already existed on the instance. If there is a very large number of transactions to apply, there could be a long delay before the instance can join the InnoDB ReplicaSet.

When an instance is joining a ReplicaSet, recovery is used in much the same way that it is in InnoDB Cluster. MySQL Shell attempts to automatically select a suitable recovery method. If it is not possible to choose a method safely, MySQL Shell prompts for what to use. For more information, see Section 21.2.2.2, “Using MySQL Clone with InnoDB Cluster”. This section covers the differences when adding instances to a ReplicaSet.

Adding Instances to a ReplicaSet

Use the ReplicaSet.addInstance(instance) operation to add secondary instances to the ReplicaSet. You specify the instance as a URI-like connection string. The user you specify must have the privileges required and must be the same on all instances in the ReplicaSet, see Configuring InnoDB ReplicaSet Instances.

For example, to add the instance at rs-2 with user rsadmin, issue:

mysql-js> rs.addInstance('rsadmin@rs-2')

Adding instance to the replicaset...

* Performing validation checks

This instance reports its own address as rsadmin@rs-2
rsadmin@rs-2: Instance configuration is suitable.

* Checking async replication topology...

* Checking transaction state of the instance...

NOTE: The target instance 'rsadmin@rs-2' has not been pre-provisioned (GTID set
is empty). The Shell is unable to decide whether replication can completely
recover its state.  The safest and most convenient way to provision a new
instance is through automatic clone provisioning, which will completely
overwrite the state of 'rsadmin@rs-2' with a physical snapshot from an existing
replicaset member. To use this method by default, set the 'recoveryMethod'
option to 'clone'.

WARNING: It should be safe to rely on replication to incrementally recover the
state of the new instance if you are sure all updates ever executed in the
replicaset were done with GTIDs enabled, there are no purged transactions and
the new instance contains the same GTID set as the replicaset or a subset of it.
To use this method by default, set the 'recoveryMethod' option to 'incremental'.
Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):

In this case we did not specify the recovery method, so the operation advises you on how best to proceed. In this example we choose the Clone option because there are no existing transactions on the instance joining the ReplicaSet, so there is no risk of deleting data from the joining instance.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
* Updating topology
Waiting for clone process of the new member to complete. Press ^C to abort the operation.
* Waiting for clone to finish...
NOTE: rsadmin@rs-2 is being cloned from rsadmin@rs-1
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY  ############################################################  100%  Completed
PAGE COPY  ############################################################  100%  Completed
REDO COPY  ############################################################  100%  Completed
** Stage RECOVERY: \
NOTE: rsadmin@rs-2 is shutting down...

* Waiting for server restart... ready
* rsadmin@rs-2 has restarted, waiting for clone to finish...
* Clone process has finished: 59.63 MB transferred in about 1 second (~1.00 B/s)

** Configuring rsadmin@rs-2 to replicate from rsadmin@rs-1
** Waiting for new instance to synchronize with PRIMARY...

The instance 'rsadmin@rs-2' was added to the replicaset and is replicating from rsadmin@rs-1.

Assuming the instance is valid for InnoDB ReplicaSet usage, recovery proceeds. In this case the newly joining instance uses MySQL Clone to copy a snapshot of the data from the primary, and then joins the ReplicaSet as an online instance. To verify, use the ReplicaSet.status() operation:

mysql-js> rs.status()
{
    "replicaSet": {
        "name": "example",
        "primary": "rs-1:3306",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "rs-1:3306": {
                "address": "rs-1:3306",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            },
            "rs-2:3306": {
                "address": "rs-2:3306",
                "instanceRole": "SECONDARY",
                "mode": "R/O",
                "replication": {
                    "applierStatus": "APPLIED_ALL",
                    "applierThreadState": "Replica has read all relay log; waiting for more updates",
                    "receiverStatus": "ON",
                    "receiverThreadState": "Waiting for source to send event",
                    "replicationLag": null
                },
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}

This output shows that the ReplicaSet named example now consists of two MySQL instances, and that the primary is rs-1. Currently there is one secondary instance at rs-2, which is a replica of the primary. The ReplicaSet is online, which means that the primary and secondary are in synchrony. At this point the ReplicaSet is ready to process transactions.

To override MySQL Shell's interactive attempt to choose the most suitable recovery method, use the recoveryMethod option to configure how the instance recovers the data required to join the ReplicaSet. For more information, see Section 21.2.2.2, “Using MySQL Clone with InnoDB Cluster”.
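
For example, a minimal sketch of adding a further instance non-interactively with clone-based provisioning; the host rs-3 is illustrative and is assumed to have already been configured with dba.configureReplicaSetInstance():

mysql-js> rs.addInstance('rsadmin@rs-3:3306', {recoveryMethod: 'clone'})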

21.3.4 Adopting an Existing Replication Set Up

As an alternative to creating a new InnoDB ReplicaSet, you can also adopt an existing replication setup using the adoptFromAR option with dba.createReplicaSet(). The replication setup is scanned, and if it is compatible with the InnoDB ReplicaSet Prerequisites, AdminAPI creates the necessary metadata. Once the replication setup has been adopted, you can only use AdminAPI to administer the InnoDB ReplicaSet.

To convert an existing replication setup to an InnoDB ReplicaSet, connect to the primary, also referred to as the source. The replication topology is automatically scanned and validated, starting from the instance that MySQL Shell's global session is connected to. The configuration of all instances is checked during adoption to ensure they are compatible with InnoDB ReplicaSet usage. All replication channels must be active, and their transaction sets, as verified through GTID sets, must be consistent. Instances are assumed to have the same state or to be able to converge. All instances that are part of the topology are automatically added to the ReplicaSet. The only change made to an adopted ReplicaSet by this operation is the creation of the metadata schema. Existing replication channels are not changed during adoption, although they could be changed during subsequent primary switch operations.

For example, to adopt a replication topology consisting of the MySQL server instances on example1 and example2 to an InnoDB ReplicaSet, connect to the primary at example1 and issue:

mysql-js> rs = dba.createReplicaSet('testadopt', {'adoptFromAR':1})
A new replicaset with the topology visible from 'example1:3306' will be created.

* Scanning replication topology...
** Scanning state of instance example1:3306
** Scanning state of instance example2:3306

* Discovering async replication topology starting with example1:3306
Discovered topology:
- example1:3306: uuid=00371d66-3c45-11ea-804b-080027337932 read_only=no
- example2:3306: uuid=59e4f26e-3c3c-11ea-8b65-080027337932 read_only=no
    - replicates from example1:3306
	source="localhost:3310" channel= status=ON receiver=ON applier=ON

* Checking configuration of discovered instances...

This instance reports its own address as example1:3306
example1:3306: Instance configuration is suitable.

This instance reports its own address as example2:3306
example2:3306: Instance configuration is suitable.

* Checking discovered replication topology...
example1:3306 detected as the PRIMARY.
Replication state of example2:3306 is OK.

Validations completed successfully.

* Updating metadata...

ReplicaSet object successfully created for example1:3306.
Use rs.add_instance() to add more asynchronously replicated instances to
this replicaset and rs.status() to check its status.

Once the InnoDB ReplicaSet has been adopted, you can use it in the same way that you would use a ReplicaSet which was created from scratch. From this point you must administer the InnoDB ReplicaSet using only AdminAPI.

21.3.5 Working with InnoDB ReplicaSet

You work with an InnoDB ReplicaSet in much the same way as you would work with an InnoDB Cluster. For example as seen in Adding Instances to a ReplicaSet, you assign a ReplicaSet object to a variable and call operations that administer the ReplicaSet, such as ReplicaSet.addInstance() to add instances, which is the equivalent of Cluster.addInstance() in InnoDB Cluster. Thus, much of the documentation at Section 21.2.5, “Working with InnoDB Cluster” also applies to InnoDB ReplicaSet. The following operations are supported by ReplicaSet objects:

  • You can get online help for ReplicaSet objects and the AdminAPI using \help ReplicaSet or ReplicaSet.help(), and \help dba or dba.help(). See Using AdminAPI.

  • You can quickly check the name of a ReplicaSet object using either the name property or the ReplicaSet.getName() operation. For example, the following are equivalent:

    mysql-js> rs.name
    example
    mysql-js> rs.getName()
    example
    
  • You check information about a ReplicaSet using the ReplicaSet.status() operation, which supports the extended option to get different levels of detail. For example:

    • the default for extended is 0, a regular level of detail. Only basic information about the status of the instance and replication is included, in addition to non-default or unexpected replication settings and status.

    • setting extended to 1 includes Metadata Version, server UUID and the raw information used to derive the status of the instance, size of the applier queue, value of system variables that protect against unexpected writes and so on.

    • setting extended to 2 includes important replication related configuration settings, such as encrypted connections, worker threads, replication delay and heartbeat delay.

    See Checking a cluster's Status with Cluster.status().

  • You change the instances being used for a ReplicaSet using the ReplicaSet.addInstance() and ReplicaSet.removeInstance() operations. See Adding Instances to a ReplicaSet, and Removing Instances from the InnoDB Cluster.

  • Use ReplicaSet.rejoinInstance() to add an instance that was removed back to a ReplicaSet, for example after a failover (see the sketch following this list).

  • Use the ReplicaSet.setPrimaryInstance() operation to safely perform a change of the primary of a ReplicaSet to another instance. See Planned Changes of the ReplicaSet Primary.

  • Use the ReplicaSet.forcePrimaryInstance() operation to perform a forced failover of the primary. See Forcing the Primary Instance in a ReplicaSet.

  • You work with the MySQL Router instances which have been bootstrapped against a ReplicaSet in exactly the same way as with InnoDB Cluster. See Working with a Cluster's Routers for information on ReplicaSet.listRouters() and ReplicaSet.removeRouterMetadata(). For specific information on using MySQL Router with InnoDB ReplicaSet see Using ReplicaSets with MySQL Router.

For more information, see the linked InnoDB Cluster sections.
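
For example, a minimal sketch of rejoining a previously removed instance to the ReplicaSet referenced by the variable rs; the instance address is illustrative:

mysql-js> rs.rejoinInstance('rsadmin@rs-2:3306')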

The following operations are specific to InnoDB ReplicaSet and can only be called against a ReplicaSet object:

Planned Changes of the ReplicaSet Primary

Use the ReplicaSet.setPrimaryInstance() operation to safely perform a change of the primary of a ReplicaSet to another instance. The current primary is demoted to a secondary and made read-only, while the promoted instance becomes the new primary and is made read-write. All other secondary instances are updated to replicate from the new primary. MySQL Router instances which have been bootstrapped against the ReplicaSet automatically start redirecting read-write clients to the new primary.

For a safe change of the primary to be possible, all replica set instances must be reachable by MySQL Shell and have consistent GTID_EXECUTED sets. If the primary is not available, and there is no way to restore it, a forced failover might be the only option instead, see Forcing the Primary Instance in a ReplicaSet.

During a change of primary, the promoted instance is synchronized with the old primary, ensuring that all transactions present on the primary are applied before the topology change is committed. If this synchronization step takes too long or is not possible on any of the secondary instances, the operation is aborted. In such a situation, these problematic secondary instances must be either repaired or removed from the ReplicaSet for the change of primary to be possible.
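
For example, a minimal sketch of promoting the instance at rs-2 to be the new primary of the ReplicaSet referenced by the variable rs:

mysql-js> rs.setPrimaryInstance('rs-2:3306')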

Forcing the Primary Instance in a ReplicaSet

Unlike InnoDB Cluster, which supports automatic failover in the event of an unexpected failure of the primary, InnoDB ReplicaSet does not have automatic failure detection or a consensus based protocol such as that provided by Group Replication. If the primary is not available, a manual failover is required. An InnoDB ReplicaSet which has lost its primary is effectively read-only, and for any write changes to be possible a new primary must be chosen. In the event that you cannot connect to the primary, and you cannot use ReplicaSet.setPrimaryInstance() to safely perform a switchover to a new primary as described at Planned Changes of the ReplicaSet Primary, use the ReplicaSet.forcePrimaryInstance() operation to perform a forced failover of the primary. This is a last resort operation that must only be used in a disaster type scenario where the current primary is unavailable and cannot be restored in any way.

Warning

A forced failover is a potentially destructive action and must be used with caution.

If a target instance is not given (or is null), the most up-to-date instance is automatically selected and promoted to be the new primary. If a target instance is provided, it is promoted to a primary, while other reachable secondary instances are switched to replicate from the new primary. The target instance must have the most up-to-date GTID_EXECUTED set among reachable instances, otherwise the operation fails.
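
For example, a minimal sketch of a forced failover, first letting AdminAPI select the most up-to-date reachable instance, and then explicitly promoting rs-2; only run such commands when the current primary is truly unrecoverable:

mysql-js> rs.forcePrimaryInstance()
mysql-js> rs.forcePrimaryInstance('rs-2:3306')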

A failover is different from a planned primary change because it promotes a secondary instance without synchronizing with or updating the old primary. That has the following major consequences:

  • Any transactions that had not yet been applied by a secondary at the time the old primary failed are lost.

  • If the old primary is actually still running and processing transactions, there is a split-brain and the datasets of the old and new primaries diverge.

If the last known primary is still reachable, the ReplicaSet.forcePrimaryInstance() operation fails, to reduce the risk of split-brain situations. However, it is the administrator's responsibility to ensure that the old primary is not reachable by the other instances, to prevent or minimize such scenarios.

After a forced failover, the old primary is considered invalid by the new primary and can no longer be part of the replica set. If at a later date you find a way to recover the instance, it must be removed from the ReplicaSet and re-added as a new instance. If there were any secondary instances that could not be switched to the new primary during the failover, they are also considered invalid.

Data loss is possible after a failover, because the old primary might have had transactions that were not yet replicated to the secondary being promoted. Moreover, if the instance that was presumed to have failed is still able to process transactions, for example because the network where it is located is still functioning but unreachable from MySQL Shell, it continues diverging from the promoted instances. Recovering once transaction sets on instances have diverged requires manual intervention, and might not be possible in some situations, even if the failed instances can be recovered. In many cases, the fastest and simplest way to recover from a disaster that required a forced failover is by discarding such diverged transactions and re-provisioning a new instance from the newly promoted primary.

InnoDB ReplicaSet Locking

From version 8.0.20, AdminAPI uses a locking mechanism to prevent different operations from performing changes on an InnoDB ReplicaSet simultaneously. Previously, different instances of MySQL Shell could connect to an InnoDB ReplicaSet at the same time and execute AdminAPI operations simultaneously. This could lead to inconsistent instance states and errors, for example if ReplicaSet.addInstance() and ReplicaSet.setPrimaryInstance() were executed in parallel.

The InnoDB ReplicaSet operations have the following locking:

  • dba.upgradeMetadata() and dba.createReplicaSet() are globally exclusive operations. This means that if MySQL Shell executes these operations on an InnoDB ReplicaSet, no other operations can be executed against the InnoDB ReplicaSet or any of its instances.

  • ReplicaSet.forcePrimaryInstance() and ReplicaSet.setPrimaryInstance() are operations that change the primary. This means that if MySQL Shell executes these operations against an InnoDB ReplicaSet, no other operations that change the primary, and no instance change operations, can be executed until the first operation completes.

  • ReplicaSet.addInstance(), ReplicaSet.rejoinInstance(), and ReplicaSet.removeInstance() are operations that change an instance. This means that if MySQL Shell executes one of these operations on an instance, the instance is locked for any further instance change operations. However, this lock is only at the instance level, and multiple instances in an InnoDB ReplicaSet can each execute one of these operations simultaneously. In other words, at most one instance change operation can be executed at a time per instance in the InnoDB ReplicaSet.

  • dba.getReplicaSet() and ReplicaSet.status() are InnoDB ReplicaSet read operations and do not require any locking.

In practice, if you try to execute an InnoDB ReplicaSet related operation while another operation that cannot be executed concurrently is still running, you get an error indicating that a lock on a needed resource failed to be acquired. In this case, you should wait for the running operation which holds the lock to complete, and only then try to execute the next operation. For example:

mysql-js> rs.addInstance("admin@rs2:3306");

ERROR: The operation cannot be executed because it failed to acquire the lock on
instance 'rs1:3306'. Another operation requiring exclusive access to the
instance is still in progress, please wait for it to finish and try again.

ReplicaSet.addInstance: Failed to acquire lock on instance 'rs1:3306' (MYSQLSH
51400)

In this example, the ReplicaSet.addInstance() operation failed because the lock on the primary instance (rs1:3306) could not be acquired, for example because a ReplicaSet.setPrimaryInstance() operation (or other similar operation) was still running.

Tagging ReplicaSets

Tagging is supported by ReplicaSets, and their instances. For the purpose of tagging, ReplicaSets support the setOption(), setInstanceOption() and options() operations. These operations function in generally the same way as their Cluster equivalents. For more information, see Section 21.2.9, “Tagging the Metadata”. This section documents the differences in working with tags for ReplicaSets.

Important

Tags are the only option that can be configured for ReplicaSets and their instances. The other options documented at Setting Options for InnoDB Cluster are not supported for ReplicaSets.

The ReplicaSet.options() operation shows information about the tags assigned to individual ReplicaSet instances as well as to the ReplicaSet itself.

The option argument of ReplicaSet.setOption() and ReplicaSet.setInstanceOption() only supports options with the tag namespace; an error is thrown otherwise.

The ReplicaSet.setInstanceOption(instance, option, value) and ReplicaSet.setOption(option, value) operations behave in the same way as the Cluster equivalent operations.

Note

Setting or deleting a tag on a ReplicaSet does not override the value of the same tag stored in the metadata for individual instances of that ReplicaSet.

There are no differences in hiding instances as described at Removing Instances from Routing. For example, to hide the ReplicaSet instance rs-1, issue:

mysql-js> myReplicaSet.setInstanceOption("icadmin@rs-1:3306", "tag:_hidden", true);

A MySQL Router that has been bootstrapped against the ReplicaSet detects the change and removes the rs-1 instance from the routing destinations.
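
Similarly, you can attach an ad hoc tag at the ReplicaSet level, for example to record ownership information in the metadata. A minimal sketch, using an illustrative tag name and value:

mysql-js> myReplicaSet.setOption("tag:owner", "replication-team")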

21.4 MySQL Router

This section describes how to integrate MySQL Router with InnoDB Cluster and InnoDB ReplicaSet. For background information on MySQL Router, see MySQL Router 8.0.

21.4.1 Bootstrapping MySQL Router

You bootstrap MySQL Router against an InnoDB ReplicaSet or InnoDB Cluster to automatically configure routing. The bootstrap process is a specific way of running MySQL Router, which does not start the usual routing and instead configures the mysqlrouter.conf file based on the metadata. To bootstrap MySQL Router at the command line, pass in the --bootstrap option when you start the mysqlrouter command, and it retrieves the topology information from the metadata and configures routing connections to the server instances. Alternatively, on Windows use the MySQL Installer to bootstrap MySQL Router, see Section 2.3.3.3.2, “MySQL Router Configuration with MySQL Installer”. Once MySQL Router has been bootstrapped, client applications connect to the ports it publishes. MySQL Router automatically redirects client connections to the instances based on the incoming port, for example 6446 is used by default for read-write connections using classic MySQL protocol. In the event of a topology change, for example due to an unexpected failure of an instance, MySQL Router detects the change and adjusts the routing to the remaining instances automatically. This removes the need for client applications to handle failover, or to be aware of the underlying topology. For more information, see Routing for MySQL InnoDB Cluster.

Note

Do not attempt to configure MySQL Router manually to redirect to the server instances. Always use the --bootstrap option as this ensures that MySQL Router takes its configuration from the metadata. See Cluster Metadata and State.

Configuring the MySQL Router User

When MySQL Router connects to an InnoDB Cluster or InnoDB ReplicaSet, it requires a user account which has the correct privileges. From MySQL Router version 8.0.19 this internal user can be specified using the --account option. In previous versions, MySQL Router created internal accounts at each bootstrap of the cluster, which could result in a number of accounts building up over time. From MySQL Shell version 8.0.20, you can use AdminAPI to set up the user account required for MySQL Router. Use the setupRouterAccount(user, [options]) operation to create a MySQL user account or upgrade an existing account so that it can be used by MySQL Router to operate on an InnoDB Cluster or InnoDB ReplicaSet. This is the recommended method of configuring MySQL Router with InnoDB Cluster and InnoDB ReplicaSet.

To add a new MySQL Router account named myRouter1 to the InnoDB Cluster referenced by the variable testCluster, issue:

mysqlsh> testCluster.setupRouterAccount('myRouter1')

In this case, no domain is specified and so the account is created with the wildcard (%) character, which ensures that the created user can connect from any domain. To limit the account to only be able to connect from the example.com domain, issue:

mysqlsh> testCluster.setupRouterAccount('myRouter1@example.com')

The operation prompts for a password, and then sets up the MySQL Router user with the correct privileges. If the InnoDB Cluster or InnoDB ReplicaSet has multiple instances, the created MySQL Router user is propagated to all of the instances.

When you already have a MySQL Router user configured, for example if you were using a version prior to 8.0.20, you can use the setupRouterAccount() operation to reconfigure the existing user. In this case, pass in the update option set to true. For example, to reconfigure the myOldRouter user, issue:

mysqlsh> testCluster.setupRouterAccount('myOldRouter', {'update':1})

Deploying MySQL Router

The recommended deployment of MySQL Router is on the same host as the application. When using a sandbox deployment, everything is running on a single host, therefore you deploy MySQL Router to the same host. When using a production deployment, we recommend deploying one MySQL Router instance to each machine used to host one of your client applications. It is also possible to deploy MySQL Router to a common machine through which your application instances connect. For more information, see Installing MySQL Router.

To bootstrap MySQL Router based on an InnoDB Cluster or InnoDB ReplicaSet, you need the URI-like connection string to an online instance. Run the mysqlrouter command and provide the --bootstrap=instance option, where instance is the URI-like connection string to an online instance. MySQL Router connects to the instance and uses the included metadata cache plugin to retrieve the metadata, consisting of a list of server instance addresses and their role. For example:

shell> mysqlrouter --bootstrap icadmin@ic-1:3306 --user=mysqlrouter 

You are prompted for the instance password and encryption key for MySQL Router to use. This encryption key is used to encrypt the instance password used by MySQL Router to connect to the cluster. The ports you can use for client connections are also displayed. For additional bootstrap related options, see Bootstrapping Options.

Tip

At this point MySQL Router has not been started, so it is not yet routing connections; bootstrapping is a separate process from running MySQL Router.

The MySQL Router bootstrap process creates a mysqlrouter.conf file, with the settings based on the metadata retrieved from the address passed to the --bootstrap option, in the above example icadmin@ic-1:3306. Based on the metadata retrieved, MySQL Router automatically configures the mysqlrouter.conf file, including a metadata_cache section. If you are using MySQL Router 8.0.14 and later, the --bootstrap option automatically configures MySQL Router to track and store active MySQL metadata server addresses at the path configured by dynamic_state. This ensures that when MySQL Router is restarted it knows which MySQL metadata server addresses are current. For more information, see the dynamic_state documentation.
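
For illustration only, an abbreviated sketch of how a bootstrapped configuration for MySQL Router 8.0.14 or later might look; the exact option names, paths, and values depend on your MySQL Router version and bootstrap options:

[DEFAULT]
# Path where MySQL Router persists the current metadata server addresses
dynamic_state=/path/to/router/data/state.json

[metadata_cache:prodCluster]
router_id=1
user=mysql_router1_jy95yozko3k2
metadata_cluster=prodCluster
ttl=300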

In earlier MySQL Router versions, metadata server information was defined during MySQL Router's initial bootstrap operation and stored statically as bootstrap_server_addresses in the configuration file, which contained the addresses for all server instances in the cluster. For example:

[metadata_cache:prodCluster]
router_id=1
bootstrap_server_addresses=mysql://icadmin@ic-1:3306,mysql://icadmin@ic-2:3306,mysql://icadmin@ic-3:3306
user=mysql_router1_jy95yozko3k2
metadata_cluster=prodCluster
ttl=300

Tip

If using MySQL Router 8.0.13 or earlier, when you change the topology of a cluster by adding another server instance after you have bootstrapped MySQL Router, you need to update bootstrap_server_addresses based on the updated metadata. Either restart MySQL Router using the --bootstrap option, or manually edit the bootstrap_server_addresses section of the mysqlrouter.conf file and restart MySQL Router.

The generated MySQL Router configuration creates TCP ports which you use to connect to the cluster. By default, ports for communicating with the cluster using both classic MySQL protocol and X Protocol are created. To use X Protocol the server instances must have X Plugin installed and configured, which is the default for MySQL 8.0 and later. The default available TCP ports are:

  • 6446 - for classic MySQL protocol read-write sessions, which MySQL Router redirects incoming connections to primary server instances.

  • 6447 - for classic MySQL protocol read-only sessions, which MySQL Router redirects incoming connections to one of the secondary server instances.

  • 64460 - for X Protocol read-write sessions, which MySQL Router redirects incoming connections to primary server instances.

  • 64470 - for X Protocol read-only sessions, which MySQL Router redirects incoming connections to one of the secondary server instances.

Depending on your MySQL Router configuration, the port numbers might differ from the above, for example if you use the --conf-base-port option or the group_replication_single_primary_mode variable. The exact ports are listed when you start MySQL Router.

The way incoming connections are redirected depends on the underlying topology being used. For example, when using a single-primary cluster, by default MySQL Router publishes an X Protocol port and a classic MySQL protocol port, which clients connect to for read-write sessions and which are redirected to the cluster's single primary. With a multi-primary cluster, read-write sessions are redirected to one of the primary instances in a round-robin fashion. For example, this means that the first connection to port 6446 would be redirected to the ic-1 instance, the second connection to port 6446 would be redirected to the ic-2 instance, and so on. For incoming read-only connections MySQL Router redirects connections to one of the secondary instances, also in a round-robin fashion. To modify this behavior see the routing_strategy option.

Once bootstrapped and configured, start MySQL Router. If you used a system wide install with the --bootstrap option then issue:

shell> mysqlrouter &

If you installed MySQL Router to a directory using the --directory option, use the start.sh script found in the directory you installed to. Alternatively set up a service to start MySQL Router automatically when the system boots, see Starting MySQL Router. You can now connect a MySQL client, such as MySQL Shell to one of the incoming MySQL Router ports as described above and see how the client gets transparently connected to one of the server instances.

shell> mysqlsh --uri root@localhost:6446

To verify which instance you are actually connected to, issue an SQL query against the port system variable.

mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> select @@port;
+--------+
| @@port |
+--------+
|   3310 |
+--------+

Using ReplicaSets with MySQL Router

You can use MySQL Router 8.0.19 and later to bootstrap against an InnoDB ReplicaSet, see Section 21.4, “MySQL Router”. The only difference in the generated MySQL Router configuration file is the addition of the cluster_type option. When MySQL Router is bootstrapped against a ReplicaSet, the generated configuration file includes:

cluster_type=rs

When you use MySQL Router with InnoDB ReplicaSet, be aware that:

  • The read-write port of MySQL Router directs client connections to the primary instance of the ReplicaSet

  • The read-only port of MySQL Router directs client connections to a secondary instance of the ReplicaSet, although it could also direct them to the primary

  • MySQL Router obtains information about the ReplicaSet's topology from the primary instance

  • MySQL Router automatically recovers when the primary instance becomes unavailable and a different instance is promoted

You work with the MySQL Router instances which have been bootstrapped against a ReplicaSet in exactly the same way as with InnoDB Cluster. See Working with a Cluster's Routers for information on ReplicaSet.listRouters() and ReplicaSet.removeRouterMetadata().

21.4.2 Using AdminAPI and MySQL Router

This section explains how to use MySQL Router and AdminAPI.

Testing InnoDB Cluster High Availability

To test if InnoDB Cluster high availability works, simulate an unexpected halt by killing an instance. The cluster detects the fact that the instance left the cluster and reconfigures itself. Exactly how the cluster reconfigures itself depends on whether you are using a single-primary or multi-primary cluster, and the role the instance serves within the cluster.

In single-primary mode:

  • If the current primary leaves the cluster, one of the secondary instances is elected as the new primary, with instances prioritized by the lowest server_uuid. MySQL Router redirects read-write connections to the newly elected primary.

  • If a current secondary leaves the cluster, MySQL Router stops redirecting read-only connections to the instance.

For more information see Section 18.1.3.1, “Single-Primary Mode”.

In multi-primary mode:

  • If a current "R/W" instance leaves the cluster, MySQL Router redirects read-write connections to other primaries. If the instance which left was the last primary in the cluster then the cluster is completely gone and you cannot connect to any MySQL Router port.

For more information see Section 18.1.3.2, “Multi-Primary Mode”.

There are various ways to simulate an instance leaving a cluster, for example you can forcibly stop the MySQL server on an instance, or use the AdminAPI dba.killSandboxInstance() if testing a sandbox deployment. In this example assume there is a single-primary sandbox cluster deployment with three server instances and the instance listening at port 3310 is the current primary. Simulate the instance leaving the cluster unexpectedly:

mysql-js> dba.killSandboxInstance(3310)

The cluster detects the change and elects a new primary automatically. Assuming your session is connected to port 6446, the default read-write classic MySQL protocol port, MySQL Router should detect the change to the cluster's topology and redirect your session to the newly elected primary. To verify this, switch to SQL mode in MySQL Shell using the \sql command and select the instance's port variable to check which instance your session has been redirected to. Notice that the first SELECT statement fails because the connection to the original primary was lost. This means the current session has been closed; MySQL Shell automatically reconnects for you, and when you issue the command again the new port is confirmed.

mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> SELECT @@port;
ERROR: 2013 (HY000): Lost connection to MySQL server during query
The global session got disconnected.
Attempting to reconnect to 'root@localhost:6446'...
The global session was successfully reconnected.
mysql-sql> SELECT @@port;
+--------+
| @@port |
+--------+
|   3330 |
+--------+
1 row in set (0.00 sec)

In this example, the instance at port 3330 has been elected as the new primary. This shows that the InnoDB Cluster provided us with automatic failover, that MySQL Router has automatically reconnected us to the new primary instance, and that we have high availability.

Working with a Cluster's Routers

You can bootstrap multiple instances of MySQL Router against a cluster or ReplicaSet. From version 8.0.19, to show a list of all registered MySQL Router instances, issue:

Cluster.listRouters()

The result provides information about each registered MySQL Router instance, such as its name in the metadata, the hostname, ports, and so on. For example, issue:

mysql-js> Cluster.listRouters()
{
    "clusterName": "example",
    "routers": {
        "ic-1:3306": {
            "hostname": "ic-1:3306",
            "lastCheckIn": "2020-01-16 11:43:45",
            "roPort": 6447,
            "roXPort": 64470,
            "rwPort": 6446,
            "rwXPort": 64460,
            "version": "8.0.19"
        }
    }
}

The returned information shows:

  • The name of the MySQL Router instance.

  • Last check-in timestamp, which is generated by a periodic ping from the MySQL Router stored in the metadata

  • Hostname where the MySQL Router instance is running

  • Read-Only and Read-Write ports which the MySQL Router publishes for classic MySQL protocol connections

  • Read-Only and Read-Write ports which the MySQL Router publishes for X Protocol connections

  • Version of this MySQL Router instance. The support for returning version was added in 8.0.19. If this operation is run against an earlier version of MySQL Router, the version field is null.

Additionally, the Cluster.listRouters() operation can show a list of instances that do not support the metadata version supported by MySQL Shell. Use the onlyUpgradeRequired option, for example by issuing Cluster.listRouters({'onlyUpgradeRequired':'true'}). The returned list shows only the MySQL Router instances registered with the Cluster which require an upgrade of their metadata. See Section 21.2.8.2, “Upgrading InnoDB Cluster Metadata”.

MySQL Router instances are not automatically removed from the metadata, so for example as you bootstrap more instances the InnoDB Cluster metadata contains a growing number of references to instances. To remove a registered MySQL Router instance from a cluster's metadata, use the Cluster.removeRouterMetadata(router) operation, added in version 8.0.19. Use the Cluster.listRouters() operation to get the name of the MySQL Router instance you want to remove, and pass it in as router. For example suppose your MySQL Router instances registered with a cluster were:

mysql-js> Cluster.listRouters()
{
    "clusterName": "testCluster",
    "routers": {
        "myRouter1": {
            "hostname": "example1.com",
            "lastCheckIn": null,
            "routerId": "1",
            "roPort": "6447",
            "rwPort": "6446"
            "version": null
        },
        "myRouter2": {
            "hostname": "example2.com",
            "lastCheckIn": "2019-11-27 16:25:00",
            "routerId": "3",
            "roPort": "6447",
            "rwPort": "6446"
            "version": "8.0.19"
        }
    }
}

Based on the fact that the instance named myRouter1 has null for lastCheckIn and version, we decide to remove this old instance from the metadata by issuing:

mysql-js> cluster.removeRouterMetadata('myRouter1')

The MySQL Router instance specified is unregistered from the cluster by removing it from the InnoDB Cluster metadata.

21.5 AdminAPI MySQL Sandboxes

This section explains how to set up a sandbox deployment with AdminAPI. Initially deploying and using local sandbox instances of MySQL is a good way to start your exploration of AdminAPI. You can fully test out the functionality locally, prior to deployment on your production servers. AdminAPI has built-in functionality for creating sandbox instances that are correctly configured to work with InnoDB Cluster and InnoDB ReplicaSet in a locally deployed scenario.

Important

Sandbox instances are only suitable for deploying and running on your local machine for testing purposes. In a production environment the MySQL Server instances are deployed to various host machines on the network. See Section 21.2.2, “Deploying a Production InnoDB Cluster” for more information.

Unlike a production deployment, where you work with instances and specify them by a connection string, sandbox instances run locally on the same machine on which you are running MySQL Shell. Therefore, to specify a sandbox instance you supply the port number on which the MySQL sandbox instance is listening.

Deploying Sandbox Instances

The MySQL AdminAPI adds the dba global variable to MySQL Shell, which provides functions for administration of sandbox instances. In this example setup, you create three sandbox instances using dba.deploySandboxInstance(port_number). To deploy a new sandbox instance which is bound to port 3310, issue:

mysql-js> dba.deploySandboxInstance(3310)

The argument passed to deploySandboxInstance() is the TCP port number where the MySQL Server instance listens for connections. By default the sandbox is created in a directory named $HOME/mysql-sandboxes/port on Unix systems. For Microsoft Windows systems the directory is %userprofile%\MySQL\mysql-sandboxes\port.

You are prompted for the root user's password for the instance.

Important

Each sandbox instance uses the root user and password, which must be the same on all sandbox instances that work together. This is not recommended in production.

To deploy another sandbox server instance, repeat the steps followed for the sandbox instance at port 3310, choosing different port numbers for each instance. For each additional sandbox instance issue:

mysql-js> dba.deploySandboxInstance(port_number)

To follow this tutorial, use port numbers 3310, 3320 and 3330 for the three sandbox server instances. Issue:

mysql-js> dba.deploySandboxInstance(3320)
mysql-js> dba.deploySandboxInstance(3330)

To change the directory which sandboxes are stored in, for example to run multiple sandboxes on one host for testing purposes, use the MySQL Shell sandboxDir option. For example to use a sandbox in the /home/user/sandbox1 directory, issue:

mysql-js> shell.options.sandboxDir='/home/user/sandbox1'

All subsequent sandbox related operations are then executed against the instances found at /home/user/sandbox1.

When you deploy sandboxes, MySQL Shell searches for the mysqld binary which it then uses to create the sandbox instance. You can configure where MySQL Shell searches for the mysqld binary by configuring the PATH environment variable. This can be useful to test a new version of MySQL locally before deploying it to production. For example, to use a mysqld binary at the path /home/user/mysql-latest/bin/mysqld, add its containing directory to PATH:

PATH=/home/user/mysql-latest/bin:$PATH

Then run MySQL Shell from the terminal where the PATH environment variable is set. Any sandboxes you deploy then use the mysqld binary found at the configured path.

Managing Sandbox Instances

Once a sandbox instance is running, it is possible to change its status at any time using the following:

  • To stop a sandbox instance use dba.stopSandboxInstance(instance). This stops the instance gracefully, unlike dba.killSandboxInstance(instance).

  • To start a sandbox instance use dba.startSandboxInstance(instance).

  • To kill a sandbox instance use dba.killSandboxInstance(instance). This stops the instance without shutting it down gracefully, and is useful for simulating unexpected halts.

  • To delete a sandbox instance use dba.deleteSandboxInstance(instance). This completely removes the sandbox instance from your file system.
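
For example, assuming the sandbox instance deployed earlier on port 3310, a minimal sketch of these operations:

mysql-js> dba.stopSandboxInstance(3310)   // stop the instance gracefully
mysql-js> dba.startSandboxInstance(3310)  // start the stopped sandbox again
mysql-js> dba.killSandboxInstance(3310)   // simulate an unexpected halt
mysql-js> dba.deleteSandboxInstance(3310) // remove the sandbox files from disk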