Wednesday, February 26, 2020

Connector/J DNS SRV with Consul Demo

MySQL Connector/J 8.0.19 includes support for DNS SRV records.

The following URLs are good starting points for understanding what this feature is about.


https://www.slideshare.net/Grypyrg/mysql-connectors-8019-dns-srv

https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-dns-srv.html

https://learn.hashicorp.com/consul/security-networking/forwarding


MySQL Router in a MySQL InnoDB Cluster setup offers transparent access to the backend database servers.  However, the application has to know the IP address of the Router in order to access the InnoDB Cluster.

In general, MySQL Router can be installed together with the application, so that the application reaches the Router via 'localhost' / 127.0.0.1.

If MySQL Router is installed on a separate machine or VM, an external service is needed to discover the Router IP address and to health-check the Router(s).

To reach the Router externally, instead of the 'localhost' access of a same-box deployment with the application, there are a couple of options:
1.  Load Balancer
2.  Virtual IP failover
3.  IPs of the MySQL Routers hard-coded in the application
4.  New!! - DNS SRV support in the MySQL Connectors
(other options such as VM respawn, etc.)

This article shares a demo setup of the DNS SRV support in MySQL Connector/J 8.0.19 with Consul.

A production-grade Consul deployment is out of scope for this article; the aim is simply a testing setup that exercises the DNS SRV support in Connector/J 8.0.19.
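Conceptually, the application then connects by service name rather than by Router IP. The sketch below (not the attached sample source) shows the shape of a DNS SRV connection URL; the service name router.service.consul and the schema name test follow this demo, and the credentials in the comment are placeholders:

```java
// Minimal sketch: pointing Connector/J 8.0.19 at a DNS SRV service name
// instead of a fixed Router IP. "router.service.consul" matches this demo;
// the schema name and credentials are placeholders.
public class SrvUrlSketch {

    // The mysql+srv scheme makes the driver resolve SRV records
    // (host/port pairs) for the service name at connect time.
    static String buildUrl(String serviceName, String schema) {
        return "jdbc:mysql+srv://" + serviceName + "/" + schema;
    }

    public static void main(String[] args) {
        String url = buildUrl("router.service.consul", "test");
        System.out.println(url);
        // With mysql-connector-java-8.0.19.jar on the classpath:
        // java.sql.Connection c = java.sql.DriverManager.getConnection(url, "demo", "demo");
    }
}
```

With this URL the application never needs to know which VM the Router runs on; adding or removing a Router only changes what the SRV lookup returns.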

Environment
1.  MySQL InnoDB Cluster
     A running 3-node InnoDB Cluster.

For example : 
MySQL [primary ssl] JS> dba.getCluster().status()
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "primary:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "primary:3310": {
                "address": "primary:3310",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            },
            "primary:3320": {
                "address": "primary:3320",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            },
            "primary:3330": {
                "address": "primary:3330",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "primary:3310"
}


2.  MySQL Router running on 2 VMs - port = 6446 as RW routing

   a. IP 192.168.56.41:6446
   b. IP 192.168.56.42:6446

[mysql@virtual-42 lab]$ mysql -udemo -pdemo -h192.168.56.41 -P6446 -e "select @@hostname, @@port;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------------------+--------+
| @@hostname           | @@port |
+----------------------+--------+
| virtual-41.localhost |   3310 |
+----------------------+--------+
[mysql@virtual-42 lab]$ mysql -udemo -pdemo -h192.168.56.42 -P6446 -e "select @@hostname, @@port;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------------------+--------+
| @@hostname           | @@port |
+----------------------+--------+
| virtual-41.localhost |   3310 |
+----------------------+--------+
 


3. Consul, with the service router.service.consul registered to point to the two Router VMs.

4. Java Application connecting to Router Service




1. Consul Download and Setup
Refer to https://www.consul.io/docs/install/index.html
    a. Download the binary for the platform
    b. Assuming a Linux environment for this demo: copy the "consul" binary to /sbin (as root)

2. Create the service JSON:    /etc/consul.d/router.json

# cat /etc/consul.d/router.json
{
 "services" :  [
  {
    "id": "router1",
    "name": "router",
    "tags": [ "rw" ],
    "address": "192.168.56.41",
    "port": 6446,
    "checks": [
      {
        "args": ["/demo/consul/check1-6446.sh"],
        "interval": "10s"
      }
    ]
  },
  {
    "id": "router2",
    "name": "router",
    "tags": [ "rw" ],
    "address": "192.168.56.42",
    "port": 6446,
    "checks": [
      {
        "args": ["/demo/consul/check2-6446.sh"],
        "interval": "10s"
      }
    ]
  }
  ]
}







3. Health Check Scripts for the Services (/demo/consul/check1-6446.sh and /demo/consul/check2-6446.sh)

Create the health check scripts as follows (assuming the mysql client program is in the PATH):

<check1-6446.sh>

#!/bin/bash
# Exit 2 (critical for Consul) if the Router RW port is not reachable
mysql -ugradmin -pgrpass -hprimary -P6446 -e "select 1;" > /dev/null 2>&1
if [ $? != 0 ]
then
        exit 2
fi
exit 0


<check2-6446.sh>

#!/bin/bash
# Exit 2 (critical for Consul) if the Router RW port is not reachable
mysql -ugradmin -pgrpass -hsecondary -P6446 -e "select 1;" > /dev/null 2>&1
if [ $? != 0 ]
then
        exit 2
fi
exit 0

4. Start the consul agent as root (so that it listens on port 53 instead of 8600)
Consul's default DNS port is 8600, but standard DNS resolution uses port 53, and the Connector relies on the system resolver, which queries DNS on port 53.

Options
a.  Run consul on port 53
b.  Forward DNS queries for the consul domain to the consul service (please check other resources, e.g. https://learn.hashicorp.com/consul/security-networking/forwarding )

Note : This article is a demo only.  Consul is used here, but there are other choices.

# consul agent -dev -enable-script-checks -dns-port=53 -config-dir=/etc/consul.d  &

Once the agent is started, you can verify the registered services:

# curl http://127.0.0.1:8500/v1/health/checks/router
[
    {
        "Node": "virtual-42.localhost",
        "CheckID": "service:router1",
        "Name": "Service 'router' check",
        "Status": "passing",
        "Notes": "",
        "Output": "",
        "ServiceID": "router1",
        "ServiceName": "router",
        "ServiceTags": [
            "rw"
        ],
        "Type": "script",
        "Definition": {},
        "CreateIndex": 12,
        "ModifyIndex": 14
    },
    {
        "Node": "virtual-42.localhost",
        "CheckID": "service:router2",
        "Name": "Service 'router' check",
        "Status": "passing",
        "Notes": "",
        "Output": "",
        "ServiceID": "router2",
        "ServiceName": "router",
        "ServiceTags": [
            "rw"
        ],
        "Type": "script",
        "Definition": {},
        "CreateIndex": 13,
        "ModifyIndex": 15
    }
]
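Once the agent answers DNS queries, each SRV record carries a target, port, priority and weight, and clients order candidates by those fields (RFC 2782): lowest priority first, with weight biasing the choice among records of equal priority. A minimal, deterministic sketch of that ordering follows; a real resolver does a weighted-random pick among equal priorities, and the record values below are hypothetical (not Consul's defaults):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of SRV candidate ordering (RFC 2782 semantics): lowest priority
// wins; weight biases selection among equal priorities. For determinism,
// equal priorities are sorted by descending weight here instead of the
// weighted-random selection a real resolver performs.
public class SrvOrderSketch {

    record Srv(String target, int port, int priority, int weight) {}

    static List<Srv> order(List<Srv> records) {
        List<Srv> sorted = new ArrayList<>(records);
        sorted.sort(Comparator.comparingInt(Srv::priority)
                .thenComparing(Comparator.comparingInt(Srv::weight).reversed()));
        return sorted;
    }

    public static void main(String[] args) {
        // Hypothetical records for the two Routers registered above.
        List<Srv> records = List.of(
                new Srv("192.168.56.42", 6446, 1, 10),
                new Srv("192.168.56.41", 6446, 1, 50));
        for (Srv s : order(records)) {
            System.out.println(s.target() + ":" + s.port());
        }
    }
}
```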


5. Change the /etc/resolv.conf

search localhost
nameserver 127.0.0.1
domain consul

6. Test the Java Application
 a. Make sure the Connector/J 8.0.19 driver is downloaded.
 b. Sample Source Attached -
 c. To run :
java -cp ./mysql-connector-java-8.0.19.jar:. LBDemo01

The output looks like this:
 java -cp ./mysql-connector-java-8.0.19.jar:. LBDemo01
Spawned threads : 3



After killing/stopping one of the Routers, the application continues to run.
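That behavior can be sketched as a walk over the SRV candidates, taking the first endpoint that accepts a connection. In the sketch below the Predicate is a stand-in for a real JDBC connection attempt, so the failover is simulated rather than driven by a live Router:

```java
import java.util.List;
import java.util.function.Predicate;

// Simulated sketch of client-side failover across SRV candidates.
// 'connect' stands in for a real JDBC connection attempt; in main() the
// first Router is pretended dead, as when it is killed in the demo.
public class FailoverSketch {

    static String connectToFirstAlive(List<String> endpoints, Predicate<String> connect) {
        for (String ep : endpoints) {
            if (connect.test(ep)) {
                return ep;                       // connected: keep using this endpoint
            }
            System.out.println(ep + " unreachable, trying next candidate");
        }
        throw new IllegalStateException("no Router reachable");
    }

    public static void main(String[] args) {
        List<String> endpoints = List.of("192.168.56.41:6446", "192.168.56.42:6446");
        // Pretend 192.168.56.41 has been stopped.
        String used = connectToFirstAlive(endpoints, ep -> !ep.startsWith("192.168.56.41"));
        System.out.println("connected via " + used);
    }
}
```

In the real demo this walk happens inside the driver: the failed endpoint is dropped from the healthy SRV set by Consul's health check, so new lookups only return the surviving Router.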









Monday, February 17, 2020

MySQL InnoDB Cluster Replication via Router to Slave Single Node using SSL

This article shares how to set up SSL replication between a MySQL InnoDB Cluster and a single slave node via MySQL Router.

It is for demo purposes ONLY.

The video below shows replication continuing across a primary-node failover in the MySQL InnoDB Cluster: the replication channel switches to the new primary node via MySQL Router.

https://youtu.be/R0jOMfZlF8c

The general steps are as follows (demo only):
Virtual Machine 1
1. A working MySQL InnoDB Cluster

Virtual Machine 2
2. A working MySQL node as slave
3. A working MySQL Router set up on the slave node, pointing to the MySQL InnoDB Cluster on VM1.


The key part is to ensure the key files are the same on each node of the MySQL InnoDB Cluster.

For the InnoDB Cluster setup on VM1, the demo uses:
   Node1 DataDir : /home/mysql/data/3310
   Node2 DataDir : /home/mysql/data/3320
   Node3 DataDir : /home/mysql/data/3330

Copy all the *.pem files from the datadir of Node1 to the Node2 and Node3 datadirs.  Restarting all the servers and bringing the InnoDB Cluster back ONLINE then ensures the keys are the same for ALL nodes.

For the SINGLE node slave in DR (VM2), the setup is as follows:
  Set up MySQL Router to connect to the MySQL InnoDB Cluster on VM1, so that port 6446, the primary-node connection, always goes to the RW node.

  Set up the MySQL Server as a standalone server on DR (empty, or restored from a backup of a server in the InnoDB Cluster).

   Copy the public key file (public_key.pem) from the VM1 servers to VM2 (e.g. to /opt/download/lab/keys/public_key.pem).

  The MySQL replication channel can then be set up as follows.  Because the Router sits on VM2 and listens on port 6446, replication connects to the ROUTER, with the master public key defined by the "public_key.pem" copied from VM1.

mysql -uroot -h127.0.0.1 -P<the port of the slave>  << EOL1
change master to
master_host='127.0.0.1',
master_user='repl',
master_password='repl',
master_port=6446,
master_public_key_path='/opt/download/lab/keys/public_key.pem',
get_master_public_key=1,
master_auto_position=1
for channel 'channel1';

start slave for channel 'channel1';
EOL1

Once the server is set up properly, the replication channel is started.

Thanks for reading.






Wednesday, January 29, 2020

Upgrading MySQL InnoDB Cluster from MySQL 5.7.25 to MySQL 8.0.19


This is a tutorial and serves as a sample ONLY; every environment is different.  It is a test trial of upgrading a MySQL InnoDB Cluster from version 5.7.25 to MySQL 8.0.19, where the MySQL InnoDB Cluster metadata changes from V1 to V2 in 8.0.19.
1.      Assuming the following SETUP
           i.              MySQL Server 5.7.25
          ii.              MySQL Shell : 8.0.15
        iii.              MySQL Router : 8.0.15
2.      3 nodes are running on the same machine for this tutorial
           i.              Port : 3310, 3320 and 3330
          ii.              Hostname : primary   (or node1)
3.      MySQL InnoDB Cluster Status as follows
MySQL [primary ssl] JS> x.status({extended:2})
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "groupName": "8561210d-4278-11ea-907d-0800277b31d3",
        "name": "default",
        "primary": "primary:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "node1:3320": {
                "address": "node1:3320",
                "memberId": "9cdacfc4-4277-11ea-8435-0800277b31d3",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "node1:3330": {
                "address": "node1:3330",
                "memberId": "a18eb74f-4277-11ea-8595-0800277b31d3",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "primary:3310": {
                "address": "primary:3310",
                "memberId": "98f0e702-4277-11ea-81f1-0800277b31d3",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "transactions": {
                    "checkedCount": 0,
                    "committedAllMembers": "8561210d-4278-11ea-907d-0800277b31d3:1-16,
98f0e702-4277-11ea-81f1-0800277b31d3:1-10",
                    "conflictsDetectedCount": 0,
                    "inQueueCount": 0,
                    "lastConflictFree": ""
                }
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "primary:3310"
}


Configuration File (MySQL) – my1.cnf / my2.cnf / my3.cnf
The initial configuration of each file is shown first; the group_replication settings further below were appended during dba.configureLocalInstance(…) after the MySQL InnoDB Cluster creation.

my1.cnf
[mysqld]
datadir=/home/mysql/data/3310
basedir=/usr/local/mysql5725
log-error=/home/mysql/data/3310/my.error
port=3310
socket=/home/mysql/data/3310/my.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=101

# enable gtid
gtid-mode=on
enforce-gtid-consistency = ON
log-slave-updates = ON

# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64

report-host=primary
report_port = 3310


my2.cnf
[mysqld]
datadir=/home/mysql/data/3320
basedir=/usr/local/mysql5725
log-error=/home/mysql/data/3320/my.error
port=3320
socket=/home/mysql/data/3320/my.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=102

# enable gtid
gtid-mode=on
enforce-gtid-consistency = ON
log-slave-updates = ON

# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64

report-host=primary
report_port = 3320

my3.cnf
[mysqld]
datadir=/home/mysql/data/3330
basedir=/usr/local/mysql5725
log-error=/home/mysql/data/3330/my.error
port=3330
socket=/home/mysql/data/3330/my.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=103

# enable gtid
gtid-mode=on
enforce-gtid-consistency = ON
log-slave-updates = ON

# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64

report-host=primary
report_port = 3330

Appended after dba.configureLocalInstance( … ):

my1.cnf
auto_increment_increment = 1
auto_increment_offset = 2
group_replication_allow_local_disjoint_gtids_join = OFF
group_replication_allow_local_lower_version_join = OFF
group_replication_auto_increment_increment = 7
group_replication_bootstrap_group = OFF
group_replication_components_stop_timeout = 31536000
group_replication_compression_threshold = 1000000
group_replication_enforce_update_everywhere_checks = OFF
group_replication_exit_state_action = READ_ONLY
group_replication_flow_control_applier_threshold = 25000
group_replication_flow_control_certifier_threshold = 25000
group_replication_flow_control_mode = QUOTA
group_replication_force_members =
group_replication_group_name = 8561210d-4278-11ea-907d-0800277b31d3
group_replication_group_seeds = node1:13320,node1:13330
group_replication_gtid_assignment_block_size = 1000000
group_replication_ip_whitelist = 192.168.56.0/24
group_replication_local_address = node1:13310
group_replication_member_weight = 80
group_replication_poll_spin_loops = 0
group_replication_recovery_complete_at = TRANSACTIONS_APPLIED
group_replication_recovery_reconnect_interval = 60
group_replication_recovery_retry_count = 10
group_replication_recovery_ssl_ca =
group_replication_recovery_ssl_capath =
group_replication_recovery_ssl_cert =
group_replication_recovery_ssl_cipher =
group_replication_recovery_ssl_crl =
group_replication_recovery_ssl_crlpath =
group_replication_recovery_ssl_key =
group_replication_recovery_ssl_verify_server_cert = OFF
group_replication_recovery_use_ssl = ON
group_replication_single_primary_mode = ON
group_replication_ssl_mode = REQUIRED
group_replication_start_on_boot = ON
group_replication_transaction_size_limit = 0
group_replication_unreachable_majority_timeout = 0

my2.cnf
auto_increment_increment = 1
auto_increment_offset = 2
group_replication_allow_local_disjoint_gtids_join = OFF
group_replication_allow_local_lower_version_join = OFF
group_replication_auto_increment_increment = 7
group_replication_bootstrap_group = OFF
group_replication_components_stop_timeout = 31536000
group_replication_compression_threshold = 1000000
group_replication_enforce_update_everywhere_checks = OFF
group_replication_exit_state_action = READ_ONLY
group_replication_flow_control_applier_threshold = 25000
group_replication_flow_control_certifier_threshold = 25000
group_replication_flow_control_mode = QUOTA
group_replication_force_members =
group_replication_group_name = 8561210d-4278-11ea-907d-0800277b31d3
group_replication_group_seeds = node1:13310,node1:13330
group_replication_gtid_assignment_block_size = 1000000
group_replication_ip_whitelist = 192.168.56.0/24
group_replication_local_address = node1:13320
group_replication_member_weight = 70
group_replication_poll_spin_loops = 0
group_replication_recovery_complete_at = TRANSACTIONS_APPLIED
group_replication_recovery_reconnect_interval = 60
group_replication_recovery_retry_count = 10
group_replication_recovery_ssl_ca =
group_replication_recovery_ssl_capath =
group_replication_recovery_ssl_cert =
group_replication_recovery_ssl_cipher =
group_replication_recovery_ssl_crl =
group_replication_recovery_ssl_crlpath =
group_replication_recovery_ssl_key =
group_replication_recovery_ssl_verify_server_cert = OFF
group_replication_recovery_use_ssl = ON
group_replication_single_primary_mode = ON
group_replication_ssl_mode = REQUIRED
group_replication_start_on_boot = ON
group_replication_transaction_size_limit = 0
group_replication_unreachable_majority_timeout = 0

my3.cnf
auto_increment_increment = 1
auto_increment_offset = 2
group_replication_allow_local_disjoint_gtids_join = OFF
group_replication_allow_local_lower_version_join = OFF
group_replication_auto_increment_increment = 7
group_replication_bootstrap_group = OFF
group_replication_components_stop_timeout = 31536000
group_replication_compression_threshold = 1000000
group_replication_enforce_update_everywhere_checks = OFF
group_replication_exit_state_action = READ_ONLY
group_replication_flow_control_applier_threshold = 25000
group_replication_flow_control_certifier_threshold = 25000
group_replication_flow_control_mode = QUOTA
group_replication_force_members =
group_replication_group_name = 575f951a-427c-11ea-83d7-0800277b31d3
group_replication_group_seeds = node1:13310,node1:13320
group_replication_gtid_assignment_block_size = 1000000
group_replication_ip_whitelist = 192.168.56.0/24
group_replication_local_address = node1:13330
group_replication_member_weight = 60
group_replication_poll_spin_loops = 0
group_replication_recovery_complete_at = TRANSACTIONS_APPLIED
group_replication_recovery_reconnect_interval = 60
group_replication_recovery_retry_count = 10
group_replication_recovery_ssl_ca =
group_replication_recovery_ssl_capath =
group_replication_recovery_ssl_cert =
group_replication_recovery_ssl_cipher =
group_replication_recovery_ssl_crl =
group_replication_recovery_ssl_crlpath =
group_replication_recovery_ssl_key =
group_replication_recovery_ssl_verify_server_cert = OFF
group_replication_recovery_use_ssl = ON
group_replication_single_primary_mode = ON
group_replication_ssl_mode = REQUIRED
group_replication_start_on_boot = ON
group_replication_transaction_size_limit = 0
group_replication_unreachable_majority_timeout = 0


MySQL Router (8.0.15)
mysql> select * from mysql_innodb_cluster_metadata.routers;
+-----------+-------------+---------+------------+
| router_id | router_name | host_id | attributes |
+-----------+-------------+---------+------------+
|         1 |             |       3 | NULL       |
+-----------+-------------+---------+------------+
1 row in set (0.00 sec)

mysql> select * from mysql_innodb_cluster_metadata.hosts;
+---------+----------------------+------------+-------------------+----------+------------------------------------+--------------------+
| host_id | host_name            | ip_address | public_ip_address | location | attributes                         | admin_user_account |
+---------+----------------------+------------+-------------------+----------+------------------------------------+--------------------+
|       1 | primary              |            | NULL              |          | NULL                               | NULL               |
|       2 | node1                |            | NULL              |          | NULL                               | NULL               |
|       3 | virtual-41.localhost | NULL       | NULL              |          | {"registeredFrom": "mysql-router"} | NULL               |
+---------+----------------------+------------+-------------------+----------+------------------------------------+--------------------+
3 rows in set (0.00 sec)

Upgrade Steps
1.      With the running MySQL InnoDB Cluster 5.7.25 (MySQL Shell 8.0.15 and MySQL Router 8.0.15):
           i.              Upgrade MySQL Router first to MySQL 8.0.19
          ii.              Upgrade MySQL Server to MySQL 8.0.19 (still using MySQL Shell 8.0.15)
        iii.              Run MySQL Server 8.0.19 on ALL servers as the MySQL InnoDB Cluster (still using MySQL Shell 8.0.15)
        iv.              Upgrade the metadata to V2 with MySQL Shell 8.0.19


MySQL Router 8.0.19 Upgrade
For simplicity in this tutorial, the upgrade is simply a re-bootstrap with the new MySQL Router 8.0.19 binary.   A new Router configuration folder is created and the Router is started.
After the MySQL Router 8.0.19 upgrade, check the mysql_innodb_cluster_metadata.routers table:
mysql> select * from mysql_innodb_cluster_metadata.routers;
+-----------+-------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| router_id | router_name | host_id | attributes                                                                                                                                                      |
+-----------+-------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
|         1 |             |       3 | NULL                                                                                                                                                            |
|         2 |             |       4 | {"version": "8.0.19", "ROEndpoint": "6447", "RWEndpoint": "6446", "ROXEndpoint": "64470", "RWXEndpoint": "64460", "MetadataUser": "mysql_router2_z7jvrcps73jm"} |
+-----------+-------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

An extra row is created, with an attributes column carrying the 8.0.19 information.
The mysql.user table also has a new user created by the MySQL Router bootstrap.   This tutorial does not cover re-using an existing router account or managing/removing the OLD router account(s).
mysql> select user,host from mysql.user;
+----------------------------------+----------------------------+
| user                             | host                       |
+----------------------------------+----------------------------+
| gradmin                          | %                          |
| mysql_router1_7d0dklmkh3n8       | %                          |
| mysql_router2_z7jvrcps73jm       | %                          |
| mysql_innodb_cluster_r0431831317 | 192.168.56.0/255.255.255.0 |
| mysql_innodb_cluster_r0431832209 | 192.168.56.0/255.255.255.0 |
| mysql_innodb_cluster_r0431833234 | 192.168.56.0/255.255.255.0 |
| mysql.session                    | localhost                  |
| mysql.sys                        | localhost                  |
| root                             | localhost                  |
+----------------------------------+----------------------------+
9 rows in set (0.00 sec)
1          Upgrade MySQL 5.7.25 to MySQL 8.0.19 for Secondary Server
1.1         Shutdown MySQL Secondary Server
[mysql@virtual-41 mysql57]$ mysql -uroot -h127.0.0.1 -P3330
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 20
Server version: 5.7.25-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> shutdown;
Query OK, 0 rows affected (0.00 sec)

mysql>
1.2         Backup the my[1|2|3].cnf
[mysql@virtual-41 config]$ mkdir backup
[mysql@virtual-41 config]$ cp my3.cnf backup
[mysql@virtual-41 config]$ cp my2.cnf backup
[mysql@virtual-41 config]$ cp my1.cnf backup

1.3         Change the my?.cnf for the Secondary Server
1.3.1    COMMENT out all variables with the group_replication% prefix in the configuration file
These variables will be converted into persisted variables

[mysql@virtual-41 config]$ mkdir 8.0
[mysql@virtual-41 config]$ sed   "s/^group_replication/#group_replication/g" my3.cnf > 8.0/my3.cnf
[mysql@virtual-41 config]$ sed   "s/^group_replication/#group_replication/g" my2.cnf > 8.0/my2.cnf
[mysql@virtual-41 config]$ sed   "s/^group_replication/#group_replication/g" my1.cnf > 8.0/my1.cnf

1.4         Change 8.0/my?.cnf so that basedir points to the MySQL 8.0.19 home directory, and set mysqlx-port and mysqlx-socket for MySQL 8.0.  The mysqlx port/socket have default values, and for this single-machine tutorial multiple servers cannot be started on the same port/socket.
1.4.1    For 8.0/my3.cnf as example :
basedir=/usr/local/mysql8019
mysqlx-port=33300
mysqlx-socket=/home/mysql/data/3330/myx.sock

1.5         Convert the group_replication% variables into SET PERSIST statements
[mysql@virtual-41 config]$ awk '/^group_replication/   { if ($NF != "=") print "set persist " $0 ";"}' my3.cnf > 8.0/my3.sql
[mysql@virtual-41 config]$ awk '/^group_replication/   { if ($NF != "=") print "set persist " $0 ";"}' my2.cnf > 8.0/my2.sql
[mysql@virtual-41 config]$ awk '/^group_replication/   { if ($NF != "=") print "set persist " $0 ";"}' my1.cnf > 8.0/my1.sql
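The awk one-liners above can also be expressed as a small conversion routine; the sketch below mirrors their logic (keep group_replication lines, skip empty-valued ones as awk's `$NF != "="` test does, wrap the rest in SET PERSIST):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch mirroring the awk conversion above: turn "group_replication... = value"
// lines from a my.cnf into SET PERSIST statements, skipping variables that
// have an empty value.
public class PersistSketch {

    static List<String> toPersist(List<String> cnfLines) {
        return cnfLines.stream()
                .map(String::trim)
                .filter(l -> l.startsWith("group_replication"))
                .filter(l -> !l.endsWith("="))          // skip empty-valued entries
                .map(l -> "set persist " + l + ";")
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
                "group_replication_member_weight = 60",
                "group_replication_recovery_ssl_ca =",  // empty value: skipped
                "report-host=primary");                 // not a GR variable: skipped
        toPersist(sample).forEach(System.out::println);
    }
}
```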

1.6         Modify the SQL files to:
1.6.1    Comment out deprecated (or unused) variables (group_replication_allow_local_disjoint_gtids_join)
#set persist group_replication_allow_local_disjoint_gtids_join = OFF;

1.6.2    Change group_replication_exit_state_action to OFFLINE_MODE only if needed.  OFFLINE_MODE is available in MySQL 8.0.19.
1.6.3    Change group_replication_auto_increment_increment = 1 (for SINGLE PRIMARY mode)
1.6.4    Quote the values that must be STRINGS, e.g.
set persist group_replication_group_name = "575f951a-427c-11ea-83d7-0800277b31d3";

1.6.5    Modify or append any further variables you may want; the following serve as examples ONLY.   You can also use the Shell to modify them after the upgrade.
(e.g.
group_replication_unreachable_majority_timeout =,
group_replication_member_expel_timeout=
group_replication_autorejoin_tries
set persist group_replication_consistency = "BEFORE_ON_PRIMARY_FAILOVER"
)


1.7         Start up the MySQL Secondary Server as a Standalone Server using the MySQL 8.0.19 binary
1.7.1    Make sure the PATH includes the new MySQL 8.0.19 bin directory so the new binary is used to start the database.
1.7.2    The startup process should automatically upgrade MySQL 5.7 to MySQL 8.0.19.  Because ALL group_replication% prefixed variables are commented out, the startup will not make the server rejoin the MySQL InnoDB Cluster.
1.7.3    Check the error log for any errors.   (The MySQL 5.7 to MySQL 8.0 upgrade itself should be taken care of as a separate process.)   This tutorial assumes the upgrade process finishes without any errors.
1.7.3.1   If there is any error, RESTORE the database as MySQL 5.7.
1.7.3.2   The MySQL 5.7 to MySQL 8.0 upgrade should be validated in a separate process: back up the original MySQL 5.7 database, clone it, upgrade the clone to MySQL 8.0 and check it carefully.   Only once all errors are fixed should the MySQL 5.7.x InnoDB Cluster to MySQL 8.0.x InnoDB Cluster upgrade be rolled out.
1.7.3.3   Sample error log for my3.cnf after the startup and upgrade to 8.0.19.  The Group Replication plugin starts, but since there are no group_replication% prefixed variables, the server does not rejoin the InnoDB Cluster.
2020-01-30T03:15:20.657436Z 0 [System] [MY-010116] [Server] /usr/local/mysql8019/bin/mysqld (mysqld 8.0.19-commercial) starting as process 15406
2020-01-30T03:15:20.705770Z 1 [System] [MY-011012] [Server] Starting upgrade of data directory.
2020-01-30T03:15:24.115723Z 0 [ERROR] [MY-011685] [Repl] Plugin group_replication reported: 'The group name option is mandatory'
2020-01-30T03:15:24.116032Z 0 [ERROR] [MY-011660] [Repl] Plugin group_replication reported: 'Unable to start Group Replication on boot'
2020-01-30T03:15:29.025526Z 2 [System] [MY-011003] [Server] Finished populating Data Dictionary tables with data.
2020-01-30T03:15:31.721829Z 5 [System] [MY-013381] [Server] Server upgrade from '50700' to '80019' started.
2020-01-30T03:15:54.090860Z 5 [System] [MY-013381] [Server] Server upgrade from '50700' to '80019' completed.
2020-01-30T03:15:54.422628Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2020-01-30T03:15:54.604614Z 0 [System] [MY-010931] [Server] /usr/local/mysql8019/bin/mysqld: ready for connections. Version: '8.0.19-commercial'  socket: '/home/mysql/data/3330/my.sock'  port: 3330  MySQL Enterprise Server - Commercial.

1.7.3.4   Execute the script of persisted variables, which includes the group_replication variables.
[mysql@virtual-41 config]$ mysql -uroot -h127.0.0.1 -P3330 < my3.sql
1.7.3.5   Check that the persisted variables were imported
1.7.3.6   Example for the my3 Secondary Server (at this moment the server is still a standalone server):
[mysql@virtual-41 config]$ mysql -uroot -h127.0.0.1 -P3330 -e "select * from performance_schema.persisted_variables;"
+----------------------------------------------------+--------------------------------------+
| VARIABLE_NAME                                      | VARIABLE_VALUE                       |
+----------------------------------------------------+--------------------------------------+
| group_replication_consistency                      | BEFORE_ON_PRIMARY_FAILOVER           |
| group_replication_allow_local_lower_version_join   | OFF                                  |
| group_replication_auto_increment_increment         | 1                                    |
| group_replication_autorejoin_tries                 | 288                                  |
| group_replication_bootstrap_group                  | OFF                                  |
| group_replication_components_stop_timeout          | 31536000                             |
| group_replication_compression_threshold            | 1000000                              |
| group_replication_enforce_update_everywhere_checks | OFF                                  |
| group_replication_exit_state_action                | OFFLINE_MODE                         |
| group_replication_flow_control_applier_threshold   | 25000                                |
| group_replication_flow_control_certifier_threshold | 25000                                |
| group_replication_flow_control_mode                | QUOTA                                |
| group_replication_group_name                       | 575f951a-427c-11ea-83d7-0800277b31d3 |
| group_replication_group_seeds                      | node1:13310,node1:13320              |
| group_replication_gtid_assignment_block_size       | 1000000                              |
| group_replication_ip_whitelist                     | 192.168.56.0/24                      |
| group_replication_local_address                    | node1:13330                          |
| group_replication_member_expel_timeout             | 120                                  |
| group_replication_member_weight                    | 60                                   |
| group_replication_poll_spin_loops                  | 0                                    |
| group_replication_recovery_complete_at             | TRANSACTIONS_APPLIED                 |
| group_replication_recovery_reconnect_interval      | 60                                   |
| group_replication_recovery_retry_count             | 10                                   |
| group_replication_recovery_ssl_verify_server_cert  | OFF                                  |
| group_replication_recovery_use_ssl                 | ON                                   |
| group_replication_single_primary_mode              | ON                                   |
| group_replication_ssl_mode                         | REQUIRED                             |
| group_replication_start_on_boot                    | ON                                   |
| group_replication_transaction_size_limit           | 0                                    |
| group_replication_unreachable_majority_timeout     | 120                                  |
+----------------------------------------------------+--------------------------------------+
[mysql@virtual-41 config]$
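As a side note, entries appear in performance_schema.persisted_variables when they are written with SET PERSIST (or SET PERSIST_ONLY). A minimal sketch against the same standalone my3 instance on port 3330 (the variable chosen here is only an example):

```shell
# SET PERSIST writes the value to mysqld-auto.cnf in the datadir and
# records it in performance_schema.persisted_variables.
mysql -uroot -h127.0.0.1 -P3330 -e "SET PERSIST group_replication_member_weight = 60;"

# Verify the variable is now listed among the persisted variables.
mysql -uroot -h127.0.0.1 -P3330 -e "SELECT variable_name, variable_value
  FROM performance_schema.persisted_variables
  WHERE variable_name = 'group_replication_member_weight';"
```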

1.7.3.7   Restart the server; it should rejoin the MySQL InnoDB Cluster.
1.7.3.8   Check the MySQL Server Error Log for any errors
1.7.3.9   Connect with MySQL Shell 8.0.15 (at this point the Shell has not yet been upgraded) and check the Cluster Status.
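A quick way to scan the error log is to grep for error/warning markers. The log path below is hypothetical and depends on your my3 configuration; check the instance's log_error setting first:

```shell
# Hypothetical log location; confirm with: mysql -uroot -h127.0.0.1 -P3330 -e "SELECT @@log_error;"
grep -E '\[(ERROR|Warning)\]' /data/my3/error.log | tail -n 20
```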
[mysql@virtual-41 config]$ mysqlsh
MySQL Shell 8.0.15-commercial

Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.

MySQL JS> \connect gradmin:grpass@primary:3310
Creating a session to 'gradmin@primary:3310'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 112498
Server version: 5.7.25-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)
No default schema selected; type \use <schema> to set one.

MySQL [primary ssl] JS> var x = dba.getCluster()
MySQL [primary ssl] JS> x.status()
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "primary:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "node1:3320": {
                "address": "node1:3320",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "node1:3330": {
                "address": "node1:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "primary:3310": {
                "address": "primary:3310",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "primary:3310"
}

MySQL [primary ssl] JS>
1.7.3.10            The Secondary Server (port 3330) is ONLINE and has rejoined.
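Besides cluster.status(), the rejoined member can also be verified directly from the Group Replication tables. A sketch run against the upgraded 3330 instance (the MEMBER_ROLE column is available in MySQL 8.0):

```shell
# Confirm the member reports ONLINE from performance_schema on the rejoined node.
mysql -uroot -h127.0.0.1 -P3330 -e "SELECT member_host, member_port, member_state, member_role
  FROM performance_schema.replication_group_members;"
```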
1.7.4    Repeat the process for the 2nd Secondary Server and then finally the Primary Server.
1.7.5    After ALL servers are upgraded, check the status again.  The R/W (Primary) role should have switched to another node.  Because this tutorial sets member weights, the Primary Server should now be the node on port 3320, as follows:
[mysql@virtual-41 mysql80]$ mysqlsh
MySQL Shell 8.0.15-commercial

Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.

MySQL JS> \connect gradmin:grpass@primary:3320
Creating a session to 'gradmin@primary:3320'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 236
Server version: 8.0.19-commercial MySQL Enterprise Server - Commercial
No default schema selected; type \use <schema> to set one.

MySQL [primary ssl] JS> var x = dba.getCluster()
MySQL [primary ssl] JS> x.status()
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "node1:3320",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "node1:3320": {
                "address": "node1:3320",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "node1:3330": {
                "address": "node1:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "primary:3310": {
                "address": "primary:3310",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "primary:3320"
}

MySQL [primary ssl] JS>

2          Upgrade the MySQL InnoDB Cluster Metadata to V2 Using MySQL Shell 8.0.19
2.2         Apply the GRANT statements for the clusterAdmin user reported by dba.upgradeMetadata({dryRun:true}) in MySQL Shell 8.0.19
MySQL [primary ssl] JS> dba.upgradeMetadata({dryRun:true})
ERROR: The account 'gradmin'@'%' is missing privileges required to manage an InnoDB cluster:
GRANT EXECUTE, SELECT ON *.* TO 'gradmin'@'%' WITH GRANT OPTION;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_bkp.* TO 'gradmin'@'%' WITH GRANT OPTION;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_previous.* TO 'gradmin'@'%' WITH GRANT OPTION;
Dba.upgradeMetadata: The account 'gradmin'@'%' is missing privileges required for this operation. (RuntimeError)

2.2.1    Execute the GRANT statements listed in the error output above.
2.2.2    You also need to run an extra GRANT statement:
mysql> grant SELECT  on performance_schema.global_variables to gradmin@'%' with grant option;
2.2.2.1   Otherwise you may get the following error during the metadata upgrade:
Step 1 of 1: upgrading from 1.0.1 to 2.0.0...
ERROR: SELECT command denied to user 'gradmin'@'primary.localhost' for column 'variable_value' in table 'global_variables'
Dba.upgradeMetadata: Unable to detect Metadata version. Please check account privileges. (RuntimeError)
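Once the grants are in place, the metadata upgrade can be re-run without the dryRun option. A sketch using the same gradmin account (mysqlsh starts in JavaScript mode by default; adjust the host/port to whichever node is currently the Primary):

```shell
# Re-run the metadata upgrade for real after the dry run reports no missing privileges.
mysqlsh gradmin:grpass@primary:3320 -e "dba.upgradeMetadata()"
```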

Once everything is completed, the servers are upgraded to 8.0.19 together with metadata V2.