The MySQL Connectors in 8.0.19 include support for DNS SRV.
The following URLs are good starting points to understand what this feature is about:
https://www.slideshare.net/Grypyrg/mysql-connectors-8019-dns-srv
https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-dns-srv.html
https://learn.hashicorp.com/consul/security-networking/forwarding
MySQL Router in a MySQL InnoDB Cluster setup offers transparent access to the backend database servers. However, the application still has to know the IP address of the Router in order to access the InnoDB Cluster.
In general, MySQL Router can be installed together with the application, so that the application can reach MySQL Router via 'localhost' / 127.0.0.1.
If MySQL Router is installed on a separate machine or VM, an external service is needed to discover the Router IP address and to health-check the Router(s).
To access the Router externally, instead of via 'localhost' in a same-box deployment with the application, there are a couple of options:
1. Load Balancer
2. Virtual IP failover
3. Hard-coding the IPs of the MySQL Routers in the application
4. New!! - Using DNS SRV support from MySQL Connector
(other options such as VM respawn, etc...)
This article shares a DEMO setup of DNS SRV support in MySQL Connector/J 8.0.19 with Consul.
A full Consul setup is beyond the scope of this article; Consul is only used here to provide a simple way of testing the DNS SRV support in Connector/J 8.0.19.
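As a quick illustration of option 4: Connector/J 8.0.19 enables DNS SRV lookups through the jdbc:mysql+srv:// URL scheme, and no port is specified in the URL because host and port come from the SRV record. Below is a minimal sketch, assuming the router.service.consul service and the demo/demo account configured later in this article.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DnsSrvUrlExample {
    public static void main(String[] args) throws Exception {
        // Classic URL: the application must know a concrete Router address and port, e.g.
        //   jdbc:mysql://192.168.56.41:6446/?user=demo&password=demo
        // DNS SRV URL: host and port are resolved from the SRV record of the service
        // name, so no port (and no fixed Router IP) appears in the URL.
        String url = "jdbc:mysql+srv://router.service.consul/?user=demo&password=demo";

        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("select @@hostname, @@port")) {
            while (rs.next()) {
                // Prints the backend server the Router forwarded this connection to.
                System.out.println(rs.getString(1) + ":" + rs.getInt(2));
            }
        }
    }
}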
Environment
1. MySQL InnoDB Cluster
A running 3-node InnoDB Cluster.
For example :
MySQL [primary ssl] JS> dba.getCluster().status()
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "primary:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "primary:3310": {
                "address": "primary:3310",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            },
            "primary:3320": {
                "address": "primary:3320",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            },
            "primary:3330": {
                "address": "primary:3330",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "primary:3310"
}
2. MySQL Router running on 2 VMs - port = 6446 as RW routing
a. IP 192.168.56.41:6446
b. IP 192.168.56.42:6446
[mysql@virtual-42 lab]$ mysql -udemo -pdemo -h192.168.56.41 -P6446 -e "select @@hostname, @@port;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------------------+--------+
| @@hostname | @@port |
+----------------------+--------+
| virtual-41.localhost | 3310 |
+----------------------+--------+
[mysql@virtual-42 lab]$ mysql -udemo -pdemo -h192.168.56.42 -P6446 -e "select @@hostname, @@port;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------------------+--------+
| @@hostname | @@port |
+----------------------+--------+
| virtual-41.localhost | 3310 |
+----------------------+--------+
3. Consul, with a registered service router.service.consul pointing to the two Router instances.
4. A Java application connecting to the Router service.
1. Consul Download and Setup
Refer to https://www.consul.io/docs/install/index.html
a. Download the binary for the platform
b. Assuming a Linux environment for this demo, copy the "consul" binary to /sbin (as root)
2. Create the service json /etc/consul.d/router.json
# cat /etc/consul.d/router.json
{
  "services" : [
    {
      "id": "router1",
      "name": "router",
      "tags": [ "rw" ],
      "address": "192.168.56.41",
      "port": 6446,
      "checks": [
        {
          "args": ["/demo/consul/check1-6446.sh"],
          "interval": "10s"
        }
      ]
    },
    {
      "id": "router2",
      "name": "router",
      "tags": [ "rw" ],
      "address": "192.168.56.42",
      "port": 6446,
      "checks": [
        {
          "args": ["/demo/consul/check2-6446.sh"],
          "interval": "10s"
        }
      ]
    }
  ]
}
3. Health Check Scripts for the Services (/demo/consul/check1-6446.sh and /demo/consul/check2-6446.sh)
Create the health check scripts as follows (assuming the mysql client program is in the PATH):
<check1-6446.sh>
#! /bin/bash
# Probe the RW routing port through Router 1; discard all output.
mysql -ugradmin -pgrpass -hprimary -P6446 -e "select 1;" > /dev/null 2>&1
if [ $? != 0 ]
then
   # Exit 2 marks this service instance as critical in Consul.
   exit 2
fi
<check2-6446.sh>
#! /bin/bash
# Probe the RW routing port through Router 2; discard all output.
mysql -ugradmin -pgrpass -hsecondary -P6446 -e "select 1;" > /dev/null 2>&1
if [ $? != 0 ]
then
   exit 2
fi
4. Start the consul agent as root (so that it listens on port 53 instead of 8600)
Consul's default DNS port is 8600. However, the Connector performs standard DNS SRV lookups, so the DNS service answering them has to be reachable on port 53.
Options:
a. Run Consul DNS on port 53 (as in this demo).
b. Forward DNS queries for the consul domain from the regular DNS service to Consul (please check other resources, e.g. https://learn.hashicorp.com/consul/security-networking/forwarding ).
Note : This article is a demo only. Consul is used here, but there are other choices.
# consul agent -dev -enable-script-checks -dns-port=53 -config-dir=/etc/consul.d &
Once the agent is started, you can check that the services are registered and healthy:
# curl http://127.0.0.1:8500/v1/health/checks/router
[
    {
        "Node": "virtual-42.localhost",
        "CheckID": "service:router1",
        "Name": "Service 'router' check",
        "Status": "passing",
        "Notes": "",
        "Output": "",
        "ServiceID": "router1",
        "ServiceName": "router",
        "ServiceTags": [
            "rw"
        ],
        "Type": "script",
        "Definition": {},
        "CreateIndex": 12,
        "ModifyIndex": 14
    },
    {
        "Node": "virtual-42.localhost",
        "CheckID": "service:router2",
        "Name": "Service 'router' check",
        "Status": "passing",
        "Notes": "",
        "Output": "",
        "ServiceID": "router2",
        "ServiceName": "router",
        "ServiceTags": [
            "rw"
        ],
        "Type": "script",
        "Definition": {},
        "CreateIndex": 13,
        "ModifyIndex": 15
    }
]
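Besides the HTTP health endpoint, you can verify that the SRV record itself resolves (for example with dig router.service.consul SRV). The snippet below is not part of the original demo; it is just a small Java alternative using the JDK's built-in JNDI DNS provider, pointed at the local Consul agent.

import java.util.Hashtable;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

public class SrvLookupCheck {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        // Use the JDK's DNS provider and query the local Consul agent directly.
        env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
        env.put("java.naming.provider.url", "dns://127.0.0.1:53");

        InitialDirContext ctx = new InitialDirContext(env);
        Attributes attrs = ctx.getAttributes("router.service.consul", new String[] {"SRV"});
        // Each SRV entry carries priority, weight, port (6446 here) and a target host.
        System.out.println(attrs.get("SRV"));
    }
}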
5. Change /etc/resolv.conf so that DNS lookups go to the local Consul agent
search localhost
nameserver 127.0.0.1
domain consul
6. Test the Java Application
a. Make sure the Connector/J 8.0.19 driver is downloaded.
b. Sample Source Attached - (a sketch of a possible demo program is shown after the sample output below)
c. To run :
java -cp ./mysql-connector-java-8.0.19.jar:. LBDemo01
The output is simply like this:
java -cp ./mysql-connector-java-8.0.19.jar:. LBDemo01
Spawned threads : 3
By killing/stopping one of the Routers, the application continues to run.
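The actual sample source is attached to the original post. Purely as an illustration, here is a minimal sketch of what such a demo program might look like: only the class name LBDemo01, the service name router.service.consul, the demo/demo account and the "Spawned threads : 3" output are taken from this article; the thread loop, sleep interval and query are assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LBDemo01 {
    // DNS SRV based URL: Connector/J resolves router.service.consul and picks a
    // Router host/port from the SRV records served by Consul.
    private static final String URL =
            "jdbc:mysql+srv://router.service.consul/?user=demo&password=demo";

    public static void main(String[] args) {
        int threads = 3;
        for (int i = 0; i < threads; i++) {
            final int id = i;
            new Thread(() -> run(id)).start();
        }
        System.out.println("Spawned threads : " + threads);
    }

    private static void run(int id) {
        while (true) {
            // Open a fresh connection each round; Consul stops returning a Router
            // in DNS once its health check fails, so a killed Router is skipped.
            try (Connection con = DriverManager.getConnection(URL);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("select @@hostname, @@port")) {
                while (rs.next()) {
                    System.out.println("thread " + id + " -> "
                            + rs.getString(1) + ":" + rs.getInt(2));
                }
            } catch (Exception e) {
                System.out.println("thread " + id + " : " + e.getMessage());
            }
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}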
Monday, February 17, 2020
MySQL InnoDB Cluster Replication via Router to Slave Single Node using SSL
This article shares how to set up SSL replication from a MySQL InnoDB Cluster to a single Slave node via MySQL Router.
It is for Demo Purpose ONLY.
The video below shows replication surviving a Primary node failover in the MySQL InnoDB Cluster: replication switches to the new Primary node via MySQL Router.
https://youtu.be/R0jOMfZlF8c
The general steps of the setup are as follows (demo only):
Virtual Machine 1
1. A working MySQL InnoDB Cluster
Virtual Machine 2
2. A working MySQL node as the Slave
3. A working MySQL Router set up on the Slave node, pointing to the MySQL InnoDB Cluster on VM1.
The key part is to ensure the "key files" are the same on each node of the MySQL InnoDB Cluster.
For the InnoDB Cluster setup on VM1, for example in the demo setup:
Node1 DataDir : /home/mysql/data/3310
Node2 DataDir : /home/mysql/data/3320
Node3 DataDir : /home/mysql/data/3330
Copy all the *.pem files from the datadir of Node1 to the datadirs of Node2 and Node3. Restarting all the servers and bringing the InnoDB Cluster back ONLINE thereafter ensures the keys are the same for ALL nodes.
For the SINGLE node Slave in DR (VM2), the setup is as follows:
Set up MySQL Router to connect to the MySQL InnoDB Cluster on VM1, so that port 6446, the primary connection, always routes to the RW node.
Set up the MySQL Server as a standalone server on DR (either empty or restored from a backup of a MySQL Server in the InnoDB Cluster).
Copy the public key file (public_key.pem) from those servers on VM1 to VM2 (e.g. to /opt/download/lab/keys/public_key.pem).
The MySQL replication channel can then be set up as follows. Because the Router sits on VM2 and listens on port 6446, replication connects to the Router, with master_public_key_path pointing to the "public_key.pem" copied from VM1:
mysql -uroot -h127.0.0.1 -P<the port of the slave> << EOL1
change master to
master_host='127.0.0.1',
master_user='repl',
master_password='repl',
master_port=6446,
master_public_key_path='/opt/download/lab/keys/public_key.pem',
get_master_public_key=1,
master_auto_position=1
for channel 'channel1';
start slave for channel 'channel1';
EOL1
Once the server is set up properly, the replication channel is started.
Thanks for reading.