Create a Master-Master MySQL replication – Ubuntu Server 10.04 x64

There are lots of howtos about creating a MySQL cluster, and this is another one that might be useful in some special cases. In this tutorial I will create a two-node master/master MySQL replication on Ubuntu Server 10.04. My cluster used two HP DL380 G6 nodes, each with three 15K hard disks in RAID 5, connected to SAN storage over Fibre Channel.

Note: Before beginning I want to mention that a master/master configuration is great for avoiding a single point of failure in your system architecture; however, it will not distribute load across the nodes. I use MySQL specifically for user management and permissions, and distribute load through my web application.

Ubuntu Server installs ext4 by default. According to the Lucid Lynx release notes, you may want to change this to ext3 first if you have performance-sensitive applications – which is the case for a MySQL database server.
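To see which filesystem your MySQL data actually sits on, a small helper like this works (a sketch, not from the original tutorial; /var/lib/mysql is the default datadir on Ubuntu):

```shell
# fs_type DIR
# Print the type of the filesystem that DIR lives on.
fs_type() {
    df -PT "$1" | awk 'NR == 2 { print $2 }'
}

# usage: fs_type /var/lib/mysql   (prints e.g. ext3 or ext4)
```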

We will use two nodes, node1 and node2, with the IP addresses 10.10.0.1 and 10.10.0.2 respectively.

First, install MySQL 5.1 on both nodes:

hatem@node1$ sudo apt-get install mysql-server mysql-client

You will have to set the MySQL root password during installation.

Then we have to configure the first master, node1:

hatem@node1$ sudo vi /etc/mysql/my.cnf

Comment out the bind-address line, then set a unique value for server-id and the name of the database (masterdbname) you want to replicate:

[mysqld]
# bind-address = 127.0.0.1
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = masterdbname
binlog_ignore_db = mysql
binlog_ignore_db = test
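If both nodes will ever accept writes at the same time, it is also worth keeping their auto-increment sequences from colliding. MySQL provides two standard variables for this; a sketch for node1 (node2 would use auto_increment_offset = 2), not part of the original tutorial:

```ini
# node1 generates ids 1, 3, 5, ... and node2 generates 2, 4, 6, ...
auto_increment_increment = 2
auto_increment_offset    = 1
```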

Restart the database for the changes to take effect:

hatem@node1$ sudo /etc/init.d/mysql restart

If you have a database dump you can import it on node1; otherwise, just create the database masterdbname manually:

hatem@node1$ mysql -u root -p
mysql> CREATE DATABASE masterdbname;

We will have to grant replication permission to a user on node1:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'user'@'%' IDENTIFIED BY 'password';
mysql> FLUSH PRIVILEGES;
mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000034 |      443 | masterdbname | mysql,test       |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
mysql> quit
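A note on the grant above: 'user'@'%' lets the replication account connect from any host. If you prefer to lock it down, standard MySQL GRANT syntax lets you restrict the account to the peer's address (here node2's, 10.10.0.2) instead:

```sql
-- same grant as above, restricted to node2's address
GRANT REPLICATION SLAVE ON *.* TO 'user'@'10.10.0.2' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
```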

Now we can move to node2 and configure its MySQL server:

hatem@node2$ sudo vi /etc/mysql/my.cnf

We will have a similar config here too, but with a different server-id:

[mysqld]
# bind-address = 127.0.0.1
server-id = 2
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = masterdbname
binlog_ignore_db = mysql
binlog_ignore_db = test

Restart MySQL:

hatem@node2$ sudo /etc/init.d/mysql restart

Finally we can connect to node2 and set up the first replication direction:

hatem@node2$ mysql -u root -p
mysql> CREATE DATABASE masterdbname;
mysql> CHANGE MASTER TO MASTER_HOST='10.10.0.1', MASTER_USER='user', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000034', MASTER_LOG_POS=443;

Notice that the MASTER_LOG_FILE and MASTER_LOG_POS values come from the previous show master status result.
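Copying these coordinates by hand is error-prone. As a sketch (not part of the original tutorial), a small shell helper can turn the output of mysql -N -e 'SHOW MASTER STATUS' into the matching CHANGE MASTER TO statement; the 'user'/'password' credentials below are the tutorial's placeholders:

```shell
# build_change_master HOST
# Reads one line of `mysql -N -e 'SHOW MASTER STATUS'` output
# (File<TAB>Position<TAB>...) on stdin and prints the matching
# CHANGE MASTER TO statement pointing at HOST.
build_change_master() {
    awk -v host="$1" '
    BEGIN { q = sprintf("%c", 39) }   # q is a single-quote character
    NR == 1 {
        printf "CHANGE MASTER TO MASTER_HOST=%s%s%s, MASTER_USER=%suser%s, MASTER_PASSWORD=%spassword%s, MASTER_LOG_FILE=%s%s%s, MASTER_LOG_POS=%s;\n",
               q, host, q, q, q, q, q, q, $1, q, $2
    }'
}

# usage: mysql -N -u root -p -e 'SHOW MASTER STATUS' | build_change_master 10.10.0.1
```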

The first replication direction is configured; we can start the slave:

mysql> START SLAVE;
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.10.0.1
Master_User: user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000034
Read_Master_Log_Pos: 261
Relay_Log_File: Node2-relay-bin.000002
Relay_Log_Pos: 406
Relay_Master_Log_File: mysql-bin.000034
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
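Rather than eyeballing this output every time, a small helper (a sketch, not from the original tutorial) can parse the \G output and confirm that both replication threads are running:

```shell
# check_slave_ok
# Reads the output of `mysql -e 'SHOW SLAVE STATUS \G'` on stdin and
# succeeds only if both Slave_IO_Running and Slave_SQL_Running are Yes.
check_slave_ok() {
    [ "$(grep -c -E 'Slave_(IO|SQL)_Running: Yes')" -eq 2 ]
}

# usage: mysql -u root -p -e 'SHOW SLAVE STATUS \G' | check_slave_ok && echo replication OK
```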

If Slave_IO_Running and Slave_SQL_Running are both Yes, your first replication direction is working, and you can verify it by creating a simple table.

At node1 (the current master) create a table:

mysql> use masterdbname;
mysql> create table testmaster21 (mid int(11) auto_increment, PRIMARY KEY (mid)) Engine=MyISAM;

Check the available tables at node2 (the current slave)… Tada:

mysql> show tables;
+------------------------+
| Tables_in_masterdbname |
+------------------------+
| testmaster21           |
+------------------------+
1 row in set (0.00 sec)

Now the second part is very easy: we first need to check the master status on node2:

mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000034 |      261 | masterdbname | mysql,test       |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Then run on node1:

mysql> CHANGE MASTER TO MASTER_HOST='10.10.0.2', MASTER_USER='user', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000034', MASTER_LOG_POS=261;
mysql> START SLAVE;
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.10.0.2
Master_User: user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000034
Read_Master_Log_Pos: 261
Relay_Log_File: Node1-relay-bin.000002
Relay_Log_Pos: 406
Relay_Master_Log_File: mysql-bin.000034
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

To test the reverse direction, you can drop the table on node2 and make sure it is dropped on node1 too.

That’s all!