MySQL Cluster Configuration
A. Preparations:
1. Install VMware Player for Windows (I'm using Windows 7, by the way),
2. Create a VM of an Ubuntu Desktop instance,
3. Disable IP filtering in this instance (read my post here).
B. Steps:
1. Log in as root
2. Create a new directory under /root/Downloads
cd ~/Downloads
mkdir mysql
cd mysql
3. Download MySQL Cluster from the web

wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.2/mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz

4. Extract the tarball

tar xzf mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz

5. Move the extracted directory and create a symbolic link to it called /usr/local/mysql

mv mysql-cluster-gpl-7.2.6-linux2.6-x86_64 /usr/local/
ln -s /usr/local/mysql-cluster-gpl-7.2.6-linux2.6-x86_64 /usr/local/mysql

6. Create a new directory called mysql-cluster inside the mysql link directory

cd /usr/local/mysql/
mkdir mysql-cluster
7. Create a new config file called config.ini inside the mysql link directory

cd /usr/local/mysql/
vi config.ini
8. Fill config.ini with the cluster configuration (a sketch of its likely contents follows).
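A minimal sketch of config.ini: it assumes the management node's own address is 192.168.45.133 (the connectstring the SQL node uses later), a single data node at the hypothetical address 192.168.45.134, and the data directory created in step 9.

[ndbd default]
NoOfReplicas=1                    # one data node in this walkthrough
DataDir=/var/lib/mysql-cluster    # the directory created in step 9

[ndb_mgmd]
HostName=192.168.45.133           # this management node (assumed address)

[ndbd]
HostName=192.168.45.134           # hypothetical data node address

# one slot for the SQL node
[mysqld]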
9. Create a new directory to store data, as set inside config.ini

mkdir /var/lib/mysql-cluster
10. Execute ndb_mgmd inside /usr/local/mysql/bin/ to run the management node

cd /usr/local/mysql
./bin/ndb_mgmd --config-file=config.ini
11. To check, execute ndb_mgm inside /usr/local/mysql/bin

./bin/ndb_mgm

12. Run SHOW at the ndb_mgm prompt

ndb_mgm> SHOW
13. If we succeeded in installing the management node, our ndb_mgm console will show something like below.
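For reference, on a fresh management node, before any other nodes connect, SHOW prints roughly this (node IDs, addresses, and version strings are illustrative, matching the sketch config above):

ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     1 node(s)
id=2 (not connected, accepting connect from 192.168.45.134)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.45.133  (mysql-5.5.x ndb-7.2.6)

[mysqld(API)]   1 node(s)
id=3 (not connected, accepting connect from any host)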
That's it, guys.

MySQL Cluster Configuration: NDB Data Node
A. Preparations:
1. Turn on the Cluster Manager service, so that when the NDB node runs, it will register itself with the manager automatically,
2. Create a VM of an Ubuntu Server instance (I'm using v12.04),
3. Disable IP filtering in this Ubuntu instance (read my post here).
B. Steps:
1. Log in as root
2. Open the /var/tmp/ directory, and download MySQL Cluster from the web

cd /var/tmp/
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.2/mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz
8. Create the /etc/mysql directory and fill in my.cnf (a sketch follows)

mkdir /etc/mysql
cd /etc/mysql/
vi my.cnf
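A minimal sketch of this my.cnf, assuming the management node from the previous part answers at 192.168.45.133 (the same address the SQL node's connectstring uses later):

[mysql_cluster]
# assumed management node address; replace with your own
ndb-connectstring=192.168.45.133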
9. Create a new directory to store data

mkdir /var/lib/mysql-cluster
10. Execute ndbd
cd /usr/local/mysql
./bin/ndbd
11. If everything is OK, our screen will show something like below.
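For reference, a freshly started ndbd that reaches the manager prints two lines roughly like these (timestamps, node id, and address are illustrative):

2012-05-20 12:00:00 [ndbd] INFO     -- Angel connected to '192.168.45.133:1186'
2012-05-20 12:00:00 [ndbd] INFO     -- Angel allocated nodeid: 2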
12. To further check whether this NDB node has registered with our management node, run SHOW again from the ndb_mgm console on the manager

ndb_mgm> SHOW
13. If our NDB node is registered, the ndb_mgm console will show something like below.
That's it, guys.

MySQL Cluster Configuration: SQL Node
Open my.cnf on the machine that will run mysqld and add the following lines:
......
[mysqld]
port=3306
socket=/tmp/mysql-cluster
ndbcluster
ndb-connectstring=192.168.45.133
......
This particular code block is actually the minimum set for creating an SQL node. So, what we need to do next is:
A. Prerequisites:
1. Keep the Cluster Manager service ON
2. Keep the NDB service ON
B. Steps:
1. Create a new user called mysql, inside a mysql group

groupadd mysql
useradd -g mysql mysql
5. Set new access permissions on both the MySQL server directory and its data (run these from /usr/local/mysql)

chown -R root .
chown -R mysql data
chgrp -R mysql .
From the above pic, we can see that we already have the first half of the cluster we want to make (1 ndbd node and 1 mysqld).
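The steps not shown (2 through 4, and starting the server after step 5) follow the standard MySQL binary-install sequence; a sketch, assuming the same /usr/local/mysql layout as the earlier parts:

cd /usr/local/mysql
./scripts/mysql_install_db --user=mysql   # initialize the system tables (before the chown step)
./bin/mysqld_safe --user=mysql &          # after step 5: start the SQL node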
A Basic 3-Server MySQL Cluster with Load Balancing
There are a few guides out there for setting up a MySQL Cluster already; unfortunately the large majority of them aren't geared towards the beginner, and those that are generally involve a single-server setup. This guide will aim to explain the purpose of each choice and will get you up and running with a basic 3-server setup with a single load-balancing server.
Preliminary Requirements
There are a few things you will need first: 3 servers (or VMs) for the cluster and 1 server for load balancing. They should all be running Ubuntu 11.04 Server Edition.
Each node should have a minimum of 1GB of RAM and 16GB of hard drive space. The management node can work with 512MB of RAM and the default 8GB of hard drive space that VMware allocates.
Package Installation
These packages will be installed on all 3 servers (management node, data node 1 and data node 2). There is a bug in the installation of the MySQL Cluster packages on Ubuntu: you will need to install MySQL Server first, then install the Cluster packages.

sudo apt-get install mysql-server

The mysql-server package installs some crucial configuration files and scripts that would otherwise be skipped and cause dpkg to get hung up during configuration. The root password you select here will be the MySQL root password you use on the nodes later.
This is the bread and butter of a MySQL Cluster installation. In a production environment you would run with more than 2 data nodes across more than one physical machine on the same network. There should always be as little latency between nodes as possible. If you do choose to run VMs on a physical host, you should never overallocate RAM to the nodes: the database is mainly stored in RAM, and overallocation means some data will be placed into the host's swap space, which increases latency by factors of 10, 100 or even 1000 in the worst cases.
The Management Node
This node is the brain of the cluster. Without it, the data nodes can lose sync and cause all sorts of inconsistencies. In a production environment you would have at least 2 management nodes (the configuration changes slightly and will be noted in the files here). Here is the configuration file (/etc/mysql/ndb_mgmd.cnf):
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=256M # How much memory to allocate for data storage
IndexMemory=18M # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the world database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=172.16.59.134
#For multiple management nodes we just create more [NDB_MGMD] sections for each node
# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=172.16.59.132
DataDir=/var/lib/mysql-cluster
BackupDataDir=/var/lib/mysql-cluster/backup
DataMemory=512M
[NDBD]
# IP address of the second storage node
HostName=172.16.59.133
DataDir=/var/lib/mysql-cluster
BackupDataDir=/var/lib/mysql-cluster/backup
DataMemory=512M
# one [MYSQLD] per storage node
#These do not require any configuration, they are the front-end access to our data
#Their addresses can be assigned when they connect
[MYSQLD]
[MYSQLD]
This configuration assumes 2 things: first, that your nodes are isolated on their own network and all the machines on it are trusted (VLAN them onto their own network, damnit). Second, it assumes you are going to run two mysqld instances (I run them on the data nodes themselves and balance the load with a 4th server using mysql-proxy).
The Data Nodes
The data nodes are much easier to configure; we can use the configuration that was installed and add 4 lines. Here are the changes that need to be made (/etc/mysql/my.cnf):
We add this to the existing section:

[mysqld]
ndbcluster
ndb-connectstring=172.16.59.134 # This is the management node
#ndb-connectstring=host=172.16.59.134,host=172.16.59.135 # If we had TWO management nodes, one on 172.16.59.134 and one on 172.16.59.135
We add this section at the bottom of the file:

[mysql_cluster]
ndb-connectstring=172.16.59.134
#ndb-connectstring=host=172.16.59.134,host=172.16.59.135 # If we had two management nodes
One thing missing on the data nodes is the backup directory I have referenced in the ndb_mgmd.cnf file. The following commands will create it and set its permissions (do this on each data node):
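A minimal sketch, matching the BackupDataDir path set in ndb_mgmd.cnf and assuming the ndbd process runs as the mysql user that the packages create:

sudo mkdir -p /var/lib/mysql-cluster/backup
sudo chown -R mysql:mysql /var/lib/mysql-cluster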
Checking the Cluster
First off, we want to verify that the cluster is running properly; run the following on the management node:
sudo ndb_mgm
ndb_mgm> show
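A healthy cluster's output looks roughly like this (node IDs and version strings are illustrative):

Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @172.16.59.132  (Version: ndb-7.x.x, Nodegroup: 0, Master)
id=3    @172.16.59.133  (Version: ndb-7.x.x, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @172.16.59.134  (Version: ndb-7.x.x)

[mysqld(API)]   2 node(s)
id=4    @172.16.59.132  (Version: ndb-7.x.x)
id=5    @172.16.59.133  (Version: ndb-7.x.x)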
You should see at least 5 separate nodes: the first two are the data nodes, the middle ones are the management nodes, and lastly you will see the mysql daemons. If the data nodes are stuck in the starting state, a quick restart should fix them; DO NOT JUST TYPE REBOOT.
From ndb_mgm:

ndb_mgm> shutdown

Issuing the shutdown command from within ndb_mgm will bring the cluster down. You can then safely reboot the data nodes; however, make sure to restart the management node first, as otherwise the data nodes will come up without it (you should probably just reboot the management node(s) and then the data nodes for good measure). If everything goes well you should be set.
Test Databases
Connect to the first data node and run the following commands:
mysql -u root -p
mysql> show databases;
You should see a few databases; let's create a test database and add a table to it:
mysql> create database test_db;
mysql> use test_db;
mysql> create table test_table (ival int(1)) engine=ndbcluster;
mysql> insert into test_table values(1);
mysql> select * from test_table;
You should see the single value 1 in the table. Now connect to one of the other data nodes and you should be able to do the following:
sudo mysql -u root -p
mysql> show databases;
This should show the database we created on the first node, test_db:
mysql> use test_db;
mysql> select * from test_table;
If all is well, this should show the same value as we had before. Congratulations, your cluster is working.
Advanced Setup: A Load Balancer
This is actually the easier part of the guide. On a 4th server, install mysql-proxy:
sudo apt-get install mysql-proxy
Next, let's start the proxy and have it connect to our two data nodes:
screen -S proxy
mysql-proxy --proxy-backend-addresses=172.16.59.132:3306 --proxy-backend-addresses=172.16.59.133:3306
CTRL+A, D (to detach from the screen session)
This starts the proxy and allows it to balance across the two nodes specified on the command line. If you want to specify a node as read-only, substitute --proxy-backend-addresses= with --proxy-read-only-backend-addresses=.
Let's connect to the proxy and see if it works:
mysql -u root -p -h 127.0.0.1 -P 4040
mysql> use test_db;
mysql> select * from test_table;
If all is working well, you will see the same things as if you were connected to your actual mysql instances.