Set up a two-server GlusterFS array Tutorial
For Ubuntu 16.04. For hard-drive volume pools and load balancing.
This article presents a step-by-step description for how to set up a two-server GlusterFS array.
Having two web servers behind a load balancer means that they have to synchronize the files that they serve and write to. On modern Linux distributions, GlusterFS is the easiest way to accomplish this task.
Install GlusterFS
To install GlusterFS, run the following commands on both servers:
apt-get update
apt-get install -y glusterfs-server glusterfs-client
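Optionally, confirm that the packages installed correctly and that the Gluster management daemon is running before continuing. This is only a quick sanity check; the exact version string and service name can vary between releases:
gluster --version                 # Prints the installed GlusterFS version
service glusterfs-server status   # The glusterd management daemon should be running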
Prepare the bricks
GlusterFS needs a file system that supports extended attributes to store its data. It creates directories in the file system and calls those directories bricks.
On a 4-GB Performance server, you have a whole extra drive that’s already partitioned for you:
root@web01:~# ls /dev/xvde*
/dev/xvde  /dev/xvde1
Format this partition with ext4 by running the following command on both servers:
mkfs.ext4 /dev/xvde1
You don’t want anything other than GlusterFS using this partition, so mount it in a hidden directory. You can put it in /srv/.bricks. If you have multiple brick volumes per server, you can put them in /srv/.bricks1 and so on.
mkdir /srv/.bricks
echo /dev/xvde1 /srv/.bricks ext4 defaults 0 1 >> /etc/fstab
mount /srv/.bricks
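Optionally, you can confirm that the new partition is mounted and supports extended attributes before handing it to GlusterFS. This is a minimal check, assuming the attr package (which provides setfattr and getfattr) is installed; the test file name is arbitrary:
df -h /srv/.bricks                                  # The new partition should appear here
touch /srv/.bricks/xattr-test
setfattr -n user.test -v ok /srv/.bricks/xattr-test
getfattr -n user.test /srv/.bricks/xattr-test       # Should print user.test="ok"
rm /srv/.bricks/xattr-test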
In the example, the term bricks is used because each directory in the setup is a GlusterFS brick. A GlusterFS volume is built of bricks (usually bricks on different hosts).
Open the firewall
Open the firewall to allow all traffic on this network. Run the following command on both servers:
ufw allow in on eth2
In this example, the network is on the device eth2. You can use the command ip addr show to see all network devices and the networks associated with them.
If you added web01 to the network first, it has the IP address 192.168.0.1 and web02 has the IP address 192.168.0.2.
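If you prefer not to open the interface to all traffic, you can restrict the rule to the other server's address instead. This is a sketch using the example addresses above; adjust the interface and IPs to your environment:
ufw allow in on eth2 from 192.168.0.2   # Run on web01
ufw allow in on eth2 from 192.168.0.1   # Run on web02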
Link the servers
Introduce the two Gluster servers to each other. The following example runs the command on web01 and tells it to link with web02:
root@web01:~# gluster peer probe web02
peer probe: success
Run the gluster peer status command on web02 to confirm that the servers are linked:
root@web02:~# gluster peer status
Number of Peers: 1

Hostname: 192.168.0.1
Port: 24007
Uuid: d080d5cc-4181-4d3f-91bc-ef42bb4e8ec9
State: Peer in Cluster (Connected)
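Note that gluster peer probe web02 requires the name web02 to resolve from web01. If you are not running DNS on the internal network, one simple option is an /etc/hosts entry on web01 using the example address above:
echo 192.168.0.2 web02 >> /etc/hosts   # Run on web01 so that the probe can find web02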
Create the GlusterFS volumes
Now you can create the volumes. Run the following command on only one of the servers:
root@web01:~# gluster volume create www replica 2 transport tcp 192.168.0.1:/srv/.bricks/www 192.168.0.2:/srv/.bricks/www
volume create: www: success: please start the volume to access data
The parts of the command are as follows:
- gluster - The Gluster command-line tool.
- volume create - You are creating a Gluster volume.
- www - The volume name. You can call it whatever you like. You will use it later in your /etc/fstab file when mounting the volume.
- replica 2 - Every file on this volume is replicated across two bricks. In this case, that means at least two servers (because there is only one brick on each server).
- transport tcp - Use TCP/IP to synchronize the volumes.
- 192.168.0.1:/srv/.bricks/www - The first brick with which the volume is built (on web01).
- 192.168.0.2:/srv/.bricks/www - The second brick with which the volume is built (on web02).
For more information about these options, you can run the man gluster command.
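If you want to double-check the definition before starting the volume, gluster volume info shows it; until the next step, the volume's status should read Created rather than Started:
gluster volume info www   # Status: Created until the volume is started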
Start and mount the volume
The volume exists, but it is not yet being synchronized or served.
Start the volume by running the following command on either server:
root@web01:~# gluster volume start www
volume start: www: success
Mount the volume in /srv/www initially. Run the following commands on both servers:
mkdir /srv/www
echo localhost:/www /srv/www glusterfs defaults,_netdev 0 0 >> /etc/fstab
mount /srv/www
Create the mount point, configure it in /etc/fstab, and then actually mount the GlusterFS volume.
In /etc/fstab, add one special option: _netdev. This option tells Ubuntu that the file system resides on a device that requires network access, and not to mount it until the network has been enabled.
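To verify the mount on each server, you can check the file system type and the new fstab entry; this is only a sanity check and not required by the setup:
df -hT /srv/www            # Type should show fuse.glusterfs
grep /srv/www /etc/fstab   # Shows the entry added above, including _netdev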
Test it
At this point, a file written to or read from /srv/www/* should be the same on both systems.
To test it, create a file on web01, view it on web02, delete it on web02, and then verify that it is gone on web01:
web01:
echo hello > /srv/www/test.txt
web02:
cat /srv/www/test.txt # Should print 'hello'
rm /srv/www/test.txt # Delete the file from web02
web01:
ls /srv/www/ # Should return nothing
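If you prefer to script this round trip from a single host, the following sketch runs entirely on web01 and assumes you have SSH access from web01 to web02 (for example, with keys already set up); the file name is arbitrary:
echo hello > /srv/www/test.txt                               # Write on web01
ssh web02 'cat /srv/www/test.txt && rm /srv/www/test.txt'    # Read and delete on web02
ls /srv/www/                                                 # Should return nothing on web01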
Move your web content to GlusterFS
In this example, /var/www must be on GlusterFS. Ensure that web01 has the correct /var/www.
If you are running this on an already live server, you must shut down Apache on both servers. You could set up a custom "down for maintenance" page and health monitoring on the load balancer first if you like, but that’s beyond the scope of this article.
On web01, move /var/www/ to /srv/www/:
mv /var/www/* /srv/www/
On web02, if you’re sure that you don’t need it, you can free up space by deleting /var/www:
rm -rf /var/www
mkdir /var/www
Create a bind mount so that /srv/www is accessible via /var/www. Run the following commands on both servers:
echo /srv/www /var/www none defaults,bind 0 0 >> /etc/fstab
mount /var/www
You should be able to see your web content with ls /var/www on both servers.
Summary
Your /etc/fstab should look as follows on both servers:
# /dev/xvda1 / ext4 errors=remount-ro,noatime,barrier=0 0 1
/dev/xvde1 /srv/.bricks ext4 defaults 0 1
localhost:/www /srv/www glusterfs defaults,_netdev 0 0
/srv/www /var/www none defaults,bind 0 0
If you run findmnt | tail -n3, the response should look something like this:
root@web01:~# findmnt | tail -n3
|-/srv/.bricks  /dev/xvde1      ext4            rw,relatime,attr2,inode64,noquota
|-/srv/www      localhost:/www  fuse.glusterfs  rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
`-/var/www      localhost:/www  fuse.glusterfs  rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
This response shows that /dev/xvde1 is mounted at /srv/.bricks and that localhost:/www (the GlusterFS volume) is mounted in two places, /srv/www and /var/www (thanks to the bind mount).
GlusterFS should show everything as healthy:
root@web01:~# gluster peer status
Number of Peers: 1

Hostname: web02
Port: 24007
Uuid: 56e02356-d2c3-4787-ae25-6b46e867751a
State: Peer in Cluster (Connected)

root@web01:~# gluster volume list
www

root@web01:~# gluster volume info www
Volume Name: www
Type: Replicate
Volume ID: bf244b65-4201-4d2f-b8c0-2b11ee836d65
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.0.1:/srv/.bricks/www
Brick2: 192.168.0.2:/srv/.bricks/www
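Two more commands are useful for ongoing health checks and should be available in the GlusterFS release that ships with Ubuntu 16.04 (output omitted here): gluster volume status reports whether each brick process is online, and gluster volume heal www info lists any files that still need to be replicated, for example after one server has been offline:
gluster volume status www
gluster volume heal www info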
Conclusion
You have installed GlusterFS and configured your servers to share your web content. Both servers hold a copy of the files and share changes almost instantaneously.
Where to go from here
The next article in this GlusterFS series, Add and Remove GlusterFS Tutorial, describes how to add and remove servers in a GlusterFS array.