How To Create RAID Software Ubuntu
By Justin Ellingwood
Posted July 9, 2018
Ubuntu 18.04 · Debian 9 · Ubuntu 16.04
Introduction
The mdadm utility can be used to create and manage storage arrays using Linux’s
software RAID capabilities. Administrators have great flexibility in coordinating their
individual storage devices and creating logical storage devices that have greater
performance or redundancy characteristics.
In this guide, we will go over a number of different RAID configurations that can be
set up using an Ubuntu 18.04 server.
Prerequisites
In order to complete the steps in this guide, you should have:
A non-root user with sudo privileges on an Ubuntu 18.04 server.
A basic understanding of RAID terminology and concepts, including the RAID level appropriate for your workload.
Multiple raw storage devices available on your server to use as array components.
Resetting Existing RAID Devices
Throughout this guide, we will be reusing the same set of storage devices. If they are already assembled into an array, that array must be torn down before the devices can be reused.
Warning: This process will completely destroy the array and any data written to it.
Make sure that you are operating on the correct array and that you have copied off
any data you need to retain prior to destroying the array.
Begin by finding the active arrays in the /proc/mdstat file:
cat /proc/mdstat
Output
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4]
[raid10]
md0 : active raid0 sdc[1] sdd[0]
209584128 blocks super 1.2 512k chunks
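Before the component devices can be reused, the array must be unmounted and stopped. A sketch of that teardown, assuming the array above is mounted at /mnt/md0 (adjust the mount point to match your system):

```shell
# Unmount the filesystem that lives on the array
sudo umount /mnt/md0

# Stop the array and remove it from the system
sudo mdadm --stop /dev/md0
```

The array name (/dev/md0 here) should match the entry shown in /proc/mdstat.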
Warning: Keep in mind that the /dev/sd* names can change any time you reboot!
Check them every time to make sure you are operating on the correct devices.
To discover the devices that were used to build the array, list the block devices on the system:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G linux_raid_member disk
sdd 100G linux_raid_member disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
After discovering the devices used to create an array, zero their superblock to
remove the RAID metadata and reset them to normal:
sudo mdadm --zero-superblock /dev/sdc /dev/sdd
You should also remove any persistent references to the array, such as its entry in
the /etc/fstab file and its definition in the /etc/mdadm/mdadm.conf file.
Afterwards, update the initramfs so that the early boot process does not try to
bring the array online:
sudo update-initramfs -u
At this point, you should be ready to reuse the storage devices individually, or as
components of a different array.
Creating a RAID 0 Array
To get started, find the identifiers for the raw disks that you will be using:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have two disks without a filesystem, each 100G in size. In
this example, these devices have been given the /dev/sda and /dev/sdb identifiers
for this session. These will be the raw components we will use to build the array.
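The /proc/mdstat output below corresponds to an array built from these two devices. As a sketch, the command to combine them into a striped (RAID 0) array named /dev/md0 would look like this (the device names are from this example session; verify yours with lsblk first):

```shell
# Create a striped (RAID 0) array from the two raw disks
sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
```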
You can ensure that the array was successfully created by checking the /proc/mdstat file:
cat /proc/mdstat
Output
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
[raid10]
md0 : active raid0 sdb[1] sda[0]
209584128 blocks super 1.2 512k chunks
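Before the array can hold data, a filesystem must be created on it and mounted. A sketch of those steps, assuming ext4 and a mount point of /mnt/md0 (both conventional choices, not requirements):

```shell
# Put an ext4 filesystem on the array device
sudo mkfs.ext4 -F /dev/md0

# Create a mount point and mount the new filesystem
sudo mkdir -p /mnt/md0
sudo mount /dev/md0 /mnt/md0
```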
Make sure that the new space is available:
df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is mounted and accessible.
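To make sure that the array is reassembled automatically at boot, its definition should be appended to the /etc/mdadm/mdadm.conf file. A sketch of one common way to do this, by scanning the currently active arrays:

```shell
# Append the current array definition to the mdadm configuration file
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```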
Update the initramfs so that the array will be available during the early boot process:
sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic
mounting at boot:
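A sketch of one way to append such an entry, assuming the array is mounted at /mnt/md0 with an ext4 filesystem as in this example:

```shell
# Add an fstab entry so the array is mounted automatically at boot
# (nofail keeps boot from hanging if the array is unavailable)
echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
```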
Creating a RAID 1 Array
To get started, find the identifiers for the raw disks that you will be using:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have two disks without a filesystem, each 100G in size. In
this example, these devices have been given the /dev/sda and /dev/sdb identifiers
for this session. These will be the raw components we will use to build the array.
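The mdadm prompt shown next is what you would see after running the array-creation command. As a sketch, creating a mirrored (RAID 1) array from these two devices would look like this (device names are from this example session):

```shell
# Create a mirrored (RAID 1) array from the two raw disks
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
```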
Output
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 104792064K
Continue creating array? y
The mdadm tool will start to mirror the drives. This can take some time to complete,
but the array can be used during this time. You can monitor the progress of the
mirroring by checking the /proc/mdstat file:
cat /proc/mdstat
Output
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
[raid10]
md0 : active raid1 sdb[1] sda[0]
104792064 blocks super 1.2 [2/2] [UU]
[====>................] resync = 20.2% (21233216/104792064)
finish=6.9min speed=199507K/sec
Make sure that the new space is available:
df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 99G 60M 94G 1% /mnt/md0
The new filesystem is mounted and accessible.
Update the initramfs so that the array will be available during the early boot process:
sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic
mounting at boot:
Creating a RAID 5 Array
To get started, find the identifiers for the raw disks that you will be using:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have three disks without a filesystem, each 100G in size.
In this example, these devices have been given the /dev/sda, /dev/sdb,
and /dev/sdc identifiers for this session. These will be the raw components we will
use to build the array.
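The /proc/mdstat output below corresponds to an array built from these three devices. As a sketch, the command to combine them into a RAID 5 array would look like this (device names are from this example session):

```shell
# Create a RAID 5 array (striping with distributed parity) from the three raw disks
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
```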
You can monitor the progress of the array assembly by checking the /proc/mdstat file:
cat /proc/mdstat
Output
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
[raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[===>.................] recovery = 15.6% (16362536/104792064)
finish=7.3min speed=200808K/sec
Warning: Due to the way that mdadm builds RAID 5 arrays, while the array is still
building, the number of spares in the array will be inaccurately reported. This means
that you must wait for the array to finish assembling before updating
the /etc/mdadm/mdadm.conf file. If you update the configuration file while the array is
still building, the system will have incorrect information about the array state and will
be unable to assemble it automatically at boot with the correct name.
Make sure that the new space is available:
df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 197G 60M 187G 1% /mnt/md0
The new filesystem is mounted and accessible.
After the array finishes assembling, check the /proc/mdstat file again to confirm that all three members are active:
cat /proc/mdstat
Output
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
[raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
Update the initramfs so that the array will be available during the early boot process:
sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic
mounting at boot:
Creating a RAID 6 Array
To get started, find the identifiers for the raw disks that you will be using:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have four disks without a filesystem, each 100G in size. In
this example, these devices have been given the /dev/sda, /dev/sdb, /dev/sdc,
and /dev/sdd identifiers for this session. These will be the raw components we will
use to build the array.
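The /proc/mdstat output below corresponds to an array built from these four devices. As a sketch, the command to combine them into a RAID 6 array would look like this (device names are from this example session):

```shell
# Create a RAID 6 array (striping with double distributed parity) from the four raw disks
sudo mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```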
You can monitor the progress of the array assembly by checking the /proc/mdstat file:
cat /proc/mdstat
Output
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1]
[raid10]
md0 : active raid6 sdd[3] sdc[2] sdb[1] sda[0]
209584128 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
[>....................] resync = 0.6% (668572/104792064)
finish=10.3min speed=167143K/sec
Make sure that the new space is available:
df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 197G 60M 187G 1% /mnt/md0
The new filesystem is mounted and accessible.
Update the initramfs so that the array will be available during the early boot process:
sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic
mounting at boot:
Creating a RAID 10 Array
By default, two copies of each data block will be stored in what is called the “near”
layout. The possible layouts that dictate how each data block is stored are:
near: the default arrangement. Copies of each chunk are written consecutively when striping, meaning that the copies of a data block are located around the same part of each device.
far: the first and subsequent copies are written to different parts of the storage devices in the array. This can improve read performance on traditional spinning disks at the expense of write performance.
offset: each stripe is copied, offset by one drive, so the copies of a chunk reside on consecutive devices at nearby offsets.
You can find out more about these layout options in the md man page:
man 4 md
You can also read this man page online.
To get started, find the identifiers for the raw disks that you will be using:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have four disks without a filesystem, each 100G in size. In
this example, these devices have been given the /dev/sda, /dev/sdb, /dev/sdc,
and /dev/sdd identifiers for this session. These will be the raw components we will
use to build the array.
You can set up two copies using the near layout by not specifying a layout and
copy number:
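A sketch of that default invocation, assuming the four devices from this example session:

```shell
# With no --layout flag, mdadm defaults to the "near" layout with 2 copies (n2)
sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```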
For instance, to create an array that has 3 copies in the offset layout, the command
would look like this:
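A sketch of that command, assuming the same four devices as above:

```shell
# --layout=o3 requests the "offset" layout with 3 copies of each data block
sudo mdadm --create --verbose /dev/md0 --level=10 --layout=o3 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```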
You can monitor the progress of the array assembly by checking the /proc/mdstat file:
cat /proc/mdstat
Output
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1]
[raid10]
md0 : active raid10 sdd[3] sdc[2] sdb[1] sda[0]
209584128 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
[===>.................] resync = 18.1% (37959424/209584128)
finish=13.8min speed=206120K/sec
Make sure that the new space is available:
df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 197G 60M 187G 1% /mnt/md0
The new filesystem is mounted and accessible.
Update the initramfs so that the array will be available during the early boot process:
sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic
mounting at boot:
Conclusion
In this guide, we demonstrated how to create various types of arrays using
Linux’s mdadm software RAID utility. RAID arrays offer some compelling redundancy
and performance enhancements over using multiple disks individually.
Once you have settled on the type of array needed for your environment and
created the device, you will need to learn how to perform day-to-day management
with mdadm. Our guide on how to manage RAID arrays with mdadm on Ubuntu
16.04 can help get you started.