6. Software RAID

6.1. Setting up RAID devices and config files

6.1.1. Prepare /etc/mdadm.conf

echo 'DEVICE /dev/hd* /dev/sd*' > /etc/mdadm.conf

The DEVICE line tells mdadm which devices to scan when looking for array components.

6.1.2. Preparing the hard disks

For now we assume that we want to create either a RAID-0 or a RAID-1 array. For RAID-5 you just have to add more partitions (and therefore hard disks). Create a partition of maximal size on each disk, as sketched below. Afterwards fdisk -l should show you something like this:
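
A minimal interactive fdisk session for one disk could look like this (the exact prompts vary with your fdisk version; repeat it for the second disk):

# fdisk /dev/hda
Command (m for help): n
  (choose a primary partition, number 1, and accept the default first
   and last cylinder so the partition spans the whole disk)
Command (m for help): t
Hex code (type L to list codes): fd
Command (m for help): w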

# fdisk -l
Disk /dev/hda: 16 heads, 63 sectors, 79780 cylinders
Units = cylinders of 1008 * 512 bytes

Device Boot    Start       End    Blocks   Id  System
/dev/hda1          1     79780  40209088+  fd  Linux raid autodetect

Disk /dev/hdc: 16 heads, 63 sectors, 79780 cylinders
Units = cylinders of 1008 * 512 bytes

Device Boot    Start       End    Blocks   Id  System
/dev/hdc1          1     79780  40209088+  fd  Linux raid autodetect
[Note] Note

As Devil-Linux does not include RAID autodetect (there's really no need for it; read the linux-raid mailing list!), we just use the partition type "fd, Linux raid autodetect" to mark those partitions for ourselves. You can of course use the standard partition type "83, Linux" instead - but then someone might mistake it for an ordinary Linux partition and format it ;)

6.1.3. RAID-0 (no redundancy!!)

Use mdadm to create a RAID-0 device:

mdadm --create /dev/md0 --chunk=64 --level=raid0 --raid-devices=2 /dev/hda1 /dev/hdc1

Instead of /dev/md0 use any other md device if /dev/md0 is already in use by another array. You might also want to experiment with the chunk size (e.g. 8, 16, 32, 64, 128); use a hard disk benchmark to compare (see the sketch below), or stay with the default chunk size of 64k. You will probably have to change the device names given here to the ones which reflect the setup of your system.

# cat /proc/mdstat
Personalities : [raid0]
read_ahead 1024 sectors
md0 : active raid0 hdc1[1] hda1[0]
      80418048 blocks 64k chunks

Ok, that looks fine.
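
For a quick and rough read benchmark of the new array you can use hdparm, if it is available on your system (run it a few times and compare the results):

# hdparm -t /dev/md0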

6.1.4. RAID-1 (with redundancy!!)

Use mdadm to create a RAID-1 device:

mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/hda1 /dev/hdc1

Instead of /dev/md0 use any other md device if /dev/md0 is already in use by another array. Unlike RAID-0, RAID-1 does not stripe data, so there is no chunk size to tune here. You will probably have to change the device names given here to the ones which reflect the setup of your system.

# cat /proc/mdstat
Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
      40209024 blocks [2/2] [UU]
      [>....................]  resync =  0.7% (301120/40209024) finish=17.6min speed=37640K/sec

Ok, that looks fine.

[Note] Note

Before rebooting, wait until this resync is done; otherwise it will start over after the system is up again.
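
To keep an eye on the resync progress you can, for example, have watch re-read /proc/mdstat every two seconds:

# watch cat /proc/mdstat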

Save the information about the newly created array(s):

# mdadm --detail --scan >> /etc/mdadm.conf
# cat /etc/mdadm.conf
  DEVICE /dev/hd* /dev/sd*
  ARRAY /dev/md0 level=raid1 num-devices=2 UUID=d876333b:694e852b:e9a6f40f:0beb90f9

Looks good too!

Now you can put LVM on top of the newly created arrays to facilitate the auto-mounting of logical volumes in the devil-linux volume group.
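
As a minimal sketch, assuming the volume group is called devil-linux as mentioned above and you want a 10GB logical volume named data (adjust the names and sizes to your setup):

# pvcreate /dev/md0
# vgcreate devil-linux /dev/md0
# lvcreate -L 10G -n data devil-linux

If the devil-linux volume group already exists, use vgextend devil-linux /dev/md0 instead of vgcreate.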

Don't forget to run a final save-config!

6.2. Gathering information about RAID devices and disks

6.2.1. Show current status of raid devices

cat /proc/mdstat

Output for a currently degraded RAID-1 with a failed disk:

Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdc1[2](F) hda1[0]
      40209024 blocks [2/1] [U_]
unused devices: <none>

Output for a currently degraded RAID-1 with the faulty disk removed:

Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md0 : active raid1 hda1[0]
      40209024 blocks [2/1] [U_]
unused devices: <none>

Output for a currently rebuilding RAID-1:

Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdc1[2] hda1[0]
      40209024 blocks [2/1] [U_]
      [=======>.............]  recovery = 37.1% (14934592/40209024) finish=11.7min speed=35928K/sec
unused devices: <none>

6.2.2. Get more detailed info about RAID devices

# mdadm --query /dev/md0
/dev/md0: 38.35GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md0: No md super block found, not an md component.

# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Mon Jan 20 22:53:28 2003
     Raid Level : raid1
     Array Size : 40209024 (38.35 GiB 41.22 GB)
    Device Size : 40209024 (38.35 GiB 41.22 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Jan 21 00:49:47 2003
          State : dirty, no-errors
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0


    Number   Major   Minor   RaidDevice State
       0       3        1        0      active sync   /dev/hda1
       1      22        1        1      active sync   /dev/hdc1
           UUID : d876333b:694e852b:e9a6f40f:0beb90f9

6.2.3. Get more info about disks

# mdadm --query /dev/hda1
/dev/hda1: is not an md array
/dev/hda1: device 0 in 2 device active raid1 md0....

# mdadm --query /dev/hdc1
/dev/hdc1: is not an md array
/dev/hdc1: device 1 in 2 device active raid1 md0....

6.2.4. Managing RAID devices (RAID-1 and up!!)

Setting a disk faulty/failed:

# mdadm --fail /dev/md0 /dev/hdc1
[Caution] Caution

DO NOT ever run this on a RAID-0 or linear device, or your data is toast!

Removing a faulty disk from an array:

# mdadm --remove /dev/md0 /dev/hdc1

Clearing any previous RAID info from a disk (e.g. when reusing a disk from another, decommissioned RAID array):

# mdadm --zero-superblock /dev/hdc1

Adding a disk to an array:

# mdadm --add /dev/md0 /dev/hdc1
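
Putting it all together, replacing a failed disk in a RAID-1 typically looks like this (the device names are just examples; adjust them to your system). Mark the disk as failed (unless the kernel already did), remove it from the array, swap in the replacement disk and partition it as described in Section 6.1.2, clear any old superblock, and add it back; the rebuild then starts automatically:

# mdadm --fail /dev/md0 /dev/hdc1
# mdadm --remove /dev/md0 /dev/hdc1
# mdadm --zero-superblock /dev/hdc1
# mdadm --add /dev/md0 /dev/hdc1
# cat /proc/mdstat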