View the status of a software RAID mirror or stripe
Reviewer: ceri ceri@FreeBSD.org FreeBSD|OpenBSD
Concept
In addition to providing drivers for hardware RAID devices, BSD systems also provide built-in mechanisms for configuring software RAID systems. Know the difference between RAID levels 0, 1, 3 and 5 and recognize which utilities are available to configure software RAID on each BSD system.
Introduction
Software RAID
Software RAID is performed by a kernel device driver; RAIDframe is one example. It is an inexpensive RAID solution that can be deployed on any system, and a good way to gain experience with RAID without investing in an expensive controller card.
Hardware RAID
Hardware RAID is performed by a controller card; the most common are produced by LSI and Adaptec. These cards offload parity computation and transactions across multiple disks, and present the operating system with a single virtual device representing the set. Note that many cards advertised as RAID cards are simply controller cards bundled with a driver that requires the OS to handle parity and transactions; these are not true hardware RAID solutions. Also, many hardware RAID cards on today's market can only be configured at boot time.
RAID Levels
RAID level 0
RAID level 0 traditionally described a grouping of disks with data striped evenly across them, without parity. Any single disk failure therefore results in the failure of the complete set. The term "RAID level 0" no longer necessarily means the data is evenly distributed across all disks in a stripe, only that the set of disks is not fault-tolerant.
RAID level 1
RAID level 1, also called mirroring or shadowing, groups disks into pairs; a copy of each block is stored on both disks of a pair. RAID level 1 is highly reliable: a set of N disks can tolerate up to N/2 disk failures without losing data, as long as both disks of the same pair do not fail.
RAID level 3
In RAID level 3, data is striped across the data disks, and a dedicated parity disk stores the parity of each stripe. When any single disk fails, its contents can be recovered by recomputing the missing data from the surviving disks and the parity disk. Multiple disk failures can be tolerated with the addition of multiple check disks.
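The parity recovery described above is just XOR arithmetic. A minimal sketch in shell, using arbitrary byte values to stand in for one corresponding block from each of three data disks:

```shell
#!/bin/sh
# Three corresponding bytes, one from each data disk (values are arbitrary).
d0=$((0xA5)); d1=$((0x3C)); d2=$((0x0F))

# The parity disk stores the XOR of the data disks' blocks.
parity=$(( d0 ^ d1 ^ d2 ))

# If disk 1 fails, its byte is recomputed from parity plus the survivors.
recovered=$(( parity ^ d0 ^ d2 ))
echo "recovered=$recovered expected=$d1"
```

The same relation underlies RAID 5; only the placement of the parity blocks differs.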
RAID level 5
RAID level 5 is similar to RAID level 3, except that the parity data is evenly distributed across all disks rather than stored on a dedicated parity disk.
Raid on Raid
It is possible to combine RAID levels. For instance, you may build two RAID level 1 sets of two disks each, then stripe those two sets together as RAID level 0 (often called RAID 1+0 or RAID 10). This is not commonly done in software due to complexity, but is available when necessary.
RAIDframe: framework for rapid prototyping of RAID structures
RAIDframe is a software RAID solution, generally used when a hardware RAID solution is not cost-effective.
RAIDframe was developed at Carnegie Mellon University. As distributed by CMU, it provides a RAID simulator for a number of different architectures, plus a user-level device driver and a kernel device driver for Digital Unix. Greg Oster reworked this framework into a NetBSD kernel-level device driver, and it was subsequently ported to OpenBSD and FreeBSD (the FreeBSD port has since been abandoned and is not present in recent releases).
To check the status of a RAID device such as /dev/raid0, use:
raidctl -s raid0
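A RAIDframe device is described by a configuration file and brought up with raidctl(8). Below is a hedged sketch for a two-disk RAID 1 set on NetBSD; the component names wd0a/wd1a and the serial number are illustrative, so adapt them to your hardware before use:

```shell
# /etc/raid0.conf - 1 row, 2 columns, 0 spare disks
cat > /etc/raid0.conf <<'EOF'
START array
1 2 0

START disks
/dev/wd0a
/dev/wd1a

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100
EOF

raidctl -C /etc/raid0.conf raid0   # force the initial configuration
raidctl -I 2009122401 raid0        # write component labels (arbitrary serial)
raidctl -i raid0                   # initialize parity/mirror contents
raidctl -s raid0                   # verify the status of the new set
```

The -C (force) form is only needed the first time, before component labels exist; thereafter the set autoconfigures or can be brought up with -c.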
ccd
ccd is a software RAID solution that can combine multiple partitions into a single virtual disk. The ccd(4) driver and the ccdconfig(8) utility are available natively on FreeBSD, NetBSD, OpenBSD and DragonFly BSD.
(Example output...)
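As a sketch, a two-disk striped ccd might be configured and inspected as follows; the disk partitions and the 128-sector interleave are illustrative values, not a recommendation:

```shell
ccdconfig ccd0 128 none /dev/da0s1e /dev/da1s1e   # stripe two partitions, 128-sector interleave
ccdconfig -g                                      # dump the current ccd configuration
ccdconfig -u ccd0                                 # unconfigure the device when done
```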
gstripe, graid3 and gmirror
gstripe, graid3, and gmirror are GEOM-based software RAID solutions on FreeBSD, providing RAID 0, RAID 3, and RAID 1 respectively. Example output...
View status of a gstripe set: gstripe status
View status of a gmirror set: gmirror status
View status of a graid3 set: graid3 status
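For example, a two-disk mirror might be created and then checked like this on FreeBSD (the label gm0 and the ada device names are illustrative):

```shell
gmirror load                      # load geom_mirror.ko if not compiled into the kernel
gmirror label -v gm0 ada1 ada2    # create the mirror; it appears as /dev/mirror/gm0
gmirror status                    # shows the mirror and the state of each component
```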
gvinum/vinum
gvinum, the GEOM-based successor to vinum, is a software RAID solution on FreeBSD. Example output....
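The overall state of a gvinum configuration can be inspected with its list subcommands, for example:

```shell
gvinum list    # show all drives, volumes, plexes and subdisks with their state
gvinum lv -v   # verbose listing of the volumes only
```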
bioctl
bioctl is an OpenBSD userland interface to hardware RAID controllers and enclosures, and to software RAID devices. Example of checking the health of a bioctl-compatible RAID set:
$ sudo bioctl arc0
Volume  Status     Size            Device
 arc0 0 Online     127999672320    sd2  RAID5
      0 Online     320072933376    0:0.0 noencl <ST3320620AS 3.AAD>
      1 Online     320072933376    0:1.0 noencl <ST3320620AS 3.AAD>
      2 Online     320072933376    0:2.0 noencl <ST3320620AS 3.AAC>
 arc0 1 Online     127999672320    sd3  RAID0
      0 Online     320072933376    0:0.0 noencl <ST3320620AS 3.AAD>
      1 Online     320072933376    0:1.0 noencl <ST3320620AS 3.AAD>
      2 Online     320072933376    0:2.0 noencl <ST3320620AS 3.AAC>
 arc0 2 Online     127999672320    sd4  RAID1
      0 Online     320072933376    0:0.0 noencl <ST3320620AS 3.AAD>
      1 Online     320072933376    0:1.0 noencl <ST3320620AS 3.AAD>
      2 Online     320072933376    0:2.0 noencl <ST3320620AS 3.AAC>
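bioctl also manages OpenBSD softraid(4) volumes. A hedged sketch of building and then checking a softraid RAID 1 mirror; the sd0a/sd1a partitions are illustrative and must be of type RAID in the disklabel:

```shell
bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0   # create a RAID 1 volume from two RAID partitions
bioctl sd2                                     # the volume attaches as a new disk (e.g. sd2); show its health
```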
Definitions
RAID set - a group of disks configured together and presented to the operating system as a single logical device.
RAID level - the scheme (striping, mirroring, parity) used to distribute data and redundancy across the disks of a set.
parity - redundant information, typically the XOR of the corresponding blocks on the data disks, from which a lost block can be recomputed.
reconstruction - the process of rebuilding the contents of a failed disk onto a replacement, using the surviving data and parity.
degraded mode - operation of a RAID set after a component has failed but before reconstruction completes; data remains available, but redundancy is lost and performance may suffer.
See Also
RAIDframe
- http://www.pdl.cmu.edu/RAIDframe/ CMU RAIDframe
- http://www.cs.usask.ca/staff/oster/raid.html NetBSD and RAIDframe
vinum(8), gvinum(8), gmirror(8), gstripe(8), graid3(8), raidctl(8), ccdconfig(8), softraid(4), bioctl(8)