I figured I'd share the notes from the expansion in case they might prove useful to someone.
The existing layout:
- Server running RHEL4 x86_64
- MD1000 storage currently configured with several 1 TB drives allocated to a single virtual RAID 5 disk with a single hot spare
- The storage is treated as a block device (not partitioned via fdisk) as /dev/sdb
- The device is the sole member of a volume group and is entirely allocated to a single logical volume
- The logical volume is formatted with ext3
The plan:
- Physically add the new drives to the MD1000
- Assign the drives to the virtual disk and change the RAID from 5 to 6
- Resize the physical volume (pvresize)
- Extend the logical volume to use all of the additional extents (lvextend)
- Grow the file system to use the additional space
Physically adding the disks can be done with the system online: just insert the caddies and allow a few minutes for the RAID controller to identify the drives.
Once the drives are installed, restart Dell OpenManage (the server is a Dell, so we will use OpenManage to manage the RAID array) so that it identifies the new hardware:
$ sudo srvadmin-services.sh restart
Log into the OpenManage web page (by default https://<hostname>:1311).
Select 'Reconfigure' from the drop-down box next to the virtual disk to be expanded. On the next screens, select all of the drives to add to the virtual disk and change the RAID level from 5 to 6. Once the wizard completes, the array will begin rebuilding; expect this to take several days on an array this size.
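If you'd rather not keep a browser open for days, OMSA's command-line tools can at least monitor the rebuild. A sketch, assuming controller 0 (check the IDs on your system first; these commands need the OMSA services running):

```shell
# List controllers to find the right controller ID
omreport storage controller
# Show the state and rebuild progress of the virtual disks on controller 0
omreport storage vdisk controller=0
```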
Once the rebuild completed, I had to reboot the server before it would see the additional capacity.
Now for the Linux specific steps.
- The file system has to be unmounted. In my case, the partition is used to store user data and is not needed for the system to function, so it can be safely unmounted with the system in runlevel 3. If this were the root partition or another system partition, you would need to boot into rescue mode from the install media.
- Display the current size of the physical volume
- Resize the physical volume
- Running pvdisplay again will show the new free extents
- Now extend the logical volume by the number of free extents
- The lvdisplay command will confirm that the logical volume is now expanded
- Running a file system check isn't such a bad idea (this will take a long time to complete on a multi TB file system)
- Grow the file system (ext3). Note that ext2online resizes a mounted file system, so mount it before running the resize; resize2fs can instead grow it while it is still unmounted
- Mount it and let the users know they can begin to fill it up (that's their purpose in life, isn't it?)
# umount /data
# pvdisplay /dev/sdb | grep PE
PE Size (KByte) 4096
Total PE 1191679
Free PE 0
Allocated PE 1191679
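Before resizing, the pvdisplay numbers are worth a quick sanity check. This is just my own arithmetic, not part of the procedure: with 4 MiB extents (PE Size 4096 KiB), 1191679 allocated extents works out to roughly 4.55 TiB:

```shell
# PE Size is 4096 KiB = 4 MiB; Total PE is 1191679 (all allocated)
total_pe=1191679
total_mib=$((total_pe * 4))
echo "PV size: ${total_mib} MiB ($((total_mib / 1024)) GiB)"
# → PV size: 4766716 MiB (4654 GiB)
```

4654 GiB is about 4.55 TiB, the usable size of the original RAID 5 virtual disk.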
# pvresize -v -d /dev/sdb
Using physical volume(s) on command line
Archiving volume group "vg_md1000" metadata (seqno 2).
Resizing physical volume /dev/sdb from 1191679 to 2860031 extents.
Resizing volume "/dev/sdb" to 23429381760 sectors.
Updating physical volume "/dev/sdb"
Creating volume group backup "/etc/lvm/backup/vg_md1000" (seqno 3).
Physical volume "/dev/sdb" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
# pvdisplay /dev/sdb |grep PE
PE Size (KByte) 4096
Total PE 2860031
Free PE 1668352
Allocated PE 1191679
# lvextend -l +1668352 /dev/vg_md1000/lv_data
Extending logical volume lv_data to 10.91 TB
Logical volume lv_data successfully resized
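The 10.91 TB that lvextend reports checks out the same way (again, just arithmetic on the numbers above): 1668352 new extents add about 6.36 TiB, bringing the total to roughly 10.91 TiB:

```shell
# 1668352 free extents were added to the original 1191679 (4 MiB each)
added_mib=$((1668352 * 4))
total_mib=$((2860031 * 4))
echo "added: $((added_mib / 1024)) GiB, total: $((total_mib / 1024)) GiB"
# → added: 6517 GiB, total: 11171 GiB
```

11171 GiB is approximately 10.91 TiB, matching the lvextend output.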
# lvdisplay /dev/vg_md1000/lv_data
--- Logical volume ---
LV Name /dev/vg_md1000/lv_data
VG Name vg_md1000
LV UUID xxxxxxxxxxxxxxxxxx
LV Write Access read/write
LV Status available
# open 1
LV Size 10.91 TB
Current LE 2860031
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0
# e2fsck -f /dev/vg_md1000/lv_data
e2fsck 1.35 (28-Feb-2004)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg_md1000/lv_data: 827839/610140160 files (2.4% non-contiguous), 1060122031/1220279296 blocks
# ext2online /dev/vg_md1000/lv_data
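One last consistency check on the e2fsck numbers (my arithmetic; it assumes the 4 KiB file system block size that the block count implies): each 4 MiB extent holds 1024 blocks, so the block totals should track the PE totals exactly:

```shell
# With 4 KiB blocks there are 1024 blocks per 4 MiB extent
echo "before resize: $((1191679 * 1024)) blocks"   # 1220279296, as e2fsck shows
echo "after resize:  $((2860031 * 1024)) blocks"   # 2928671744
```

After ext2online finishes, dumpe2fs -h should report the larger block count, and df will show the new capacity once the file system is remounted.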