Friday, December 4, 2009

Expand that File System

The other day I received several 1 TB disks to add to an existing Dell MD1000. The user purchased the drives to expand a single file system on their RHEL4 server.

I figured I'd share the notes from the expansion in case they might prove useful to someone.

The existing layout:
  • Server running RHEL4 x86_64
  • MD1000 storage currently configured with several 1 TB drives allocated to a single virtual RAID 5 disk with a single hot spare
  • The storage is presented as a whole, unpartitioned block device (no fdisk partition table), /dev/sdb
  • The device is the sole member of a volume group and is entirely allocated to a single logical volume
  • The logical volume is formatted with ext3
The task:
  • Physically add the new drives to the MD1000
  • Assign the drives to the virtual disk and change the RAID from 5 to 6
  • Resize the physical volume (pvresize)
  • Extend the logical volume to use all of the additional extents (lvextend)
  • Grow the file system to use the additional space
Back up all of the data, just in case!
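
How you back up several terabytes is left as an exercise, but assuming a second volume with enough free space mounted at /backup (a hypothetical path), plain rsync will do:

# rsync -aHx /data/ /backup/data/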

Physically adding the disks can be done with the system online: just insert the caddies and allow a few minutes for the drives to be identified by the RAID controller.

Once the drives are installed, restart Dell OpenManage on the server (the server is a Dell, so we'll use OpenManage to manage the RAID array) so that it identifies the new hardware.

$ sudo srvadmin-services.sh restart
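
If you prefer the command line, omreport (also part of OpenManage) should list the new physical disks once the services restart; the controller number here is an assumption, adjust it for your system:

$ sudo omreport storage pdisk controller=0
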
Log into the OpenManage web page (https://<hostname>:1311/) and navigate to Storage -> PERC 6/E -> Virtual Disks

Select 'Reconfigure' from the drop-down box next to the virtual disk to be expanded. On the next screens, select the drives to add to the virtual disk and change the RAID level from 5 to 6. Once you confirm, the array will begin reconstruction. This will probably take several days to complete.

Once the reconstruction finished, I had to reboot the server before it would see the additional capacity.
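
It may be possible to skip the reboot by asking the kernel to rescan the device, though I haven't verified that this updates the capacity on a RHEL4 (2.6.9) kernel:

# echo 1 > /sys/block/sdb/device/rescan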

Now for the Linux-specific steps.
  1. The file system has to be unmounted. In my case, the partition stores user data and is not needed for the system to function, so it can be safely unmounted with the system in runlevel 3. If you were resizing the root partition or another system partition, you'd need to boot into rescue mode from the install media.
  2. # umount /data
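    If the unmount fails because something still has files open, fuser (from the psmisc package) will point at the culprit:
    # fuser -vm /data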
  3. Display the current size of the physical volume
  4. # pvdisplay /dev/sdb | grep PE

    PE Size (KByte) 4096
    Total PE 1191679
    Free PE 0
    Allocated PE 1191679
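    At 4 MiB per extent, 1191679 allocated extents works out to about 4.5 TiB, the size of the original virtual disk.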
  5. Resize the physical volume
  6. # pvresize -v -d /dev/sdb

    Using physical volume(s) on command line
    Archiving volume group "vg_md1000" metadata (seqno 2).
    Resizing physical volume /dev/sdb from 1191679 to 2860031 extents.
    Resizing volume "/dev/sdb" to 23429381760 sectors.
    Updating physical volume "/dev/sdb"
    Creating volume group backup "/etc/lvm/backup/vg_md1000" (seqno 3).
    Physical volume "/dev/sdb" changed
    1 physical volume(s) resized / 0 physical volume(s) not resized
  7. Running pvdisplay again will show the new free extents
  8. # pvdisplay /dev/sdb | grep PE

    PE Size (KByte) 4096
    Total PE 2860031
    Free PE 1668352
    Allocated PE 1191679
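    The arithmetic checks out: 2860031 total extents minus the 1191679 already allocated leaves 1668352 free extents, or roughly 6.4 TiB of new space.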
  9. Now extend the logical volume by the number of free extents
  10. # lvextend -l +1668352 /dev/vg_md1000/lv_data

    Extending logical volume lv_data to 10.91 TB
    Logical volume lv_data successfully resized
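    Newer LVM2 releases can compute the extent count for you with the %FREE syntax, though I haven't checked whether the LVM2 shipped with RHEL4 accepts it:
    # lvextend -l +100%FREE /dev/vg_md1000/lv_data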
  11. The lvdisplay command will confirm that the logical volume is now expanded
  12. # lvdisplay /dev/vg_md1000/lv_data
    --- Logical volume ---
    LV Name /dev/vg_md1000/lv_data
    VG Name vg_md1000
    LV UUID xxxxxxxxxxxxxxxxxx
    LV Write Access read/write
    LV Status available
    # open 1
    LV Size 10.91 TB
    Current LE 2860031
    Segments 1
    Allocation inherit
    Read ahead sectors 0
    Block device 253:0
  13. Running a file system check isn't such a bad idea (this will take a long time to complete on a multi-TB file system)
  14. # e2fsck -f /dev/vg_md1000/lv_data

    e2fsck 1.35 (28-Feb-2004)
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    /dev/vg_md1000/lv_data: 827839/610140160 files (2.4% non-contiguous), 1060122031/1220279296 blocks
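    A tip: a full fsck of a volume this size can run for hours, so it's worth running it inside a detachable screen session so a dropped SSH connection doesn't kill it:
    # screen -S fsck
    # e2fsck -f /dev/vg_md1000/lv_data
    Detach with Ctrl-a d and reattach later with screen -r fsck.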

  15. Mount the file system again; unlike the resize2fs shipped with RHEL4, ext2online grows a mounted ext3 file system
  16. # mount /data
  17. Grow the file system (ext3)
  18. # ext2online /dev/vg_md1000/lv_data
  19. Let the users know they can begin to fill it up (that's their purpose in life, isn't it?)
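
A quick df will confirm the new capacity before you hand the file system back to the users:

# df -h /data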

1 comment:

Unknown said...

In RHEL4, ext3 is limited to 8TB file systems.

RHEL5 supports 16TB ext3 file systems, but you have to provide the -F switch to mkfs.ext3