Extending an EBS backed LVM volume on Amazon EC2

Some time ago we created an EBS backed LVM volume on Amazon EC2. The setup has been working without issues, and we have verified that our snapshots can be restored to another instance with no consistency problems. Now we need to increase the size of the volume, as the data we store on it has been growing quite a lot. The process could not be simpler:

The first step is to create two more EBS volumes (1 TiB each) in the AWS Console. Why two? The LVM volume is striped, so we need to grow it in pairs. After the new volumes have been attached to the instance we check dmesg to see the correct mapping (EC2 instances may sometimes change the device names):

root@backend:/mnt# dmesg
[29494096.486250]   sdg: unknown partition table
[29495074.531648]   sdi: unknown partition table
root@backend:/mnt#
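
If you prefer the command line over the AWS Console, the same volumes can be created and attached with the AWS CLI. This is only a sketch; the availability zone, volume IDs and instance ID below are placeholders:

aws ec2 create-volume --size 1024 --availability-zone us-east-1a
aws ec2 create-volume --size 1024 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-aaaaaaaa --instance-id i-xxxxxxxx --device /dev/sdg
aws ec2 attach-volume --volume-id vol-bbbbbbbb --instance-id i-xxxxxxxx --device /dev/sdi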

We create the two physical volumes for the LVM subsystem:

root@backend:/mnt# pvcreate /dev/sdg /dev/sdi
  Physical volume "/dev/sdg" successfully created
  Physical volume "/dev/sdi" successfully created
root@backend:/mnt#

If you check the output of pvdisplay you can see the two drives have been initialized, but are not in use:

root@backend:/mnt# pvdisplay
   --- Physical volume ---
   PV Name                      /dev/sdf
   VG Name                      vgebs
   PV Size                      1.00 TiB / not usable 4.00 MiB
   Allocatable                  yes (but full)
   PE Size                      4.00 MiB
   Total PE                     262143
   Free PE                      0
   Allocated PE                 262143
   PV UUID                      qfvOCY-gW7v-7lWR-TkG4-OaCH-aRcy-xGPucO

   --- Physical volume ---
   PV Name                      /dev/sdh
   VG Name                      vgebs
   PV Size                      1.00 TiB / not usable 4.00 MiB
   Allocatable                  yes (but full)
   PE Size                      4.00 MiB
   Total PE                     262143
   Free PE                      0
   Allocated PE                 262143
   PV UUID                      hxfiqo-frgP-rFSZ-0wZm-LDOH-MGEJ-JOXcUj

   --- NEW Physical volume ---
   PV Name                      /dev/sdg
   VG Name
   PV Size                      1.00 TiB
   Allocatable                  NO
   PE Size                      0
   Total PE                     0
   Free PE                      0
   Allocated PE                 0
   PV UUID                      FG7DRM-hkEX-AVQQ-j0uT-3zWP-go66-WZesmh

   --- NEW Physical volume ---
   PV Name                      /dev/sdi
   VG Name
   PV Size                      1.00 TiB
   Allocatable                  NO
   PE Size                      0
   Total PE                     0
   Free PE                      0
   Allocated PE                 0
   PV UUID                      wHj2Rc-UV6w-O32M-9y8B-mgXY-Prae-H8Hocz
root@backend:/mnt#

You can see the new volumes are present, have the correct size, and are available for use. We can now add them to our volume group with vgextend:

root@backend:/mnt# vgextend vgebs /dev/sdg /dev/sdi
  Volume group "vgebs" successfully extended
root@backend:/mnt#

Running vgdisplay shows that the two new volumes are part of the volume group (we now have 4 TiB across 4 disks):

root@backend:/mnt#  vgdisplay
   --- Volume group ---
   VG Name                       vgebs
   System ID
   Format                        lvm2
   Metadata Areas                4
   Metadata Sequence No          4
   VG Access                     read/write
   VG Status                     resizable
   MAX LV                        0
   Cur LV                        1
   Open LV                       1
   Max PV                        0
   Cur PV                        4
   Act PV                        4
   VG Size                       4.00 TiB
   PE Size                       4.00 MiB
   Total PE                      1048572
   Alloc PE / Size               524286 / 2.00 TiB
   Free   PE / Size              524286 / 2.00 TiB
   VG UUID                       OJWRog-PspS-8bDq-C0Vh-Rvkh-IXrj-7SxDaA
root@backend:/mnt#

Now we extend the logical volume to use the whole size of the group. Note that in theory we would ask lvextend for the full 2048G, but since LVM reserves some space for internal metadata we must leave a few GiB unallocated:

root@backend:/mnt# lvextend -L+2045G /dev/mapper/vgebs-lvebs
  Using stripesize of last segment 2.00 MiB
  Extending logical volume lvebs to 4.00 TiB
  Logical volume lvebs successfully resized
root@backend:/mnt#
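
If you would rather not do the math by hand, lvextend can also be told to consume every remaining free extent in the volume group. A minimal alternative, using the same device path as above:

lvextend -l +100%FREE /dev/mapper/vgebs-lvebs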

We can now use resize2fs to grow the filesystem to the end of the logical volume:

root@backend:/mnt#  resize2fs /dev/mapper/vgebs-lvebs
resize2fs 1.41.11 (14-Mar-2010)
Filesystem at /dev/mapper/vgebs-lvebs is mounted on /mnt/disk3; on-line resizing required
old desc_blocks = 128, new_desc_blocks = 256
Performing an on-line resize of /dev/mapper/vgebs-lvebs to 1073477632 (4k) blocks.
The filesystem on /dev/mapper/vgebs-lvebs is now 1073477632 blocks long.
root@backend:/mnt#

And we are done:

root@backend:/mnt#  df -h /dev/mapper/vgebs-lvebs
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vgebs-lvebs
                      4.0T  1.4T  2.4T  37% /mnt/disk3
root@backend:/mnt#

In our case we add the two new volumes to our snapshot script:

dmsetup suspend vgebs-lvebs
python /opt/scripts/manage_snapshots.py vol-81e3ae1e 2 'backend:/dev/sdf'
python /opt/scripts/manage_snapshots.py vol-4b356321 2 'backend:/dev/sdh'
python /opt/scripts/manage_snapshots.py vol-fb2eddda 2 'backend:/dev/sdg'
python /opt/scripts/manage_snapshots.py vol-b83aa997 2 'backend:/dev/sdi'
dmsetup resume vgebs-lvebs
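
The script itself is not shown here, but judging from its arguments each call presumably snapshots one EBS volume and keeps the last couple of copies. The snapshot step alone, sketched with the plain AWS CLI (the retention logic of manage_snapshots.py is not reproduced):

aws ec2 create-snapshot --volume-id vol-fb2eddda --description 'backend:/dev/sdg'
aws ec2 create-snapshot --volume-id vol-b83aa997 --description 'backend:/dev/sdi'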

The next step is to take a couple of hours to verify that the snapshots can be attached to a new instance without issues. The best part is that the whole process is straightforward and can be executed online, without stopping or rebooting the instance.
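
A quick way to run that restore test is to create fresh volumes from the four snapshots, attach them to a throwaway instance, and reactivate the volume group there. A rough sketch with placeholder snapshot, volume and instance IDs:

aws ec2 create-volume --snapshot-id snap-xxxxxxxx --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-cccccccc --instance-id i-xxxxxxxx --device /dev/sdf
# repeat for the other three snapshots, then on the test instance:
pvscan
vgchange -ay vgebs
mkdir -p /mnt/restore && mount /dev/mapper/vgebs-lvebs /mnt/restore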
