If you are unable to finish the lab in the ProLUG lab environment, we ask that you reboot the machine from the command line so that other students will have the intended environment.
cd ~
mkdir lvm_lab
cd lvm_lab
touch somefile
echo "this is a string of text" > somefile
cat somefile
echo "this is a string of text" > somefile
# Repeat 3 times
cat somefile
# How many lines are there?
echo "this is a string of text" >> somefile
# Repeat 3 times
cat somefile
# How many lines are there?
# cheat with `cat somefile | wc -l`
echo "this is our other test text" >> somefile
# Repeat 3 times
cat somefile | nl
# How many lines are there?
cat somefile | nl | grep test
# compare that with
cat somefile | grep test | nl
If you want to preserve positional lines in a file (know how much you've cut out when you grep something, or generally be able to find it in the unfiltered file for context), always place | nl | before your grep.
Pre Lab - Disk Speed tests:
When using the ProLUG lab environment, you should always check that there are no other users on the system with w or who.
After this, you may want to check the current state of the disks with lsblk /dev/xvda, as they retain their information even after a reboot resets the rest of the machine.
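For example:

w                  # or: who
lsblk /dev/xvda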
# If you need to wipe the disks, you should use fdisk or a similar partition utility.
fdisk /dev/xvda
p   # print to see partitions
d   # delete partitions or information
w   # write out the changes to the disk
This is an aside, before the lab. It is a way to test different reads or writes into or out of your filesystems as you create them. Different types of RAID and different disk setups will give different read and write speeds. This is a simple way to test them. Use these throughout the lab in each mount for fun and understanding.
Write tests (saving off write data - rename /tmp/file each time)
# Check /dev/xvda for a filesystem
blkid /dev/xvda
# If it does not have one, make one
mkfs.ext4 /dev/xvda
mkdir /space   # (If you don't have it. Lab will tell you to later as well)
mount /dev/xvda /space
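A minimal sketch of such a write and read test using dd (the /space/testfile and /tmp/file names, block size, and count here are assumptions, not the lab's exact values):

# Write test: push 1 GiB into the mounted filesystem and save the dd
# statistics (written to stderr) off to /tmp/file - rename it each run.
dd if=/dev/zero of=/space/testfile bs=1M count=1024 oflag=direct 2> /tmp/file
cat /tmp/file

# Read test: read the file back out and discard it.
dd if=/space/testfile of=/dev/null bs=1M iflag=direct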
If you are re-creating a test without blowing away the filesystem, change the name or counting numbers of testfile, because that's the only way to be sure there is not some type of filesystem caching going on to optimize. This is especially true in SAN write tests.
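For example, a second run might simply increment the names (these filenames are illustrative):

dd if=/dev/zero of=/space/testfile2 bs=1M count=1024 oflag=direct 2> /tmp/file2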
# Check physical volumes on your server (my output may vary)
fdisk -l | grep -i xvd
# Disk /dev/xvda: 15 GiB, 16106127360 bytes, 31457280 sectors
# Disk /dev/xvdb: 3 GiB, 3221225472 bytes, 6291456 sectors
# Disk /dev/xvdc: 3 GiB, 3221225472 bytes, 6291456 sectors
# Disk /dev/xvde: 3 GiB, 3221225472 bytes, 6291456 sectors
Looking at Logical Volume Management
Logical Volume Management is an abstraction layer that looks a lot like how
we carve up SAN disks for storage management. We have Physical Volumes that
get grouped up into Volume Groups. We carve Volume Groups up to be presented
as Logical Volumes.
Here at the Logical Volume layer we can assign RAID functionality from the Physical Volumes attached to a Volume Group, or do all kinds of different things that are “under the hood”. Logical Volumes are formatted with filesystems and mounted to the OS.
There are many important commands for showing your physical volumes, volume
groups, and logical volumes.
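The summary commands in question (named later in the lab) are:

pvs   # list physical volumes
vgs   # list volume groups
lvs   # list logical volumes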
With these you can see basic information about how the disks are allocated. Why do you think there is no output from these commands the first time you run them? Try the next commands to see if you can figure out what is happening. To see more in-depth information, try pvdisplay, vgdisplay, and lvdisplay.
If there is still no output, it's because this system is not configured for LVM.
You will notice that none of the disks you verified are attached have been allocated to LVM yet. We'll do that next.
Creating and Carving up your LVM resources
Disks for this lab are /dev/xvdb, /dev/xvdc, and /dev/xvdd (but verify before continuing and adjust accordingly).
We can do individual pvcreates for each disk (pvcreate /dev/xvdb), but we can also loop over them with a simple loop, as below. Use your drive letters.
for disk in b c d; do pvcreate /dev/xvd$disk; done
# Physical volume "/dev/xvdb" successfully created.
# Creating devices file /etc/lvm/devices/system.devices
# Physical volume "/dev/xvdc" successfully created.
# Physical volume "/dev/xvde" successfully created.
# To see what we made:
pvs
# PV         VG  Fmt  Attr PSize PFree
# /dev/xvdb      lvm2 ---  3.00g 3.00g
# /dev/xvdc      lvm2 ---  3.00g 3.00g
# /dev/xvde      lvm2 ---  3.00g 3.00g
vgcreate VolGroupTest /dev/xvdb /dev/xvdc /dev/xvde
# Volume group "VolGroupTest" successfully created
vgs
# VG           #PV #LV #SN Attr   VSize  VFree
# VolGroupTest   3   0   0 wz--n- <8.99g <8.99g
lvcreate -l +100%FREE -n lv_test VolGroupTest
# Logical volume "lv_test" created.
lvs
# LV      VG           Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
# lv_test VolGroupTest -wi-a----- <8.99g
# Formatting and mounting the filesystem
mkfs.ext4 /dev/mapper/VolGroupTest-lv_test
# mke2fs 1.42.9 (28-Dec-2013)
# Filesystem label=
# OS type: Linux
# Block size=4096 (log=2)
# Fragment size=4096 (log=2)
# Stride=0 blocks, Stripe width=0 blocks
# 983040 inodes, 3929088 blocks
# 196454 blocks (5.00%) reserved for the super user
# First data block=0
# Maximum filesystem blocks=2151677952
# 120 block groups
# 32768 blocks per group, 32768 fragments per group
# 8192 inodes per group
# Superblock backups stored on blocks:
#   32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
#
# Allocating group tables: done
# Writing inode tables: done
# Creating journal (32768 blocks): done
# Writing superblocks and filesystem accounting information: done
mkdir /space   # Created earlier
vi /etc/fstab
# Add the following line
# /dev/mapper/VolGroupTest-lv_test /space ext4 defaults 0 0
# reload fstab
systemctl daemon-reload
If this command works, there will be no output. We use df -h in the next command to verify that the new filesystem exists. Using mount -a, rather than manually mounting the filesystem from the command line, is an old administration trick I picked up over the years.
By setting our mount in /etc/fstab and then telling the system to mount everything, we verify that this will come back up properly during a reboot. We have mounted and verified a persistent mount in one step.
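A minimal sketch of that verification step, assuming the /etc/fstab entry added above:

mount -a          # mount everything listed in /etc/fstab; no output means it worked
df -h /space      # verify the new filesystem is mounted at /space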
The following command is one way to comment out the line in /etc/fstab. If you had to do this across multiple servers, this could be useful. (Or you can just use vi for simplicity.)
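A sed one-liner along these lines would do it (the exact match pattern is an assumption):

# Comment out the /space entry in /etc/fstab by prefixing the line with '#'
sed -i 's|^/dev/mapper/VolGroupTest-lv_test|#&|' /etc/fstab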
Create a RAID 5 filesystem and mount it to the OS. (For brevity's sake we will be limiting show commands from here on out; please use pvs, vgs, and lvs often for your own understanding.)
for disk in c e f; do pvcreate /dev/sd$disk; done
# Physical volume "/dev/sdc" successfully created.
# Physical volume "/dev/sde" successfully created.
# Physical volume "/dev/sdf" successfully created.
vgcreate VolGroupTest /dev/sdc /dev/sde /dev/sdf
lvcreate -l +100%FREE --type raid5 -n lv_test VolGroupTest
mkfs.xfs /dev/mapper/VolGroupTest-lv_test
vi /etc/fstab
# fix the /space line to have these parameters (change ext4 to xfs)
# /dev/mapper/VolGroupTest-lv_test /space xfs defaults 0 0
df -h
# Filesystem                        Size  Used Avail Use% Mounted on
# /dev/mapper/VolGroup00-LogVol08   488M   34M  419M   8% /var/log/audit
# /dev/mapper/VolGroupTest-lv_test   10G   33M   10G   1% /space
Since we're now using RAID 5, we would expect the size to no longer match the full 15GB; 10GB is much more of a RAID 5 value, about 66% of the raw disk space.
Spend 5 minutes reading the man lvs page to read up on RAID levels and what they can accomplish.
To run RAID 5, at least 3 disks are needed; to run RAID 6, at least 4 disks are needed.
lvremove /dev/mapper/VolGroupTest-lv_test
# Do you really want to remove active logical volume VolGroupTest/lv_test? [y/n]: y
# Logical volume "lv_test" successfully removed
vgremove VolGroupTest
# Volume group "VolGroupTest" successfully removed
for disk in c e f; do pvremove /dev/sd$disk; done
# Labels on physical volume "/dev/sdc" successfully wiped.
# Labels on physical volume "/dev/sde" successfully wiped.
# Labels on physical volume "/dev/sdf" successfully wiped.
Working with MDADM as another RAID option
There could be a reason to use MDADM on the system. For example, you want RAID handled outside of your LVM so that you can bring in sets of new disks already RAIDed and treat them as their own Physical Volumes. Think, “I want to add another layer of abstraction so that even my LVM is unaware of the RAID levels.” This has a special use case, but is still useful to understand.
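A minimal sketch of that approach, assuming three free disks (/dev/xvdb, /dev/xvdc, and /dev/xvdd are assumptions here; use your own drive letters) and the /etc/fstab entry from earlier:

# Build a RAID 5 array with mdadm, then hand the whole array to LVM as one PV
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/xvdb /dev/xvdc /dev/xvdd
cat /proc/mdstat                        # watch the array assemble and sync
pvcreate /dev/md0                       # LVM only ever sees /dev/md0, not the member disks
vgcreate VolGroupTest /dev/md0
lvcreate -l +100%FREE -n lv_test VolGroupTest
mkfs.xfs /dev/mapper/VolGroupTest-lv_test
mount -a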
Verify with df -h that your /space is mounted.
There is no procedure in this lab for breaking down this MDADM RAID.
You are root/administrator on your machine, and you do not care about the data on this RAID.
Can you use the internet, man pages, or other documentation to take this RAID down safely and clear those disks?
Can you document your steps so that you or others could come back and do this process again?
Info
Be sure to reboot the lab machine from the command line when you are done.