🌀 Integrating LVM with Hadoop and
providing Elasticity to DataNode Storage 🌀

Rahul Kumar
4 min read · Mar 14, 2021

In this article we are going to integrate LVM (Logical Volume Management) with Hadoop and provide elasticity to DataNode storage:-

For this we first need to create an LVM setup:-

Creating LVM :-

To create an LVM setup, we first need to attach a hard disk:

Step 1: Attach a hard disk to the virtual machine

Follow these steps in the VM manager's GUI:

Settings -> Storage -> Controller (Add Hard Disk) -> Create -> VHD -> Dynamically allocated -> Size (whatever you want) -> OK

In my case, I attached a hard disk of 30 GiB.

Using the “fdisk -l” command you can see the additional storage we added to the VM.

Step 2: Creating a logical volume

Steps required to create a logical volume:

  1. Create a physical volume (pv) → 2. Create a volume group (vg) → 3. Create a logical volume (lv)

Step 2.1: Creating a physical volume

The “pvcreate” command is used to initialize a block device to be used as a physical volume.

Create a physical volume (pv) using the command:

$ pvcreate /dev/sdb

We can check whether the physical volume was created by using the command:

$ pvdisplay

Step 2.2: Creating a volume group

To create a volume group from one or more physical volumes, use the vgcreate command. The vgcreate command creates a new volume group by name and adds at least one physical volume to it.

Create a volume group (vg) using the command:

$ vgcreate datavg /dev/sdb

We can check whether the volume group was created by using the command:

$ vgdisplay datavg

Step 2.3: Creating a logical volume

Create a logical volume named datalv with a size of 15 GiB:

$ lvcreate --size 15G --name datalv datavg

We can check whether the logical volume was created by using the command:

$ lvdisplay datavg/datalv
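The whole of Step 2 can be sketched as one small script (a sketch only, using this article's device and volume names; the commands must run as root):

```shell
#!/usr/bin/env bash
# Sketch of the pv -> vg -> lv flow from Step 2. Run as root.
# /dev/sdb, datavg, datalv, and 15G are the values used in this article.
set -euo pipefail

create_lvm() {
    local disk=$1 vg=$2 lv=$3 size=$4
    pvcreate "$disk"                            # initialize the disk as a physical volume
    vgcreate "$vg" "$disk"                      # build a volume group on top of it
    lvcreate --size "$size" --name "$lv" "$vg"  # carve out the logical volume
    lvdisplay "$vg/$lv"                         # confirm the result
}

# Usage (as root):
# create_lvm /dev/sdb datavg datalv 15G
```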

Step 3: Formatting the logical volume :-

Step 4: Mounting it to “/datanode” :-
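Steps 3 and 4 can be sketched as follows (ext4 is my assumption here, chosen because the resize2fs command used in Step 6 only handles the ext2/3/4 family):

```shell
# Sketch of Steps 3 and 4: format datalv as ext4 and mount it at
# /datanode. Run as root.
format_and_mount() {
    local lv=$1 mountpoint=$2
    mkfs.ext4 "$lv"            # format the logical volume
    mkdir -p "$mountpoint"     # create the mount point if missing
    mount "$lv" "$mountpoint"  # mount it
    df -h "$mountpoint"        # verify
}

# Usage (as root):
# format_and_mount /dev/datavg/datalv /datanode
```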

You can now see the “datalv” logical volume in the output of “df -h”.
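This walkthrough assumes the DataNode is configured to store its blocks under /datanode. A minimal sketch of that configuration (dfs.datanode.data.dir is the Hadoop 2+ property name; older releases used dfs.data.dir, and HADOOP_CONF_DIR here is an assumed variable for your Hadoop configuration directory):

```shell
# Sketch: point the DataNode's block storage at /datanode.
# HADOOP_CONF_DIR is an assumption; the default below is only a
# placeholder, usually the directory is $HADOOP_HOME/etc/hadoop.
CONF_DIR="${HADOOP_CONF_DIR:-/tmp/hadoop-conf}"
mkdir -p "$CONF_DIR"

cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/datanode</value>
  </property>
</configuration>
EOF
```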

Step 5: Check the storage provided by the DataNode to the NameNode :-

Now, on the NameNode, you can see that a DataNode was added with a size of 15 GB:

by using this command on the NameNode:

$ hadoop dfsadmin -report

Step 6: Providing elasticity to DataNode storage :-

In this step we increase the size of datalv on the fly and make the extra space available to the cluster:

Step 6.1: Increase the size of the logical volume by 5 GiB

by using the command:

$ lvextend --size +5G /dev/datavg/datalv

Step 6.2: Resize the filesystem on the logical volume

The resize2fs program will resize ext2, ext3, or ext4 file systems. It can be used to enlarge or shrink an unmounted file system located on device. If the filesystem is mounted, it can be used to expand the size of the mounted filesystem, assuming the kernel supports on-line resizing.

$ resize2fs /dev/datavg/datalv

You can see that the “datalv” logical volume grew by 5 GB in the output of the “df -h” command:
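As a side note, lvextend can run the filesystem resize for you: with the --resizefs (-r) flag it calls fsadm (which in turn runs resize2fs for ext filesystems) after extending, so Steps 6.1 and 6.2 collapse into a single command. A sketch using this article's names:

```shell
# Sketch: grow the LV and its mounted ext4 filesystem in one step.
# --resizefs makes lvextend invoke fsadm after the extend, so the
# mounted filesystem grows online without a separate resize2fs call.
grow_lv_online() {
    local lv=$1 delta=$2
    lvextend --resizefs --size "+$delta" "$lv"
}

# Usage (as root):
# grow_lv_online /dev/datavg/datalv 5G
```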

Step 6.3: Checking for elasticity

Now, on the NameNode, you can see that the DataNode's size has grown to 20 GB:

by using this command on the NameNode:

$ hadoop dfsadmin -report

Thank you!
