How a Slave Node Shares Limited Storage with the Master Node in a Hadoop Cluster

Deepak Sharma
4 min read · Nov 12, 2020


Have you ever wondered whether you can contribute only a limited amount of storage to a Hadoop cluster?

Yes, we can….

So, in this article we'll see how to contribute a limited/specific amount of storage as a slave node to the cluster.

To achieve this task, we use partitions.

What is a Partition?

A partition is a section of a storage device, such as a hard disk drive or solid-state drive. The operating system treats it as a separate logical volume, which makes it function much like a separate physical device. Windows, for example, stores system files in a "System Partition" and user data files in a data partition.

Partitions in a hard disk.

TASK COMPLETION:

  1. I built a Hadoop cluster on Oracle VM. I created one Name Node, RedHatM, and one Data Node, rhelslave1.
In this image we have two VMs: RedHatM, which is the NameNode, and rhelslave1, which is the DataNode.

2. In a Hadoop cluster, the Data Node contributes storage through a directory, and by default that directory shares the entire capacity of the drive on which it resides.
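That Data Node directory is configured in hdfs-site.xml. A minimal sketch, assuming the directory is /dn (the path used later in this article); the property is dfs.datanode.data.dir in Hadoop 2+, while older 1.x setups use dfs.data.dir:

```
<!-- hdfs-site.xml on the Data Node (sketch; /dn is the assumed directory) -->
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/dn</value>
  </property>
</configuration>
```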

We can check the capacity of the drive that contains the Data Node storage directory:

  • df -hT

As we can see, the / drive has 50G of storage. Any folder we create for the Data Node is, by default, on the / drive, hence it shares all the storage of /.

  • hadoop dfsadmin -report

As we can see, nearly 50G is shared with the Name Node by the Data Node.

3. Now we want to share only limited storage with the Hadoop cluster. For this we have to create a partition on a drive so that we can share just that partition's capacity with the cluster.

First, we attach one additional virtual hard disk to the Data Node virtual machine.

We can see the new volume is attached to our Data Node.

We can list all the disk partitions present on our Data Node:

  • fdisk -l

4. Now that the volume is attached to our Data Node, we have to create a partition on it.

  • fdisk device_name

In my case the device name is /dev/sdb.

As we can see, there are no partitions on the new volume yet. Let's create one partition of size 10G.

Yeahh… Our Partition is created.
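The interactive fdisk session for creating a 10G primary partition looks roughly like this (a sketch assuming the new disk is /dev/sdb; prompts vary slightly between fdisk versions):

```
fdisk /dev/sdb
Command (m for help): n        # create a new partition
Partition type: p              # primary
Partition number (1-4): 1
First sector: <Enter>          # accept the default start
Last sector: +10G              # make it 10G in size
Command (m for help): w        # write the partition table and exit
```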

5. Whenever we create a partition, we have to format it before we can use it.

  • mkfs.ext4 device_name

mkfs.ext4 /dev/sdb1

Our partition is successfully formatted.

6. Now, to share the storage, we have to mount this partition on the Data Node directory.

  • mount device_name folder_name

mount /dev/sdb1 /dn

We can see the partition is successfully mounted on the Data Node folder.
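Note that a mount made with the mount command does not survive a reboot. To make it permanent, an entry can be added to /etc/fstab (a sketch, again assuming /dev/sdb1 and the /dn directory):

```
# /etc/fstab — mount the 10G partition on the Data Node directory at boot
/dev/sdb1  /dn  ext4  defaults  0  0
```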

7. Now we can see how much storage is shared by the Data Node with the NameNode.

  • hadoop dfsadmin -report

We can see the Data Node no longer shares the full drive's storage. Our partition is 10G, and the Data Node folder now shares 10G of storage.

Yeah!!… In this way I completed the task.

Thanks for visiting….
