How to contribute a limited/specific amount of storage as a slave to the Hadoop cluster?
Hello guys!
Hope you are all well and excited to learn more about Hadoop clusters. To answer this question, I created this blog.
"This blog is for those who want to know how to create a partition and then use that partition for a Hadoop cluster DataNode."
So, let's explore the technical stuff together.
Prerequisites
- Hadoop should already be installed on the system.
- Java should also be installed on the system.
- A cluster with the NameNode already configured should exist; we will create only the DataNode with limited storage.
To contribute a limited/specific amount of storage as a slave to the Hadoop cluster, we follow these steps :
- Create the partition
- Format the partition
- Mount the partition
- Configure the Hadoop DataNode
- Verify the setup configuration (optional)
So, let's perform the above-mentioned steps one by one.
How to Create a Partition
NOTE : This article contains the setup for Linux only.
If you are using a virtual machine, create a virtual volume as per your requirement; otherwise skip this step. Here I created a 10 GiB volume disk.
Creating a Virtual Volume in Oracle VM VirtualBox
- To create a volume, go to the virtual machine's Settings.
- Go to Storage and create a new volume.
- Click Next through the wizard to create the volume, as shown in the screenshots.
- Our volume is now successfully created.
Now boot (turn on) the virtual machine.
Creating the Partition Using the fdisk Command
Now we create the partition with the fdisk command. But first, we list all existing disks and partitions.
Command : "fdisk -l" or "lsblk"
Output :
Now we create the partition with the command :
fdisk /dev/sdb
Output :
Now we check the partitions again using the "lsblk" command.
The partition was created successfully, so now we move ahead and format it.
Format the Partition
To format the partition, we first choose a filesystem and format accordingly (here I create an ext4 filesystem).
Command
mkfs.ext4 /dev/sdb1
Output :
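On the real disk this command formats /dev/sdb1 directly. If you want to try it safely first, the same mkfs.ext4 call works on a plain file — the -F flag and the file name part.img are only for this sandbox example, not needed on a real partition:

```shell
# Stand-in file for /dev/sdb1 (sandbox only).
truncate -s 64M part.img

# -F forces mkfs to format a regular file; on a real /dev/sdb1 it is not needed.
mkfs.ext4 -F part.img

# Confirm the ext4 superblock was written (dumpe2fs is part of e2fsprogs).
dumpe2fs -h part.img
```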
We have now successfully formatted the partition. The last step is to mount the drive.
Mount the Drive on a Directory
First, we list all mounted filesystems with the "df -h" command.
Mounting the drive is a very easy task. We can do it with a single command, but before that we have to create a directory on which to mount the drive.
So, first create the directory and then mount it using the following commands :
Command:
mkdir /PathToDirectory
mount /dev/sdb1 /PathToDirectory
Output :
Now that the disk is successfully mounted, we verify it with the "df -h" command.
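One note: a mount made this way disappears after a reboot. To make it permanent, you can add a line for the partition to /etc/fstab — the mount point /dn1 below is only an example; use whatever directory you created:

```
# /etc/fstab — mount /dev/sdb1 on the DataNode directory at every boot
/dev/sdb1   /dn1   ext4   defaults   0   0
```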
Now we configure our DataNode
Configuration of the DataNode
Since we already have Hadoop and Java installed, we now have to follow these steps :
1. Configure the files :
hdfs-site.xml
core-site.xml
2. Start the DataNode
3. Check the Hadoop report (to verify the setup)
So, first we configure the files as follows :
command :
cd /etc/hadoop
vi hdfs-site.xml (and set up the file as shown in the screenshot)
vi core-site.xml (and set up the file as shown in the screenshot)
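In case the screenshots do not render, here is a minimal sketch of the two files. The directory /dn1 and the NameNode address are assumptions — replace them with your own mount point and your NameNode's IP and port (the Hadoop 1.x property names are noted in comments):

```xml
<!-- hdfs-site.xml : point the DataNode at the mounted partition -->
<configuration>
  <property>
    <!-- this property is named dfs.data.dir on Hadoop 1.x -->
    <name>dfs.datanode.data.dir</name>
    <!-- /dn1 is an example; use the directory where /dev/sdb1 is mounted -->
    <value>/dn1</value>
  </property>
</configuration>
```

```xml
<!-- core-site.xml : tell the DataNode where the NameNode is -->
<configuration>
  <property>
    <!-- this property is named fs.default.name on Hadoop 1.x -->
    <name>fs.defaultFS</name>
    <!-- replace NAMENODE_IP with your NameNode's address; the port must
         match the one configured on the NameNode (9000 is common) -->
    <value>hdfs://NAMENODE_IP:9000</value>
  </property>
</configuration>
```

Because the DataNode stores blocks only under dfs.datanode.data.dir, pointing it at the mounted partition is exactly what limits its contribution to the partition's size.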
Screenshots :
Hence, the files are successfully configured.
Now, we start the DataNode :
Command :
hadoop-daemon.sh start datanode
jps (to check whether the DataNode started or not)
At last, we check the DataNode report.
Command :
hadoop dfsadmin -report (on newer Hadoop versions, the equivalent is "hdfs dfsadmin -report")
The Configured Capacity in the report should roughly match the size of the mounted partition (about 10 GiB here), which confirms that the DataNode contributes only that limited storage to the cluster. Hence, we have successfully completed our task.
That's all!
So, now it's time to say goodbye. We will meet again soon in my upcoming blog; until then, stay happy and safe.
If you like my blog and want more like it, follow me on Medium.
In the coming days I am going to publish lots of articles on cloud computing technologies and many case studies, so definitely follow me on Medium.
Here is my LinkedIn profile link, and if you have any queries, definitely comment.