Installing and Configuring Single-Node Hadoop 2.2.0 on Oracle Linux
Hello All,
In the previous article we used the Hortonworks sandbox to work with Hadoop. Now let's create our own single-node Hadoop setup on Linux. Here we install and configure Apache Hadoop on a desktop (GUI) installation of Oracle Linux.
I hope you have already installed Oracle Linux on VMware Workstation. Now it is time to install and configure Hadoop on it.
Before installing Hadoop you need to install some prerequisites.
Installing Java
Download the Java JDK RPM file.
What does this installer do? It installs the required binaries to a specific location and sets the Java home path as well, so there is no need to do that manually. To install it, follow the steps below.
If you wish to check the version of the installed Java, run the "java -version" command in a terminal.
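As a quick sketch, assuming the RPM was downloaded to the current folder (the file name below is only a placeholder for whatever JDK version you downloaded), the install and check from a terminal look like this:

# Install the downloaded JDK RPM (run as root; the file name is an example)
rpm -ivh jdk-7u45-linux-x64.rpm

# Verify the installed Java version
java -version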
Adding a dedicated Hadoop system user.
In Linux there is a GUI application, KUser, for creating users and groups and mapping users to groups; a command-line sketch is shown below as an alternative.
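If you prefer the terminal over KUser, a minimal sketch would be the following (the group name hadoop and the user name hduser are only example names, not names required by the article):

# Create a hadoop group and a dedicated user that belongs to it
groupadd hadoop
useradd -g hadoop hduser

# Set a password for the new user
passwd hduser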
Configuring SSH access.
Log in with the dedicated Hadoop user and follow the steps below (a consolidated command sketch follows the steps):
SSH is required to communicate with the other nodes of a Hadoop cluster.
To do that, run the command ssh-keygen -t rsa -P "" in a terminal as the newly created user and follow the prompts.
It will ask for the file name in which to save the key; just press Enter so it generates the key at the default path '/home/dedicatedhadoopuser/.ssh'.
Enable SSH access to your local machine with this newly created key by running the command cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys in a terminal.
The final step is to test the SSH setup by connecting to your local machine with the dedicated Hadoop user. Run the command ssh dedicatedhadoopuser@localhost in a terminal.
This will add localhost permanently to the list of known hosts.
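Putting the steps above together, a minimal command sketch looks like this (dedicatedhadoopuser is a placeholder for your own user name; the chmod line is not part of the original steps but is commonly needed so that sshd accepts the authorized_keys file):

# Generate an RSA key pair with an empty passphrase (press Enter to accept the default path)
ssh-keygen -t rsa -P ""

# Authorize the new public key for logins to this machine
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
chmod 0600 $HOME/.ssh/authorized_keys

# Test passwordless SSH to localhost
ssh dedicatedhadoopuser@localhost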
Disabling IPv6.
Open "/" root folder and goto path "/etc/" and open "sysctl.conf" file in gedit
Congrats! We have completed one part of the installation.
Configure Hadoop
As configuration steps we have to create or update the files below:
Files 1 to 4 are in the "/usr/local/hadoop/etc/hadoop" folder (open them with gedit); the last one is in the root folder.
Note: if a file does not exist, create it.
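The article's file list is shown as an image and is not reproduced here. Purely as an illustration, a single-node Hadoop 2.2.0 setup usually needs at least a default filesystem entry in core-site.xml and a replication factor of 1 in hdfs-site.xml, roughly like this (the port is an example):

core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>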
We have configured Hadoop. Now we have to format the NameNode and start the services.
To format it, run the hadoop namenode -format command in a terminal. Make sure you are in the "/usr/local/hadoop/bin" path.
Now we are ready to start all the services. Run the start-all.sh command in a terminal. It will start all the dependent Hadoop services.
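As a combined sketch, assuming Hadoop is installed under /usr/local/hadoop (note that in Hadoop 2.2.0 the start scripts normally live under sbin rather than bin):

# Format the HDFS NameNode (one-time step)
cd /usr/local/hadoop/bin
./hadoop namenode -format

# Start all Hadoop daemons
cd /usr/local/hadoop/sbin
./start-all.sh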
I think you are all wondering how to check the running services. No problem: open a browser and check the URLs below:
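The URLs are shown in the article as an image; as a hint, the usual Hadoop 2.x defaults on a single-node setup are:

http://localhost:50070   (NameNode web UI)
http://localhost:8088    (YARN ResourceManager web UI)

You can also run the jps command in a terminal to list the running Hadoop daemons (NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager).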
Comments (2)
Author
Commented: I think you have not read the article carefully and precisely. If you check, the article's screenshots and wording are totally different. However, some commands are always common. If you think those are copied, then I can't do anything.
Hope you read it again.
Thanks,
Alpesh
Author
Commented: Resolution:
You have to make an entry in the hosts file at /etc/hosts.
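The snippet behind that link is not reproduced here, so purely as an illustration (the IP address and host name below are placeholders for your own), a typical entry looks like:

127.0.0.1      localhost
192.168.1.10   myhadoophost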
Now check your hostname (whether it resolves to an IP address or not) with the command below.
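The original command is not shown either; one common way to check, assuming the placeholder host name above, is:

# Check whether the host name resolves to an IP address
ping -c 1 myhadoophost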
If you get "unknown host" then you have to do more configuration in the network file.
Make a hostname entry in the network file at /etc/sysconfig/network.
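Again purely as an illustration (the host name is a placeholder), the entry in /etc/sysconfig/network usually looks like:

NETWORKING=yes
HOSTNAME=myhadoophost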
Now reboot the system.
Congrats! Problem resolved.