Hadoop Cluster Installation


Published on

Hadoop 2.2.0

Published in: Technology


  • 1. Hadoop Cluster Installation (Intern Report)
  • 2. Main reference page:  hdfs/installing-single-node-hadoop-2-2-0-on-ubuntu/
  • 3. Software Versions  Ubuntu Linux 12.04.4 LTS  Hadoop 2.2.0
  • 4.  If you are using PuTTY to access your Linux box remotely, install openssh by running this command; it also makes configuring SSH access easier in the later part of the installation: sudo apt-get install openssh-server
  • 5. Prerequisites:  Installing Java v1.7  Adding a dedicated Hadoop system user.  Configuring SSH access.
  • 6. 1. Installing Java v1.7: sudo add-apt-repository ppa:webupd8team/java sudo apt-get update sudo apt-get install oracle-java7-installer export JAVA_HOME=/usr/lib/jvm/java-7-oracle
  • 7. 2. Adding a dedicated Hadoop system user.  a. Adding a group: sudo addgroup hadoop  b. Creating a user and adding the user to the group: sudo adduser --ingroup hadoop hduser
  • 8. 3. Configuring SSH access:  su - hduser  ssh-keygen -t rsa -P ""  cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys  ssh hduser@localhost
  • 9. Hadoop Installation
  • 10.  i. Run the following command to download Hadoop version 2.2.0: wget 2.2.0/hadoop-2.2.0.tar.gz  ii. Unpack the compressed hadoop file: tar -xvzf hadoop-2.2.0.tar.gz  iii. Rename hadoop-2.2.0 to hadoop: mv hadoop-2.2.0 hadoop
  • 11.  iv. Move the hadoop directory to /usr/local: sudo mv hadoop /usr/local/  v. Make sure to change the owner of all the files to the hduser user and hadoop group: cd /usr/local/ sudo chown -R hduser:hadoop hadoop
  • 12. Configuring Hadoop
  • 13.  The following files need to be edited to configure a single-node Hadoop cluster: a. yarn-site.xml b. core-site.xml c. mapred-site.xml d. hdfs-site.xml e. Update $HOME/.bashrc  These files are located in the Hadoop configuration directory: cd /usr/local/hadoop/etc/hadoop
  • 14. a. yarn-site.xml: <configuration> <!-- Site specific YARN configuration properties --> <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property> <property> <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name> <value>org.apache.hadoop.mapred.ShuffleHandler</value> </property> </configuration>
  • 15. b. core-site.xml: <configuration> <property> <name>fs.default.name</name> <value>hdfs://localhost:9000</value> </property> </configuration>
  • 16. c. mapred-site.xml: <configuration> <property> <name>mapreduce.framework.name</name> <value>yarn</value> </property> </configuration>
  • 17. sudo mkdir -p $HADOOP_HOME/yarn_data/hdfs/namenode sudo mkdir -p $HADOOP_HOME/yarn_data/hdfs/datanode
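The -p flag creates the whole path in one step. A quick way to rehearse the same layout without sudo is to build it under a scratch directory first (the /tmp path below is purely illustrative; the real install uses /usr/local/hadoop):

```shell
# Sketch: rehearse the yarn_data layout under /tmp, where no sudo is needed.
# DEMO_HOME stands in for $HADOOP_HOME here.
DEMO_HOME=/tmp/hadoop-demo
mkdir -p "$DEMO_HOME/yarn_data/hdfs/namenode" "$DEMO_HOME/yarn_data/hdfs/datanode"
ls "$DEMO_HOME/yarn_data/hdfs"   # prints: datanode  namenode
```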
  • 18. d. hdfs-site.xml: <configuration> <property> <name>dfs.replication</name> <value>1</value> </property> <property> <name>dfs.namenode.name.dir</name> <value>file:/usr/local/hadoop/yarn_data/hdfs/namenode</value> </property> <property> <name>dfs.datanode.data.dir</name> <value>file:/usr/local/hadoop/yarn_data/hdfs/datanode</value> </property> </configuration>
  • 19. e. Update $HOME/.bashrc  i. Go back to the home directory and edit the .bashrc file: vi .bashrc
  • 20. e. Update $HOME/.bashrc #Set Hadoop-related environment variables export HADOOP_PREFIX=/usr/local/hadoop export HADOOP_HOME=/usr/local/hadoop export HADOOP_MAPRED_HOME=${HADOOP_HOME} export HADOOP_COMMON_HOME=${HADOOP_HOME} export HADOOP_HDFS_HOME=${HADOOP_HOME} export YARN_HOME=${HADOOP_HOME} export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop #Native Path export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib" #Java path export JAVA_HOME='/usr/lib/jvm/java-7-oracle' #Add Hadoop bin/ directory to PATH export PATH=$PATH:$HADOOP_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/sbin
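Before restarting the shell, it is worth confirming that the variables expand to the intended paths. A minimal sketch, reduced to the three exports later commands rely on (the paths are this guide's assumptions; adjust if yours differ):

```shell
# Sketch: verify the key exports from .bashrc expand as intended.
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
echo "$HADOOP_CONF_DIR"   # prints /usr/local/hadoop/etc/hadoop
```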
  • 21. Formatting and Starting/Stopping the HDFS filesystem via the NameNode
  • 22.  i. The first step in starting up your Hadoop installation is formatting the Hadoop filesystem, which is implemented on top of the local filesystem of your cluster. You only need to do this the first time you set up a Hadoop cluster. Do not format a running Hadoop filesystem, as you will lose all the data currently in the cluster (in HDFS). hadoop namenode -format (in Hadoop 2.x this form is deprecated in favour of hdfs namenode -format)
  • 23.  ii. Start the Hadoop daemons by running the following commands:  Name node: hadoop-daemon.sh start namenode  Data node: hadoop-daemon.sh start datanode
  • 24.  Resource Manager: yarn-daemon.sh start resourcemanager  Node Manager: yarn-daemon.sh start nodemanager  Job History Server: mr-jobhistory-daemon.sh start historyserver
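In Hadoop 2.2.0 each daemon is managed by one of three scripts under $HADOOP_HOME/sbin: hadoop-daemon.sh for the HDFS daemons, yarn-daemon.sh for the YARN daemons, and mr-jobhistory-daemon.sh for the history server. A small hypothetical helper that prints the full invocation for a given daemon (it only echoes the command, so it is safe to run before Hadoop is installed):

```shell
#!/bin/sh
# Hypothetical helper: map a daemon name to the Hadoop 2.2.0 sbin script that
# manages it and print the full command. Echo-only; it starts nothing.
daemon_cmd() {
    action=$1   # "start" or "stop"
    daemon=$2   # namenode, datanode, resourcemanager, nodemanager, historyserver
    case $daemon in
        namenode|datanode)           script=hadoop-daemon.sh ;;
        resourcemanager|nodemanager) script=yarn-daemon.sh ;;
        historyserver)               script=mr-jobhistory-daemon.sh ;;
        *) echo "unknown daemon: $daemon" >&2; return 1 ;;
    esac
    printf '$HADOOP_HOME/sbin/%s %s %s\n' "$script" "$action" "$daemon"
}

daemon_cmd start namenode         # prints $HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
daemon_cmd stop  resourcemanager  # prints $HADOOP_HOME/sbin/yarn-daemon.sh stop resourcemanager
```

The same mapping with stop in place of start shuts each daemon down again.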
  • 25.  Stop Hadoop by running the same scripts with stop in place of start, for example hadoop-daemon.sh stop namenode
  • 26.  Start and stop all hadoop daemons at once with start-all.sh and stop-all.sh (deprecated in Hadoop 2.x in favour of start-dfs.sh and start-yarn.sh)
  • 27. Thanks for listening