
1 Install the Java environment

(Omitted…)

hehaibolocal:~ hehaibo$ java -version

java version "1.8.0_91"

Java(TM) SE Runtime Environment (build 1.8.0_91-b14)

Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)

hehaibolocal:~ hehaibo$ 

2 Install Hadoop

Download https://archive.apache.org/dist/hadoop/common/hadoop-0.20.2/hadoop-0.20.2.tar.gz into /Users/hehaibo/hadoop/

Run:

hehaibolocal:hadoop hehaibo$ tar xvf hadoop-0.20.2.tar.gz

After extraction, the install directory is: /Users/hehaibo/hadoop/hadoop-0.20.2

 

3 Configure the Hadoop environment variables

sudo vi /etc/profile

Add the following:

export HADOOP_HOME=/Users/hehaibo/hadoop/hadoop-0.20.2

export PATH=$PATH:/usr/local/bin:$JAVA_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin:$HADOOP_HOME/bin
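Re-running the setup should not duplicate these lines in the profile. A minimal sketch of an idempotent append (the `add_once` helper is illustrative, and `PROFILE` is a temporary stand-in file so the sketch is safe to run; on a real machine it would be /etc/profile and need sudo):

```shell
# Sketch: append an env line to a profile file only if it is not already there.
# PROFILE is a temp file stand-in (assumption), not the real /etc/profile.
PROFILE=$(mktemp)

add_once() {
  grep -qxF "$1" "$PROFILE" || echo "$1" >> "$PROFILE"
}

add_once 'export HADOOP_HOME=/Users/hehaibo/hadoop/hadoop-0.20.2'
add_once 'export PATH=$PATH:$HADOOP_HOME/bin'
add_once 'export PATH=$PATH:$HADOOP_HOME/bin'   # duplicate call adds nothing

cat "$PROFILE"
```

After sourcing the profile, `hadoop` should resolve on the PATH.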

4 Verify the installed version

hehaibolocal:hadoop-0.20.2 hehaibo$ hadoop version

Hadoop 0.20.2

Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707

Compiled by chrisdo on Fri Feb 19 08:07:34 UTC 2010

hehaibolocal:hadoop-0.20.2 hehaibo$ 

5 Configure Hadoop

5.1 Configure conf/core-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>

<!-- The host where the namenode runs; the port is 9000 -->

<name>fs.default.name</name>

<value>hdfs://localhost:9000/</value>

</property>

</configuration>

5.2 Configure conf/hdfs-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<!-- Directories where HDFS stores its data blocks; multiple directories may be listed, comma-separated -->

<property>

<name>dfs.data.dir</name>

<value>/Users/hehaibo/hadoop/hadoop-0.20.2-tmp/hadoop-data</value>

</property>

<!-- Directories where HDFS stores the filesystem metadata (name table); multiple directories may be listed, comma-separated -->

<property>

<name>dfs.name.dir</name>

<value>/Users/hehaibo/hadoop/hadoop-0.20.2-tmp/hadoop-name</value>

</property>

<property>

<!-- Replication factor for data blocks; the default is 3. With fewer than 3 slave nodes, set it to 1 or 2 accordingly -->

<name>dfs.replication</name>

<value>1</value>

</property>

</configuration>
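The three properties above can also be generated from shell variables, which keeps the paths in one place. A hedged sketch (it writes into a temporary conf directory rather than the real `conf/`, so it can be run safely anywhere):

```shell
# Sketch: generate hdfs-site.xml from shell variables. CONF_DIR is a temp-dir
# stand-in (assumption); on the real install it would be $HADOOP_HOME/conf.
CONF_DIR=$(mktemp -d)
DATA_DIR=/Users/hehaibo/hadoop/hadoop-0.20.2-tmp/hadoop-data
NAME_DIR=/Users/hehaibo/hadoop/hadoop-0.20.2-tmp/hadoop-name

cat > "$CONF_DIR/hdfs-site.xml" <<EOF
<?xml version="1.0"?>
<configuration>
  <property><name>dfs.data.dir</name><value>$DATA_DIR</value></property>
  <property><name>dfs.name.dir</name><value>$NAME_DIR</value></property>
  <property><name>dfs.replication</name><value>1</value></property>
</configuration>
EOF

grep -c '<property>' "$CONF_DIR/hdfs-site.xml"
```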

5.3 Configure conf/mapred-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>

<!-- The host and port where the jobtracker runs; the port here is 8021 -->

<name>mapred.job.tracker</name>

<value>localhost:8021</value>

</property>

</configuration>

6 Configure passwordless SSH login

On macOS, ssh is already installed (enable it under System Preferences → Sharing → Remote Login); on Ubuntu it can be installed with:

% sudo apt-get install ssh

% ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa 

% cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

-- Passwordless login succeeds:
hehaibolocal:~ hehaibo$ ssh localhost

Last login: Thu Jul 19 16:30:48 2018

hehaibolocal:~ hehaibo$ 
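A common reason passwordless login still prompts for a password is that `~/.ssh` or `authorized_keys` is group/world-readable. A minimal permission check (using a temp directory as a stand-in so the sketch runs anywhere; on a real machine point it at `~/.ssh`):

```shell
# Sketch: verify authorized_keys has mode 600, which sshd requires.
# SSH_DIR is a temp stand-in (assumption); replace with "$HOME/.ssh" for real use.
SSH_DIR=$(mktemp -d)
touch "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"

f="$SSH_DIR/authorized_keys"
if stat -c '%a' "$f" >/dev/null 2>&1; then
  mode=$(stat -c '%a' "$f")      # GNU stat (Linux)
else
  mode=$(stat -f '%Lp' "$f")     # BSD stat (macOS)
fi
echo "authorized_keys mode: $mode"
```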


7 Edit conf/hadoop-env.sh to add the Java environment variable

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/

8 Format the HDFS filesystem

% hadoop namenode -format


hehaibolocal:~ hehaibo$ hadoop namenode -format

18/07/19 16:50:25 INFO namenode.NameNode: STARTUP_MSG: 

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG:   host = hehaibolocal.local/172.17.11.24

STARTUP_MSG:   args = [-format]

STARTUP_MSG:   version = 0.20.2

STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010

************************************************************/

18/07/19 16:50:26 INFO namenode.FSNamesystem: fsOwner=hehaibo,staff,access_bpf,everyone,localaccounts,_appserverusr,admin,_appserveradm,_lpadmin,_appstore,_lpoperator,_developer,_analyticsusers,com.apple.access_ftp,com.apple.access_screensharing,com.apple.access_ssh-disabled

18/07/19 16:50:26 INFO namenode.FSNamesystem: supergroup=supergroup

18/07/19 16:50:26 INFO namenode.FSNamesystem: isPermissionEnabled=true

18/07/19 16:50:26 INFO common.Storage: Image file of size 97 saved in 0 seconds.

18/07/19 16:50:26 INFO common.Storage: Storage directory /Users/hehaibo/hadoop/hadoop-0.20.2-tmp/hadoop-name has been successfully formatted.

18/07/19 16:50:26 INFO namenode.NameNode: SHUTDOWN_MSG: 

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at hehaibolocal.local/172.17.11.24

************************************************************/

9 Start Hadoop

9.1 Start the daemons

hehaibolocal:~ hehaibo$ start-dfs.sh

namenode running as process 5375. Stop it first.

localhost: starting datanode, logging to /Users/hehaibo/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hehaibo-datanode-hehaibolocal.local.out

localhost: starting secondarynamenode, logging to /Users/hehaibo/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hehaibo-secondarynamenode-hehaibolocal.local.out

hehaibolocal:~ hehaibo$ start-mapred.sh 

starting jobtracker, logging to /Users/hehaibo/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hehaibo-jobtracker-hehaibolocal.local.out

localhost: starting tasktracker, logging to /Users/hehaibo/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hehaibo-tasktracker-hehaibolocal.local.out

9.2 Check the running processes

hehaibolocal:~ hehaibo$ jps

5603 DataNode

5669 SecondaryNameNode

5770 TaskTracker

5710 JobTracker

5375 NameNode
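A quick scripted check is to look for each of the five expected daemon names in the `jps` output. A sketch (the sample output above is pasted into a variable so it runs without a live cluster; on a real machine use `JPS_OUT=$(jps)` instead):

```shell
# Sketch: confirm every expected Hadoop daemon appears in jps output.
# JPS_OUT is hard-coded sample data (assumption) so this runs offline.
JPS_OUT='5603 DataNode
5669 SecondaryNameNode
5770 TaskTracker
5710 JobTracker
5375 NameNode'

missing=0
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  if echo "$JPS_OUT" | grep -q "$d"; then
    echo "$d: up"
  else
    echo "$d: MISSING"
    missing=$((missing + 1))
  fi
done
echo "missing daemons: $missing"
```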

9.3 Open in a browser:

http://localhost:50070/dfshealth.jsp

http://localhost:50030/jobtracker.jsp
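The two UIs can also be probed from the command line. A hedged sketch (the `probe` helper is illustrative; curl prints `000` when the connection fails, which simply means that daemon is not reachable yet):

```shell
# Sketch: print the HTTP status of each web UI; "000" means not reachable.
probe() { curl -s -o /dev/null -w '%{http_code}' --max-time 2 "$1" || true; }

echo "namenode ui:   $(probe http://localhost:50070/dfshealth.jsp)"
echo "jobtracker ui: $(probe http://localhost:50030/jobtracker.jsp)"
```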

10 Stop the Hadoop services

hehaibolocal:~ hehaibo$ stop-dfs.sh 

stopping namenode

localhost: stopping datanode

localhost: stopping secondarynamenode

hehaibolocal:~ hehaibo$ stop-mapred.sh 

stopping jobtracker

localhost: stopping tasktracker

hehaibolocal:~ hehaibo$ 

 
