Prerequisites: three virtual machines with the following IP addresses:
192.168.1.201
192.168.1.202
192.168.1.203
1. Configure hostnames
The IP addresses map to hostnames as follows:
192.168.1.201 node201
192.168.1.202 node202
192.168.1.203 node203
1) Edit the configuration file
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node203
2) Restart the network service so the change takes effect
service network restart
3) Verify the hostname
hostname
Apply the same configuration on the other two virtual machines.
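If the VMs run CentOS 7 or later (an assumption; the steps above target the older /etc/sysconfig/network scheme), the hostname can also be set directly with hostnamectl:
hostnamectl set-hostname node201    # run the matching command on each node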
2. Map hostnames to IP addresses
1) Allow the three VMs to reach each other by node name
vi /etc/hosts
Add the following entries to /etc/hosts on all three VMs:
192.168.1.201 node201
192.168.1.202 node202
192.168.1.203 node203
2) Allow Windows to reach the VMs by node name
Add the same entries as on the VMs to the hosts file located at:
C:\Windows\System32\drivers\etc
3) Test
Check that the node names resolve both from the VMs and from Windows, as in the example below.
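For example, ping the hostnames defined above from any VM or from a Windows command prompt:
ping node201
ping node202
ping node203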
3. Configure passwordless SSH login
1) Generate a key pair
ssh-keygen -t rsa
Press Enter at every prompt to accept the defaults.
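If an unattended run is preferred, the prompts can be skipped by passing an empty passphrase and an explicit key path (same RSA key type as above):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa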
2) Check the generated keys
cd ~/.ssh/
ls
After generation, the ~/.ssh/ directory contains two files: id_rsa (the private key) and id_rsa.pub (the public key).
3) On the master node (node201), append the public key to authorized_keys and set its permissions to 600
Append the public key:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Set the permissions:
chmod 600 ~/.ssh/authorized_keys
4) Likewise, generate keys on node202 and node203
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub
5) Copy the public keys of node202 and node203 into authorized_keys on node201
vi ~/.ssh/authorized_keys
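As an alternative to pasting the keys by hand in vi, ssh-copy-id (run on node202 and node203) appends the local public key to authorized_keys on node201; this assumes password login to node201 still works at this point:
ssh-copy-id root@node201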
6) Copy authorized_keys from node201 to the ~/.ssh/ directory on node202 and node203
scp ~/.ssh/authorized_keys root@node202:~/.ssh/
scp ~/.ssh/authorized_keys root@node203:~/.ssh/
7) Verify passwordless login
ssh node201
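To confirm that every node can reach every other node without a password (assuming the root account is used, as in the scp commands above), a quick loop such as the following can be run on each node:
for h in node201 node202 node203; do ssh root@$h hostname; done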
4. Create the hadoop user and group
1) Create the hadoop user
adduser hadoop
passwd hadoop
2) Add the hadoop user to the hadoop group
usermod -a -G hadoop hadoop
cat /etc/group
3) Grant the hadoop user root privileges via sudo
vi /etc/sudoers
hadoop ALL=(ALL) ALL
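Note that /etc/sudoers is read-only by default; a safer way to add the line above is visudo, which validates the syntax before saving:
visudo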
5. Install Hadoop
1) Prepare the directory
mkdir -p /home/soft/hadoop
cd /home/soft/hadoop
2) Download the Hadoop package
wget http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.1.2/hadoop-3.1.2.tar.gz
3) Extract the Hadoop package
tar -zxvf hadoop-3.1.2.tar.gz
4) Configure the environment variables
vi /etc/profile
export HADOOP_HOME=/home/soft/hadoop/hadoop-3.1.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
5) Check the Hadoop version
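Assuming /etc/profile has been sourced as above, the installation can be confirmed with:
hadoop version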
6. Set up the cluster
1) Create the following directories on node201
/home/soft/hadoop/hadoop-3.1.2/dfs/name
/home/soft/hadoop/hadoop-3.1.2/dfs/data
/home/soft/hadoop/hadoop-3.1.2/temp
cd /home/soft/hadoop/hadoop-3.1.2
mkdir -p dfs/name
mkdir -p dfs/data
mkdir temp
2) Configure the Hadoop files
Edit the following configuration files under /home/soft/hadoop/hadoop-3.1.2/etc/hadoop:
【1】hadoop-env.sh
vi hadoop-env.sh
export JAVA_HOME=/home/soft/jdk/jdk1.8.0_191
【2】mapred-env.sh
vi mapred-env.sh
export JAVA_HOME=/home/soft/jdk/jdk1.8.0_191
【3】yarn-env.sh
vi yarn-env.sh
export JAVA_HOME=/home/soft/jdk/jdk1.8.0_191
【4】core-site.xml
vi core-site.xml
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
  <property><name>hadoop.tmp.dir</name><value>/opt/module/hadoop-2.7.6/data/ha/tmp</value></property>
  <property><name>ha.zookeeper.quorum</name><value>node201:2181,node202:2181,node203:2181</value></property>
</configuration>
【5】hdfs-site.xml
vi hdfs-site.xml
<configuration>
  <property><name>dfs.replication</name><value>2</value></property>
  <property><name>dfs.nameservices</name><value>mycluster</value></property>
  <property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>node201:8020</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>node202:8020</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn1</name><value>node201:50070</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn2</name><value>node202:50070</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://node201:8485;node202:8485;node203:8485/mycluster</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/home/admin/.ssh/id_rsa</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/opt/module/hadoop-2.7.6/data/ha/jn</value></property>
  <property><name>dfs.permissions.enabled</name><value>false</value></property>
  <property><name>dfs.client.failover.proxy.provider.mycluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
</configuration>
【6】mapred-site.xml
vi mapred-site.xml
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>node201:10020</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>node201:19888</value></property>
  <property><name>mapreduce.jobhistory.joblist.cache.size</name><value>20000</value></property>
  <property><name>mapreduce.jobhistory.done-dir</name><value>${yarn.app.mapreduce.am.staging-dir}/history/done</value></property>
  <property><name>mapreduce.jobhistory.intermediate-done-dir</name><value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value></property>
  <property><name>yarn.app.mapreduce.am.staging-dir</name><value>/tmp/hadoop-yarn/staging</value></property>
</configuration>
【7】yarn-site.xml
vi yarn-site.xml
<configuration>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.cluster-id</name><value>rmCluster</value></property>
  <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
  <property><name>yarn.resourcemanager.hostname.rm1</name><value>node202</value></property>
  <property><name>yarn.resourcemanager.hostname.rm2</name><value>node203</value></property>
  <property><name>yarn.resourcemanager.zk-address</name><value>node201:2181,node202:2181,node203:2181</value></property>
  <property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value></property>
</configuration>
【8】Create the workers file (Hadoop 3.x uses workers in place of the slaves file from Hadoop 2.x)
vi workers
node201
node202
node203
3) Copy the configured Hadoop files to the other nodes
Every node in the cluster needs the same configuration, so configure node201 first and then copy the files to the other nodes:
scp -r /home/soft/hadoop/hadoop-3.1.2/etc/hadoop/ root@node202:/home/soft/hadoop/hadoop-3.1.2/etc/
scp -r /home/soft/hadoop/hadoop-3.1.2/etc/hadoop/ root@node203:/home/soft/hadoop/hadoop-3.1.2/etc/
7. Start and verify the cluster
1) Format the NameNode
If the cluster is being started for the first time, the NameNode must be formatted; because this is an HA setup, the JournalNodes should be running first (see the sketch below).
hdfs namenode -format
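A minimal sketch of the usual HA first-start order, assuming the Hadoop 3.x command syntax and that ZooKeeper is already running on the three nodes:
hdfs --daemon start journalnode      # run on node201, node202 and node203
hdfs namenode -format                # on node201 only
hdfs zkfc -formatZK                  # initialize the HA state in ZooKeeper, on node201
hdfs namenode -bootstrapStandby      # on node202, after the NameNode on node201 has been started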
2) Start HDFS
start-dfs.sh
Startup reports an error:
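The error itself is not shown above. One common cause when start-dfs.sh is run as root on Hadoop 3.x is that the per-daemon user variables are undefined; if that is the error seen here, a possible fix (an assumption about this environment) is to add the following to hadoop-env.sh:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root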