Hadoop Ecosystem Versions
Component | Version |
---|---|
OS | CentOS 7.3 (64-bit) |
Hadoop | 2.9.2 |
Zookeeper | 3.5.10 |
Cluster Roles
Machine | Roles |
---|---|
hadoop6 | DataNode QuorumPeerMain |
hadoop7 | DataNode QuorumPeerMain |
hadoop8 | NameNode DataNode QuorumPeerMain DFSZKFailoverController JournalNode |
hadoop9 | NameNode DataNode JournalNode DFSZKFailoverController |
hadoop10 | DataNode JournalNode |
jumpserver | jump host (bastion) |
Zookeeper
Configure the ZooKeeper user's environment variables
vim ~/.bashrc
export ZOOKEEPER_HOME="/opt/apache-zookeeper-3.5.10-bin"
export PATH="$ZOOKEEPER_HOME/bin:$PATH"
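After editing, reload the profile so the new variables take effect. The quick check below assumes the ZooKeeper tarball is already unpacked at that path.
source ~/.bashrc
echo $ZOOKEEPER_HOME   # should print /opt/apache-zookeeper-3.5.10-bin
which zkServer.sh      # should resolve under $ZOOKEEPER_HOME/bin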
ZooKeeper configuration
cd $ZOOKEEPER_HOME/conf && vim zoo.cfg
dataDir=/tmp/zookeeper
server.1=hadoop6:2888:3888
server.2=hadoop7:2888:3888
server.3=hadoop8:2888:3888
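The snippet above only shows dataDir and the quorum list. A complete zoo.cfg also needs the timing settings and the client port; the sketch below fills those in with stock values (tickTime, initLimit, syncLimit and clientPort are assumptions, not taken from the original), and clientPort=2181 must match the ha.zookeeper.quorum entries used later in core-site.xml. Note that /tmp/zookeeper can be wiped on reboot, so a persistent dataDir is safer for anything beyond a test cluster.
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/tmp/zookeeper
server.1=hadoop6:2888:3888
server.2=hadoop7:2888:3888
server.3=hadoop8:2888:3888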
# hadoop6
mkdir -p /tmp/zookeeper && echo 1 > /tmp/zookeeper/myid
# hadoop7
mkdir -p /tmp/zookeeper && echo 2 > /tmp/zookeeper/myid
# hadoop8
mkdir -p /tmp/zookeeper && echo 3 > /tmp/zookeeper/myid
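Each myid must match the server.N index for that host in zoo.cfg. A quick cross-check from one node (illustrative only, and assumes passwordless SSH between the ZooKeeper hosts):
for h in hadoop6 hadoop7 hadoop8; do
    echo -n "$h: myid="; ssh $h cat /tmp/zookeeper/myid
done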
Hadoop
Configure the Hadoop user's environment variables
vim ~/.bashrc
export HADOOP_HOME="/opt/hadoop-2.9.2"
export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
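As with ZooKeeper, reload the profile and confirm the binaries resolve (assumes the Hadoop 2.9.2 tarball is unpacked at /opt/hadoop-2.9.2):
source ~/.bashrc
hadoop version   # should report Hadoop 2.9.2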
cd $HADOOP_HOME/etc/hadoop && vim core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1/</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.9.2/data</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop6:2181,hadoop7:2181,hadoop8:2181</value>
    </property>
    <property>
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>3000</value>
    </property>
</configuration>
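Once core-site.xml is distributed to every node, a quick way to confirm the client resolves the HA nameservice (a verification step, not part of the original walkthrough):
hdfs getconf -confKey fs.defaultFS   # should print hdfs://ns1/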
cd $HADOOP_HOME/etc/hadoop && vim hdfs-site.xml
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop8:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop8:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop9:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop9:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop8:8485;hadoop9:8485;hadoop10:8485/ns1</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/hadoop-2.9.2/data/jn</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/storage/hdp/dfs/data</value>
    </property>
</configuration>
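A note on the fencing block: listing shell(/bin/true) after sshfence lets failover proceed even when the SSH fence cannot reach the old active NameNode (for example, if that machine is down). The commands below confirm the HA addresses are picked up from this file (verification only, not from the original post):
hdfs getconf -namenodes                       # should list hadoop8 hadoop9
hdfs getconf -confKey dfs.ha.namenodes.ns1    # should print nn1,nn2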
cd $HADOOP_HOME/etc/hadoop && vim slaves
# localhost
hadoop8
hadoop9
hadoop10
hadoop6
hadoop7
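Every node needs the same configuration directory. A minimal distribution sketch, assuming the edits were made on hadoop8 and the hadoop user has passwordless SSH to the other hosts (the loop is illustrative, not from the original post):
for host in hadoop6 hadoop7 hadoop9 hadoop10; do
    scp $HADOOP_CONF_DIR/core-site.xml $HADOOP_CONF_DIR/hdfs-site.xml $HADOOP_CONF_DIR/slaves $host:$HADOOP_CONF_DIR/
done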
Start the ZooKeeper cluster
# hadoop6 hadoop7 hadoop8
zkServer.sh start
zkServer.sh status
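With a quorum formed, zkServer.sh status should report one leader and two followers. An optional client-side check (assumes the default client port 2181):
zkCli.sh -server hadoop6:2181 ls /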
Start the HDFS cluster
# hadoop8 hadoop9 hadoop10 (the JournalNode hosts, per the role table and dfs.namenode.shared.edits.dir)
hadoop-daemon.sh start journalnode
# hadoop8
hdfs namenode -format
hdfs zkfc -formatZK
hadoop-daemon.sh start namenode
hadoop-daemon.sh start zkfc
# hadoop9
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode
hadoop-daemon.sh start zkfc
# hadoop6 hadoop7 hadoop8 hadoop9 hadoop10
hadoop-daemon.sh start datanode
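At this point one NameNode should be active and the other standby, with the ZKFC processes handling automatic failover. A few commands to confirm the HA setup (verification only, not part of the original post):
hdfs haadmin -getServiceState nn1   # expect active (or standby)
hdfs haadmin -getServiceState nn2   # expect the opposite state of nn1
hdfs dfsadmin -report               # all five DataNodes should register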