
Hadoop Deployment


You will need three machines, all running CentOS 7.

On each machine, set a static IP, set the hostname, configure the hostname mapping, and set up passwordless SSH login:

hadoop1 192.168.123.7

hadoop2 192.168.123.8

hadoop3 192.168.123.9

vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="4910dee8-d8d8-4d23-8bb0-37136025ba30"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.123.7"
PREFIX="24"
GATEWAY="192.168.123.1"
DNS1="192.168.123.1"
IPV6_PRIVACY="no"
service network restart
hostnamectl set-hostname hadoop1
hostname
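Before moving on, it is worth confirming that the static address took effect (ens33 is the interface configured above):

[root@hadoop1 ~]# ip addr show ens33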
vi /etc/hosts
192.168.123.7 hadoop1
192.168.123.8 hadoop2
192.168.123.9 hadoop3
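With the mapping in place, each node should be able to reach the others by name; a quick check from hadoop1 (repeat on hadoop2 and hadoop3):

[root@hadoop1 ~]# ping -c 1 hadoop2
[root@hadoop1 ~]# ping -c 1 hadoop3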
[root@hadoop1 ~]# ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):    
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:UP2g8izBiP/yBO24aDDrSEI8rOZbLaHUhxGp9OJtbM4 root@hadoop1
The key's randomart image is:
+---[RSA 4096]----+
|   ..   ..       |
| . ..  .  o      |
|. oo o.  . o     |
|ooo.+.+..   .    |
|.*+=...=S        |
|*.o==+. o        |
|oB=o.oo.         |
|B oEoo.          |
|o=o .o.          |
+----[SHA256]-----+
[root@hadoop1 ~]# ssh-copy-id hadoop1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'hadoop1 (192.168.123.7)' can't be established.
ECDSA key fingerprint is SHA256:zeXte+vaqEuwuOW+Q8TeUXlDWUonODWXSgMl9PDb7E8.
ECDSA key fingerprint is MD5:eb:c8:2c:9c:c5:ce:e5:66:e8:bb:27:a2:f6:9f:01:63.
Are you sure you want to continue connecting (yes/no)? yes 
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop1's password: 
Number of key(s) added: 1
Now try logging into the machine, with:   "ssh 'hadoop1'"
and check to make sure that only the key(s) you wanted were added.
[root@hadoop1 ~]# ssh-copy-id hadoop2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'hadoop2 (192.168.123.8)' can't be established.
ECDSA key fingerprint is SHA256:zeXte+vaqEuwuOW+Q8TeUXlDWUonODWXSgMl9PDb7E8.
ECDSA key fingerprint is MD5:eb:c8:2c:9c:c5:ce:e5:66:e8:bb:27:a2:f6:9f:01:63.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop2's password: 
Number of key(s) added: 1
Now try logging into the machine, with:   "ssh 'hadoop2'"
and check to make sure that only the key(s) you wanted were added.
[root@hadoop1 ~]# ssh-copy-id hadoop3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'hadoop3 (192.168.123.9)' can't be established.
ECDSA key fingerprint is SHA256:zeXte+vaqEuwuOW+Q8TeUXlDWUonODWXSgMl9PDb7E8.
ECDSA key fingerprint is MD5:eb:c8:2c:9c:c5:ce:e5:66:e8:bb:27:a2:f6:9f:01:63.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop3's password: 
Number of key(s) added: 1
Now try logging into the machine, with:   "ssh 'hadoop3'"
and check to make sure that only the key(s) you wanted were added.
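The transcript above only sets up root@hadoop1. For passwordless access between all nodes, repeat ssh-keygen and the three ssh-copy-id commands on hadoop2 and hadoop3 as well. A minimal verification that no password prompt remains:

[root@hadoop1 ~]# for h in hadoop1 hadoop2 hadoop3; do ssh $h hostname; done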

Download JDK 8 and configure the JDK environment.

Java Archive Downloads - Java SE 8u211 and later

[root@hadoop1 ~]# mkdir -p /export/server
[root@hadoop1 ~]# tar -zxvf jdk-8u401-linux-x64.tar.gz -C /export/server/
[root@hadoop1 ~]# ln -s /export/server/jdk1.8.0_401/ /export/server/jdk
[root@hadoop1 ~]# vi /etc/profile
export JAVA_HOME=/export/server/jdk
export PATH=$PATH:$JAVA_HOME/bin
[root@hadoop1 ~]# source /etc/profile
[root@hadoop1 ~]# java -version
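The JDK is needed on all three nodes. One way to avoid downloading it three times is to push it from hadoop1; a sketch, assuming the nodes were cloned from the same image so /etc/profile can be copied verbatim (repeat for hadoop3):

[root@hadoop1 ~]# ssh hadoop2 "mkdir -p /export/server"
[root@hadoop1 ~]# scp -r /export/server/jdk1.8.0_401 hadoop2:/export/server/
[root@hadoop1 ~]# ssh hadoop2 "ln -s /export/server/jdk1.8.0_401/ /export/server/jdk"
[root@hadoop1 ~]# scp /etc/profile hadoop2:/etc/profile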

Disable the firewall and SELinux, then reboot.

[root@hadoop1 ~]# systemctl stop firewalld
[root@hadoop1 ~]# systemctl disable firewalld
[root@hadoop1 ~]# vim /etc/sysconfig/selinux
SELINUX=disabled
[root@hadoop1 ~]# init 6
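These changes are needed on hadoop2 and hadoop3 as well. After the reboot, verify:

[root@hadoop1 ~]# getenforce
Disabled
[root@hadoop1 ~]# systemctl is-active firewalld
inactive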

Set the time zone and configure automatic time synchronization.

[root@hadoop1 ~]# yum install -y ntp
[root@hadoop1 ~]# rm -f /etc/localtime
[root@hadoop1 ~]# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@hadoop1 ~]# ntpdate -u ntp.aliyun.com
[root@hadoop1 ~]# systemctl start ntpd
[root@hadoop1 ~]# systemctl enable ntpd
[root@hadoop3 ~]# date
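Run the same steps on every node. Once ntpd has been running for a few minutes, synchronization can be checked with the tools that ship with the ntp package:

[root@hadoop1 ~]# ntpstat
[root@hadoop1 ~]# ntpq -p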

Extract the Hadoop tarball.

[root@hadoop1 ~]# tar -zxvf hadoop-3.3.6.tar.gz -C /export/server/
[root@hadoop1 ~]# ln -s /export/server/hadoop-3.3.6/ /export/server/hadoop 

Configure the workers file.

[root@hadoop1 hadoop]# vi /export/server/hadoop/etc/hadoop/workers
hadoop1
hadoop2
hadoop3

Configure the hadoop-env.sh file.

[root@hadoop1 hadoop]# vi /export/server/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/export/server/jdk
export HADOOP_HOME=/export/server/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_LOG_DIR=$HADOOP_HOME/logs

Configure the core-site.xml file.

[root@hadoop1 hadoop]# vi /export/server/hadoop/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:8020</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>

Configure the hdfs-site.xml file.

vi /export/server/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.datanode.data.dir.perm</name>
    <value>700</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/namenode</value>
  </property>
  <property>
    <name>dfs.namenode.hosts</name>
    <value>hadoop1,hadoop2,hadoop3</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/datanode</value>
  </property>
</configuration>

Configure the Hadoop environment variables.

[root@hadoop1 hadoop]# vim /etc/profile
export HADOOP_HOME=/export/server/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
[root@hadoop1 hadoop]# source /etc/profile
[root@hadoop2 hadoop]# mkdir -p /data/datanode
[root@hadoop1 hadoop]# mkdir -p /data/namenode
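Note that /data/datanode must exist on all three nodes (the hadoop2 prompt above hints at this), and the Hadoop installation itself has to be copied to hadoop2 and hadoop3 before the cluster can start. A sketch, assuming identical paths on every node (repeat for hadoop3):

[root@hadoop1 ~]# scp -r /export/server/hadoop-3.3.6 hadoop2:/export/server/
[root@hadoop1 ~]# ssh hadoop2 "ln -s /export/server/hadoop-3.3.6/ /export/server/hadoop; mkdir -p /data/datanode"
[root@hadoop1 ~]# scp /etc/profile hadoop2:/etc/profile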

For security, create a dedicated hadoop user.

[root@hadoop1 hadoop]# useradd hadoop
[root@hadoop1 hadoop]# passwd hadoop
[root@hadoop1 hadoop]# chown -R hadoop:hadoop /export
[root@hadoop1 hadoop]# chown -R hadoop:hadoop /data
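The useradd, passwd, and chown steps are needed on hadoop2 and hadoop3 too. Also note that start-dfs.sh logs into every worker over SSH as the hadoop user, while the keys installed earlier belong to root, so the hadoop user needs its own passwordless login. A sketch (after creating the users, set the hadoop password on each node with passwd so ssh-copy-id can authenticate):

[root@hadoop1 ~]# ssh hadoop2 "useradd hadoop; chown -R hadoop:hadoop /export /data"
[root@hadoop1 ~]# ssh hadoop3 "useradd hadoop; chown -R hadoop:hadoop /export /data"
[root@hadoop1 ~]# su - hadoop
[hadoop@hadoop1 ~]$ ssh-keygen -t rsa -b 4096
[hadoop@hadoop1 ~]$ for h in hadoop1 hadoop2 hadoop3; do ssh-copy-id $h; done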

Switch to the hadoop user and format the NameNode; this only needs to be done on hadoop1.

[root@hadoop1 hadoop]# su hadoop
[hadoop@hadoop1 namenode]$ hdfs namenode -format
[hadoop@hadoop1 namenode]$ start-dfs.sh
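If everything is wired up correctly, each node should now be running its HDFS daemons. A quick check (jps ships with the JDK; 9870 is the default NameNode web UI port in Hadoop 3):

[hadoop@hadoop1 ~]$ jps
[hadoop@hadoop1 ~]$ for h in hadoop2 hadoop3; do ssh $h /export/server/jdk/bin/jps; done

You should see a NameNode and DataNode process on hadoop1 and DataNode processes on the other nodes, and the NameNode web UI should be reachable at http://hadoop1:9870.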
