Hadoop, ZooKeeper, and Spark Notes
Quick notes on things I've used and problems I've run into, so I don't forget or hit them again.
Startup order
Start the Hadoop cluster first, then ZooKeeper, and finally Spark.
Hadoop
Formatting: like reinstalling an OS, this command should only be run once, right after the initial install.
hadoop namenode -format
(On Hadoop 3.x, `hdfs namenode -format` is the preferred form of the same command.)
Start everything
# from hadoop's sbin directory
start-all.sh
(base) root@node1:/export/server/hadoop-3.3.4/sbin# start-all.sh
Starting namenodes on [node1]
Starting datanodes
Starting secondary namenodes [node2]
Starting resourcemanager
Starting nodemanagers
Stop everything
(base) root@node1:/export/server# hadoop-3.3.4/sbin/stop-all.sh
Stopping namenodes on [node1]
Stopping datanodes
Stopping secondary namenodes [node2]
Stopping nodemanagers
Stopping resourcemanager
ZooKeeper startup
Start command:
Start it in all the node windows at the same time; until a quorum of servers is up, status checks will report errors. FinalShell's "send command to all sessions" feature is handy for this.
apache-zookeeper-3.7.1-bin/bin/zkServer.sh start
Check status
# copy this command
apache-zookeeper-3.7.1-bin/bin/zkServer.sh status
# first node
(base) root@node1:/export/server# apache-zookeeper-3.7.1-bin/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/server/apache-zookeeper-3.7.1-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
# second node
(base) root@node2:/export/server# apache-zookeeper-3.7.1-bin/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/server/apache-zookeeper-3.7.1-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
# third node
(base) root@node3:/export/server# apache-zookeeper-3.7.1-bin/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/server/apache-zookeeper-3.7.1-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
# one leader, two followers
Stop command
apache-zookeeper-3.7.1-bin/bin/zkServer.sh stop
Starting Spark
node1
Run start-all.sh to start the master and all workers.
Start the history server with start-history-server.sh.
(base) root@node1:/export/server# spark/sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /export/server/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-node1.out
node2: starting org.apache.spark.deploy.worker.Worker, logging to /export/server/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-node2.out
node3: starting org.apache.spark.deploy.worker.Worker, logging to /export/server/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-node3.out
node1: starting org.apache.spark.deploy.worker.Worker, logging to /export/server/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-node1.out
node3: run start-master.sh to bring up a standby master
(base) root@node3:/export/server# spark/sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /export/server/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-node3.out
History server: spark/sbin/start-history-server.sh
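For the history server to have anything to show, event logging has to be enabled in spark-defaults.conf. A sketch of the usual settings (the HDFS host, port, and log directory are assumptions — match them to your own setup):

```
# spark-defaults.conf — history-server settings (values are examples)
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://node1:8020/sparklog
spark.history.fs.logDirectory    hdfs://node1:8020/sparklog
```

The directory in `spark.eventLog.dir` must exist in HDFS before applications start writing to it.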
Config files that involve the hostname
These only matter when other VMs were cloned from one machine, or when a host has been renamed; once they are updated, the corresponding cluster can be started with the commands above.
Host machine's hosts file
If you want to open the master web UI from the host machine's browser (say, a Windows PC) as node1:8080 instead of 192.168.x.x:8080, or if a VM's IP address has changed, edit the host machine's hosts file.
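As a sketch, the entries might look like this (the IP addresses are examples — use your VMs' actual addresses):

```
# C:\Windows\System32\drivers\etc\hosts
192.168.88.101  node1
192.168.88.102  node2
192.168.88.103  node3
```

On Windows, editing this file requires an editor run as administrator.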
Ubuntu's own config files
- /etc/hostname
- /etc/hosts
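For example, after cloning, a VM renamed to node2 (the name and IPs here are assumptions) would carry roughly these contents:

```
# /etc/hostname
node2

# /etc/hosts (example IPs)
127.0.0.1       localhost
192.168.88.101  node1
192.168.88.102  node2
192.168.88.103  node3
```

A reboot (or `hostnamectl set-hostname node2`) makes the new name take effect.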
ZooKeeper config files that contain hostnames
1. zoo.cfg under the install directory: /export/server/apache-zookeeper-3.7.1-bin/conf/zoo.cfg
2. The myid file may also be involved: the N in each server.N line in zoo.cfg must match the number inside that node's myid file. Its location depends on where you created it, e.g. /export/data/zookeeper/data/myid
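A sketch of how the two files line up (the ports follow ZooKeeper's common defaults; paths match the ones above):

```
# zoo.cfg — one server.N line per node
dataDir=/export/data/zookeeper/data
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

# /export/data/zookeeper/data/myid on node1 contains only:
1
```

node2's myid contains `2`, node3's contains `3`; a mismatch makes that server fail to join the quorum.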
Hadoop
/export/server/hadoop-3.3.4/etc/hadoop/core-site.xml
/export/server/hadoop-3.3.4/etc/hadoop/hdfs-site.xml
/export/server/hadoop-3.3.4/etc/hadoop/mapred-site.xml
/export/server/hadoop-3.3.4/etc/hadoop/yarn-site.xml
/export/server/hadoop-3.3.4/etc/hadoop/workers
Spark config files that contain hostnames
/export/server/spark/conf/workers
/export/server/spark/conf/spark-env.sh
/export/server/spark/conf/spark-defaults.conf
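The hostname-bearing entries in spark-env.sh typically look like the following; the ZooKeeper recovery options match the standby-master setup above, but the exact values are assumptions:

```
# spark-env.sh (values are examples)
export SPARK_MASTER_HOST=node1
# HA via ZooKeeper, so node3's start-master.sh acts as standby:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=node1:2181,node2:2181,node3:2181"
```

With ZooKeeper recovery enabled, whichever master wins the election serves the cluster; the others wait as standbys.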
Reprinted from: 学新通技术网