
Compiling, Installing, and Using Hue


Introduction

      There are many big data frameworks, and solving a single problem usually involves several of them. Each framework ships its own web UI for monitoring, each on its own port, for example HDFS (9870), YARN (8088) and the MapReduce JobHistory server (19888). A single, unified web UI for managing the commonly used frameworks makes development, monitoring and operations much more convenient. Hue was created to solve exactly this problem of every framework having its own web interface.


Build and Install

Hue official website: https://gethue.com/
Hue user manual: https://docs.gethue.com/
Official installation guide: https://docs.gethue.com/administrator/installation/install/
Hue download page: Hue - The open source SQL Assistant for Data Warehouses

Download (use the Hue download link above; the link below is obsolete)

Hue - The open source SQL Assistant for Data Warehouses


Versions used

  1. CentOS 7
  2. Hue 4.5
  3. Node.js v10.6.0 (recommended by the official docs; newer versions have build problems — note that the commands below actually fetch v14.15.4)

Hue source tarball

Link: https://pan.baidu.com/s/10UPgRfejKpwdV6qT4WuJog
Extraction code: yyds

npm

 Download and install Node.js first (it bundles npm); the detailed steps are omitted here. Remember to add it to your environment variables.

    wget https://nodejs.org/dist/v14.15.4/node-v14.15.4-linux-x64.tar.xz
    tar -xf node-v14.15.4-linux-x64.tar.xz

Configure environment variables

 sudo vi /etc/profile.d/my_env.sh

    #NPM_HOME
    NPM_HOME=/home/bigdata/node-v14.15.4-linux-x64
    export PATH=$PATH:$NPM_HOME/bin:$NPM_HOME/sbin

 source /etc/profile.d/my_env.sh
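
A quick sanity check that the Node.js binaries are now on the PATH (a minimal sketch; the version printed will match whichever tarball you extracted):

    node -v     # e.g. v14.15.4
    npm -v      # npm ships bundled with Node.js
    which node  # should resolve to a path under $NPM_HOME/bin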

Configure the Taobao npm mirror

npm config set registry https://registry.npm.taobao.org

Check that the registry switch took effect

npm config get registry

If npm does not behave well, use cnpm instead

    npm install -g cnpm --registry=https://registry.npm.taobao.org
    cd /usr/bin
    ln -s /usr/local/node/bin/cnpm cnpm
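
To confirm the symlink actually resolves (note that the source path of the ln -s above must point at wherever npm installed cnpm, i.e. the bin directory of your Node.js installation):

    cnpm -v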

Build

tar -zxvf hue-4.5.0.tgz

Install dependency packages (ideally build on a machine that has never had MySQL installed)

    # Python is required (Python 2.7 / Python 3.5)
    python --version
    # Install the libraries needed to build Hue on CentOS
    sudo yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel

The dependencies above apply to CentOS/RHEL 7.x only; for other systems see https://docs.gethue.com/administrator/installation/dependencies/
The node that builds Hue should preferably not have MySQL installed, otherwise version conflicts may occur.
The build needs internet access; a flaky network leads to all kinds of strange problems.

Edit hue.ini

    # [desktop]
    http_host=node2
    http_port=8000
    time_zone=Asia/Shanghai
    server_user=bigdata
    server_group=bigdata
    default_user=bigdata
    app_blacklist=search

    # [[database]] -- Hue stores its metadata in SQLite by default; switch it to MySQL
    engine=mysql
    host=master
    port=3306
    user=root
    password=root
    # database name
    name=hue

    # around line 1003: path to the Hadoop configuration files
    hadoop_conf_dir=/home/bigdata/hadoop/hadoop/etc/hadoop


Build Hue


    # Enter the Hue source directory and build. Use PREFIX to choose where Hue gets installed
    cd hue-4.5.0
    make apps

 If you run into the following problem (error screenshot not reproduced here), install the MySQL development headers:

yum install mysql-devel

Then delete everything inside the target directory that was given as the build PREFIX:

    PREFIX=/home/bigdata/apache-maven-3.8.6/hue-release-4.4.0/target

    cd /home/bigdata/apache-maven-3.8.6/hue-release-4.4.0/target
    rm -rf ./*

If the following error appears (screenshot not reproduced here), install libxslt-devel:

 sudo yum install -y libxslt-devel

 If the following error appears (screenshot not reproduced here):

 Search for the matching dependency:

sudo yum search sqlite3


Install the matching dependencies:

    sudo yum install -y libsqlite3x.x86_64
    sudo yum install -y libsqlite3x-devel.x86_64
    sudo yum install -y gmp-devel.x86_64

Build again

PREFIX=/home/bigdata/apache-maven-3.8.6/hue-release-4.4.0/target make install

Wait a little while... congratulations, the build succeeded!


The compiled package cannot simply be copied to another machine and used there, because many absolute paths get baked in at build time; it only works if the environment is identical.

tar -zcvf hue.tar.gz hue
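
To see why the build is not relocatable, you can look for the build prefix baked into the generated scripts. A minimal sketch, run from the PREFIX target directory and assuming the paths used in this article:

    # list a few installed scripts that hard-code the absolute build path
    grep -rl "/home/bigdata/apache-maven-3.8.6" hue/build/env/bin | head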

Integration

HDFS

Modify the Hadoop configuration

Add the following to hdfs-site.xml:

    <!-- HUE -->
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>

Add the following to core-site.xml:

    <!-- HUE -->
    <property>
        <name>hadoop.proxyuser.bigdata.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.bigdata.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hdfs.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hdfs.groups</name>
        <value>*</value>
    </property>

Create an httpfs-site.xml file and add:

    <configuration>
        <!-- HUE -->
        <property>
            <name>httpfs.proxyuser.bigdata.hosts</name>
            <value>*</value>
        </property>
        <property>
            <name>httpfs.proxyuser.bigdata.groups</name>
            <value>*</value>
        </property>
    </configuration>

Note: after changing the HDFS-related configuration, scp the files to every machine in the cluster and restart the HDFS service.
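
A minimal sketch of distributing the changed files and restarting HDFS, assuming the host names used elsewhere in this article (master1/master2, node1-node3) and Hadoop installed under /home/bigdata/module/hadoop-3.1.3; adapt it to your own cluster:

    HADOOP_CONF=/home/bigdata/module/hadoop-3.1.3/etc/hadoop
    for host in master2 node1 node2 node3; do
        scp $HADOOP_CONF/core-site.xml $HADOOP_CONF/hdfs-site.xml $HADOOP_CONF/httpfs-site.xml $host:$HADOOP_CONF/
    done
    # restart HDFS (run on the node that normally starts it)
    /home/bigdata/module/hadoop-3.1.3/sbin/stop-dfs.sh
    /home/bigdata/module/hadoop-3.1.3/sbin/start-dfs.sh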

Modify the Hue configuration

    cd /home/bigdata/apache-maven-3.8.6/hue-4.5.0/desktop/conf
    vi hue.ini

    # [desktop]
    http_host=node2
    http_port=8000
    time_zone=Asia/Shanghai
    server_user=bigdata
    server_group=bigdata
    default_user=bigdata
    app_blacklist=search

    # [[database]] -- Hue stores its metadata in SQLite by default; switch it to MySQL
    engine=mysql
    host=master
    port=3306
    user=root
    password=root
    # database name
    name=hue

    # around line 1003: path to the Hadoop configuration files
    hadoop_conf_dir=/home/bigdata/hadoop/hadoop/etc/hadoop

Create the database

CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
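
The hue.ini above connects as root/root. If you prefer a dedicated MySQL account for Hue, a minimal sketch (the hue/hue123 credentials are made up; whatever you choose must match user/password in hue.ini):

    mysql -h master -uroot -p -e "
        CREATE USER 'hue'@'%' IDENTIFIED BY 'hue123';
        GRANT ALL PRIVILEGES ON hue.* TO 'hue'@'%';
        FLUSH PRIVILEGES;"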

Initialize the Hue database

    # initialize the database
    cd /home/bigdata/apache-maven-3.8.6/hue-release-4.4.0/target/hue/build/env/bin
    ./hue syncdb
    ./hue migrate
    # check the data

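To confirm the migrations really created Hue's tables in MySQL, a quick check (assuming the root/root credentials and the hue database configured above):

    mysql -h master -uroot -proot -e "SHOW TABLES FROM hue;" | head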

Start Hue

/data/hue/build/env/bin/supervisor
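
supervisor runs in the foreground by default. A minimal sketch for running it in the background and checking that the web UI answers (the path is the one used above; point it at your own build, and use the http_host/http_port you set in hue.ini):

    nohup /data/hue/build/env/bin/supervisor > /tmp/hue-supervisor.log 2>&1 &
    sleep 10
    curl -I http://node2:8000/   # any HTTP response means the Hue web server is up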

Full configuration

core-site.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
    -->

    <!-- Put site-specific property overrides in this file. -->

    <configuration>

        <!-- Communication address of the NameNode's HDFS filesystem;
             the value is the logical name of the HDFS HA cluster -->
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://bigdatacluster</value>
        </property>

        <!-- Directory where the Hadoop cluster stores temporary files -->
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/home/bigdata/module/hadoop-3.1.3/data</value>
        </property>

        <!-- Static user for logging in to the HDFS web UI: bigdata -->
        <property>
            <name>hadoop.http.staticuser.user</name>
            <value>bigdata</value>
        </property>

        <!-- Trash -->
        <property>
            <name>fs.trash.interval</name>
            <value>1</value>
        </property>
        <property>
            <name>fs.trash.checkpoint.interval</name>
            <value>1</value>
        </property>

        <!-- Hosts from which the bigdata (superUser) account may proxy -->
        <property>
            <name>hadoop.proxyuser.bigdata.hosts</name>
            <value>*</value>
        </property>
        <!-- Groups that the bigdata (superUser) account may proxy -->
        <property>
            <name>hadoop.proxyuser.bigdata.groups</name>
            <value>*</value>
        </property>
        <!-- Users that the bigdata (superUser) account may proxy -->
        <property>
            <name>hadoop.proxyuser.bigdata.users</name>
            <value>*</value>
        </property>

        <!-- ZooKeeper servers the ZKFC connects to -->
        <property>
            <name>ha.zookeeper.quorum</name>
            <value>node1:2181,node2:2181,node3:2181</value>
        </property>

        <!-- Hue -->
        <property>
            <name>hadoop.proxyuser.hdfs.hosts</name>
            <value>*</value>
        </property>
        <property>
            <name>hadoop.proxyuser.hdfs.groups</name>
            <value>*</value>
        </property>

        <property>
            <name>hadoop.proxyuser.httpfs.hosts</name>
            <value>*</value>
        </property>
        <property>
            <name>hadoop.proxyuser.httpfs.groups</name>
            <value>*</value>
        </property>

        <property>
            <name>hadoop.proxyuser.hue.hosts</name>
            <value>*</value>
        </property>
        <property>
            <name>hadoop.proxyuser.hue.groups</name>
            <value>*</value>
        </property>

    </configuration>

hdfs-site.xml 

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
    -->

    <!-- Put site-specific property overrides in this file. -->

    <configuration>

        <!-- NameNode data directory -->
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file://${hadoop.tmp.dir}/name</value>
        </property>
        <!-- DataNode data directory -->
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file://${hadoop.tmp.dir}/data</value>
        </property>
        <!-- JournalNode data directory -->
        <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>${hadoop.tmp.dir}/jn</value>
        </property>
        <!-- Name of the fully distributed cluster; must match fs.defaultFS in core-site.xml -->
        <property>
            <name>dfs.nameservices</name>
            <value>bigdatacluster</value>
        </property>
        <!-- NameNodes in the cluster -->
        <property>
            <name>dfs.ha.namenodes.bigdatacluster</name>
            <value>nn1,nn2</value>
        </property>
        <!-- NameNode RPC addresses -->
        <property>
            <name>dfs.namenode.rpc-address.bigdatacluster.nn1</name>
            <value>master1:8020</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.bigdatacluster.nn2</name>
            <value>master2:8020</value>
        </property>
        <!-- NameNode HTTP addresses -->
        <property>
            <name>dfs.namenode.http-address.bigdatacluster.nn1</name>
            <value>master1:9870</value>
        </property>
        <property>
            <name>dfs.namenode.http-address.bigdatacluster.nn2</name>
            <value>master2:9870</value>
        </property>
        <!-- Where NameNode metadata (edits) is stored on the JournalNodes -->
        <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://node1:8485;node2:8485;node3:8485/bigdatacluster</value>
        </property>
        <!-- Proxy provider the client uses to determine which NameNode is active -->
        <property>
            <name>dfs.client.failover.proxy.provider.bigdatacluster</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <!-- Fencing method, so that only one NameNode serves clients at a time -->
        <property>
            <name>dfs.ha.fencing.methods</name>
            <value>sshfence</value>
        </property>
        <!-- sshfence needs key-based SSH login -->
        <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/home/bigdata/.ssh/id_rsa</value>
        </property>

        <!-- Blacklist (decommission) file -->
        <property>
            <name>dfs.hosts.exclude</name>
            <value>/home/bigdata/module/hadoop-3.1.3/etc/blacklist</value>
        </property>

        <!-- Enable automatic NameNode failover -->
        <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
        </property>

        <!-- HUE -->
        <property>
            <name>dfs.webhdfs.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.permissions.enabled</name>
            <value>false</value>
        </property>

    </configuration>

mapred-site.xml

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
    -->

    <!-- Put site-specific property overrides in this file. -->

    <configuration>

        <!-- Enable JVM reuse -->
        <property>
            <name>mapreduce.job.jvm.numtasks</name>
            <value>10</value>
            <description>How many tasks to run per jvm; if set to -1, there is no limit</description>
        </property>

        <!--
        <property>
            <name>mapreduce.job.tracker</name>
            <value>hdfs://master1:8001</value>
            <final>true</final>
        </property>
        -->

        <!-- Run MapReduce jobs on YARN -->
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>

        <property>
            <name>yarn.app.mapreduce.am.env</name>
            <value>HADOOP_MAPRED_HOME=/home/bigdata/module/hadoop-3.1.3</value>
        </property>
        <property>
            <name>mapreduce.map.env</name>
            <value>HADOOP_MAPRED_HOME=/home/bigdata/module/hadoop-3.1.3</value>
        </property>
        <property>
            <name>mapreduce.reduce.env</name>
            <value>HADOOP_MAPRED_HOME=/home/bigdata/module/hadoop-3.1.3</value>
        </property>

        <!-- JobHistory server address -->
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>master1:10020</value>
        </property>
        <!-- JobHistory server web address -->
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>master1:19888</value>
        </property>

    </configuration>

 yarn-site.xml

    <?xml version="1.0"?>
    <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
    -->

    <configuration>

        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>

        <!-- Enable ResourceManager HA -->
        <property>
            <name>yarn.resourcemanager.ha.enabled</name>
            <value>true</value>
        </property>

        <!-- Declare the two ResourceManagers -->
        <property>
            <name>yarn.resourcemanager.cluster-id</name>
            <value>cluster-yarn1</value>
        </property>
        <!-- Logical list of ResourceManagers -->
        <property>
            <name>yarn.resourcemanager.ha.rm-ids</name>
            <value>rm1,rm2</value>
        </property>

        <!-- ========== rm1 configuration ========== -->
        <!-- Hostname of rm1 -->
        <property>
            <name>yarn.resourcemanager.hostname.rm1</name>
            <value>master1</value>
        </property>
        <!-- Web address of rm1 -->
        <property>
            <name>yarn.resourcemanager.webapp.address.rm1</name>
            <value>master1:8088</value>
        </property>
        <!-- Internal (IPC) address of rm1 -->
        <property>
            <name>yarn.resourcemanager.address.rm1</name>
            <value>master1:8032</value>
        </property>
        <!-- Address ApplicationMasters use to request resources from rm1 -->
        <property>
            <name>yarn.resourcemanager.scheduler.address.rm1</name>
            <value>master1:8030</value>
        </property>
        <!-- Address NodeManagers connect to -->
        <property>
            <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
            <value>master1:8031</value>
        </property>

        <!-- ========== rm2 configuration ========== -->
        <!-- Hostname of rm2 -->
        <property>
            <name>yarn.resourcemanager.hostname.rm2</name>
            <value>master2</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address.rm2</name>
            <value>master2:8088</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address.rm2</name>
            <value>master2:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address.rm2</name>
            <value>master2:8030</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
            <value>master2:8031</value>
        </property>

        <!-- ZooKeeper cluster addresses -->
        <property>
            <name>yarn.resourcemanager.zk-address</name>
            <value>node1:2181,node2:2181,node3:2181</value>
        </property>

        <!-- Enable automatic recovery -->
        <property>
            <name>yarn.resourcemanager.recovery.enabled</name>
            <value>true</value>
        </property>

        <!-- Store ResourceManager state in the ZooKeeper cluster -->
        <property>
            <name>yarn.resourcemanager.store.class</name>
            <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        </property>
        <!-- Environment variable inheritance -->
        <property>
            <name>yarn.nodemanager.env-whitelist</name>
            <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
        </property>

        <!-- Enable log aggregation -->
        <property>
            <name>yarn.log-aggregation-enable</name>
            <value>true</value>
        </property>
        <!-- Log aggregation server address -->
        <property>
            <name>yarn.log.server.url</name>
            <value>http://master1:19888/jobhistory/logs</value>
        </property>
        <!-- Keep aggregated logs for 7 days -->
        <property>
            <name>yarn.log-aggregation.retain-seconds</name>
            <value>604800</value>
        </property>

        <!-- Whether to check each task's physical memory usage and kill tasks that
             exceed their allocation; the default is true -->
        <property>
            <name>yarn.nodemanager.pmem-check-enabled</name>
            <value>false</value>
        </property>

        <!-- Whether to check each task's virtual memory usage and kill tasks that
             exceed their allocation; the default is true -->
        <property>
            <name>yarn.nodemanager.vmem-check-enabled</name>
            <value>false</value>
        </property>

        <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>24576</value>
        </property>

    </configuration>

httpfs-site.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
    -->
    <configuration>

        <!-- HUE -->
        <property>
            <name>httpfs.proxyuser.bigdata.hosts</name>
            <value>*</value>
        </property>
        <property>
            <name>httpfs.proxyuser.bigdata.groups</name>
            <value>*</value>
        </property>

    </configuration>

capacity-scheduler.xml 

    <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
    -->
    <configuration>

        <property>
            <name>yarn.scheduler.capacity.maximum-applications</name>
            <value>10000</value>
            <description>
                Maximum number of applications that can be pending and running.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
            <value>0.3</value>
            <description>
                Maximum percent of resources in the cluster which can be used to run
                application masters i.e. controls number of concurrent running
                applications.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.resource-calculator</name>
            <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
            <description>
                The ResourceCalculator implementation to be used to compare
                Resources in the scheduler.
                The default i.e. DefaultResourceCalculator only uses Memory while
                DominantResourceCalculator uses dominant-resource to compare
                multi-dimensional resources such as Memory, CPU etc.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.root.queues</name>
            <value>high,low</value>
            <description>
                The queues at the this level (root is the root queue).
            </description>
        </property>

        <!-- Queue capacity shares -->
        <property>
            <name>yarn.scheduler.capacity.root.high.capacity</name>
            <value>70</value>
            <description>Default queue target capacity.</description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.root.low.capacity</name>
            <value>30</value>
            <description>Default queue target capacity.</description>
        </property>

        <!-- Percentages -->
        <property>
            <name>yarn.scheduler.capacity.root.high.user-limit-factor</name>
            <value>1</value>
            <description>
                Default queue user limit a percentage from 0.0 to 1.0.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.root.low.user-limit-factor</name>
            <value>1</value>
            <description>
                Default queue user limit a percentage from 0.0 to 1.0.
            </description>
        </property>

        <!-- Running state -->
        <property>
            <name>yarn.scheduler.capacity.root.high.maximum-capacity</name>
            <value>100</value>
            <description>
                The maximum capacity of the default queue.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.root.low.state</name>
            <value>RUNNING</value>
            <description>
                The state of the default queue. State can be one of RUNNING or STOPPED.
            </description>
        </property>

        <!-- Permissions -->
        <property>
            <name>yarn.scheduler.capacity.root.high.acl_submit_applications</name>
            <value>*</value>
            <description>
                The ACL of who can submit jobs to the default queue.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.root.low.acl_submit_applications</name>
            <value>*</value>
            <description>
                The ACL of who can submit jobs to the default queue.
            </description>
        </property>

        <!-- Permissions -->
        <property>
            <name>yarn.scheduler.capacity.root.high.acl_administer_queue</name>
            <value>*</value>
            <description>
                The ACL of who can administer jobs on the default queue.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.root.low.acl_administer_queue</name>
            <value>*</value>
            <description>
                The ACL of who can administer jobs on the default queue.
            </description>
        </property>

        <!-- Permissions -->
        <property>
            <name>yarn.scheduler.capacity.root.high.acl_application_max_priority</name>
            <value>*</value>
            <description>
                The ACL of who can submit applications with configured priority.
                For e.g, [user={name} group={name} max_priority={priority} default_priority={priority}]
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.root.low.acl_application_max_priority</name>
            <value>*</value>
            <description>
                The ACL of who can submit applications with configured priority.
                For e.g, [user={name} group={name} max_priority={priority} default_priority={priority}]
            </description>
        </property>

        <!-- Maximum application lifetime -->
        <property>
            <name>yarn.scheduler.capacity.root.high.maximum-application-lifetime</name>
            <value>-1</value>
            <description>
                Maximum lifetime of an application which is submitted to a queue
                in seconds. Any value less than or equal to zero will be considered as
                disabled.
                This will be a hard time limit for all applications in this
                queue. If positive value is configured then any application submitted
                to this queue will be killed after exceeds the configured lifetime.
                User can also specify lifetime per application basis in
                application submission context. But user lifetime will be
                overridden if it exceeds queue maximum lifetime. It is point-in-time
                configuration.
                Note : Configuring too low value will result in killing application
                sooner. This feature is applicable only for leaf queue.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.root.low.maximum-application-lifetime</name>
            <value>-1</value>
            <description>
                Maximum lifetime of an application which is submitted to a queue
                in seconds. Any value less than or equal to zero will be considered as
                disabled.
                This will be a hard time limit for all applications in this
                queue. If positive value is configured then any application submitted
                to this queue will be killed after exceeds the configured lifetime.
                User can also specify lifetime per application basis in
                application submission context. But user lifetime will be
                overridden if it exceeds queue maximum lifetime. It is point-in-time
                configuration.
                Note : Configuring too low value will result in killing application
                sooner. This feature is applicable only for leaf queue.
            </description>
        </property>

        <!-- Default application lifetime -->
        <property>
            <name>yarn.scheduler.capacity.root.high.default-application-lifetime</name>
            <value>-1</value>
            <description>
                Default lifetime of an application which is submitted to a queue
                in seconds. Any value less than or equal to zero will be considered as
                disabled.
                If the user has not submitted application with lifetime value then this
                value will be taken. It is point-in-time configuration.
                Note : Default lifetime can't exceed maximum lifetime. This feature is
                applicable only for leaf queue.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.root.low.default-application-lifetime</name>
            <value>-1</value>
            <description>
                Default lifetime of an application which is submitted to a queue
                in seconds. Any value less than or equal to zero will be considered as
                disabled.
                If the user has not submitted application with lifetime value then this
                value will be taken. It is point-in-time configuration.
                Note : Default lifetime can't exceed maximum lifetime. This feature is
                applicable only for leaf queue.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.node-locality-delay</name>
            <value>40</value>
            <description>
                Number of missed scheduling opportunities after which the CapacityScheduler
                attempts to schedule rack-local containers.
                When setting this parameter, the size of the cluster should be taken into account.
                We use 40 as the default value, which is approximately the number of nodes in one rack.
                Note, if this value is -1, the locality constraint in the container request
                will be ignored, which disables the delay scheduling.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.rack-locality-additional-delay</name>
            <value>-1</value>
            <description>
                Number of additional missed scheduling opportunities over the node-locality-delay
                ones, after which the CapacityScheduler attempts to schedule off-switch containers,
                instead of rack-local ones.
                Example: with node-locality-delay=40 and rack-locality-delay=20, the scheduler will
                attempt rack-local assignments after 40 missed opportunities, and off-switch assignments
                after 40+20=60 missed opportunities.
                When setting this parameter, the size of the cluster should be taken into account.
                We use -1 as the default value, which disables this feature. In this case, the number
                of missed opportunities for assigning off-switch containers is calculated based on
                the number of containers and unique locations specified in the resource request,
                as well as the size of the cluster.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.queue-mappings</name>
            <value></value>
            <description>
                A list of mappings that will be used to assign jobs to queues
                The syntax for this list is [u|g]:[name]:[queue_name][,next mapping]*
                Typically this list will be used to map users to queues,
                for example, u:%user:%user maps all users to queues with the same name
                as the user.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
            <value>false</value>
            <description>
                If a queue mapping is present, will it override the value specified
                by the user? This can be used by administrators to place jobs in queues
                that are different than the one specified by the user.
                The default is false.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.per-node-heartbeat.maximum-offswitch-assignments</name>
            <value>1</value>
            <description>
                Controls the number of OFF_SWITCH assignments allowed
                during a node's heartbeat. Increasing this value can improve
                scheduling rate for OFF_SWITCH containers. Lower values reduce
                "clumping" of applications on particular nodes. The default is 1.
                Legal values are 1-MAX_INT. This config is refreshable.
            </description>
        </property>

        <property>
            <name>yarn.scheduler.capacity.application.fail-fast</name>
            <value>false</value>
            <description>
                Whether RM should fail during recovery if previous applications'
                queue is no longer valid.
            </description>
        </property>

    </configuration>

yarn-env.sh

    # This mainly fixes the problem of Java not being found
    export JAVA_HOME=/home/bigdata/module/jdk1.8.0_161

hadoop-server.sh 

    #!/bin/bash
    if [ $# -lt 1 ]
    then
        echo "No Args Input..."
        exit ;
    fi
    case $1 in
    "start")
        echo " =================== starting the hadoop cluster ==================="
        echo "starting journalnode on node1"
        ssh node1 "hdfs --daemon start journalnode"
        echo "starting journalnode on node2"
        ssh node2 "hdfs --daemon start journalnode"
        echo "starting journalnode on node3"
        ssh node3 "hdfs --daemon start journalnode"

        echo " --------------- starting hdfs ---------------"
        ssh master1 "/home/bigdata/module/hadoop-3.1.3/sbin/start-dfs.sh"
        echo " --------------- starting yarn ---------------"
        ssh master2 "/home/bigdata/module/hadoop-3.1.3/sbin/start-yarn.sh"

        echo " --------------- starting historyserver ---------------"
        ssh master1 "/home/bigdata/module/hadoop-3.1.3/bin/mapred --daemon start historyserver"
        echo " --------------- starting httpfs ---------------"
        ssh master1 "/home/bigdata/module/hadoop-3.1.3/sbin/httpfs.sh start"
        # recommended alternative: /home/bigdata/hadoop/hadoop/bin/hdfs --daemon start httpfs
    ;;
    "stop")
        echo " --------------- stopping httpfs ---------------"
        # recommended alternative: /home/bigdata/hadoop/hadoop/bin/hdfs --daemon stop httpfs
        ssh master1 "/home/bigdata/module/hadoop-3.1.3/sbin/httpfs.sh stop"
        echo " =================== stopping the hadoop cluster ==================="
        echo " --------------- stopping historyserver ---------------"
        ssh master1 "/home/bigdata/module/hadoop-3.1.3/bin/mapred --daemon stop historyserver"

        echo " --------------- stopping yarn ---------------"
        ssh master2 "/home/bigdata/module/hadoop-3.1.3/sbin/stop-yarn.sh"
        echo " --------------- stopping hdfs ---------------"
        ssh master1 "/home/bigdata/module/hadoop-3.1.3/sbin/stop-dfs.sh"

        echo "stopping journalnode on node1"
        ssh node1 "hdfs --daemon stop journalnode"
        echo "stopping journalnode on node2"
        ssh node2 "hdfs --daemon stop journalnode"
        echo "stopping journalnode on node3"
        ssh node3 "hdfs --daemon stop journalnode"
    ;;
    *)
        echo "Input Args Error..."
    ;;
    esac
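
Intended usage of the script above (a sketch; it assumes passwordless SSH from the machine where it runs to master1, master2 and node1-node3):

    chmod +x hadoop-server.sh
    ./hadoop-server.sh start   # bring the whole cluster up, including httpfs for Hue
    ./hadoop-server.sh stop    # shut everything down again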

Hue integration with the HDFS and YARN clusters

hue.ini

    [hadoop]

      # Configuration for HDFS NameNode
      # ------------------------------------------------------------------------
      [[hdfs_clusters]]
        # HA support by using HttpFs

        [[[default]]]
          # Enter the filesystem uri
          fs_defaultfs=hdfs://master1:8020

          # NameNode logical name.
          ## logical_name=

          # Use WebHdfs/HttpFs as the communication mechanism.
          # Domain should be the NameNode or HttpFs host.
          # Default port is 14000 for HttpFs.
          # The corresponding webhdfs/httpfs service has to be started separately.
          webhdfs_url=http://master1:14000/webhdfs/v1

          # Change this if your HDFS cluster is Kerberos-secured
          ## security_enabled=false

          # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
          # have to be verified against certificate authority
          ## ssl_cert_ca_verify=True

          # Directory of the Hadoop configuration
          hadoop_conf_dir=/home/bigdata/module/hadoop-3.1.3/etc/hadoop
          hadoop_bin=/home/bigdata/module/hadoop-3.1.3/bin
          hadoop_hdfs_home=/home/bigdata/module/hadoop-3.1.3

      # Configuration for YARN (MR2)
      # ------------------------------------------------------------------------
      [[yarn_clusters]]

        [[[default]]]
          # Enter the host on which you are running the ResourceManager
          resourcemanager_host=cluster-yarn1

          # The port where the ResourceManager IPC listens on
          resourcemanager_port=8032

          # Whether to submit jobs to this cluster
          submit_to=True

          # Resource Manager logical name (required for HA)
          logical_name=rm1

          # Change this if your YARN cluster is Kerberos-secured
          ## security_enabled=false

          # URL of the ResourceManager API
          resourcemanager_api_url=http://master1:8088

          # URL of the ProxyServer API
          proxy_api_url=http://master1:8088

          # URL of the HistoryServer API
          history_server_api_url=http://master1:19888

          # URL of the Spark History Server
          ## spark_history_server_url=http://localhost:18088

          # Change this if your Spark History Server is Kerberos-secured
          ## spark_history_server_security_enabled=false

          # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
          # have to be verified against certificate authority
          ## ssl_cert_ca_verify=True

        # HA support by specifying multiple clusters.
        # Redefine different properties there.
        # e.g.

        [[[ha]]]
          # Resource Manager logical name (required for HA)
          logical_name=rm2

          # Un-comment to enable
          submit_to=True

          # URL of the ResourceManager API
          resourcemanager_api_url=http://master2:8088
          history_server_api_url=http://master1:19888
          # ...
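
Since Hue's file browser goes through HttpFS on port 14000 (see webhdfs_url above), it is worth checking that the service answers before blaming Hue. A minimal sketch using the standard WebHDFS REST API and the host/user from this article:

    curl "http://master1:14000/webhdfs/v1/?op=LISTSTATUS&user.name=bigdata"
    # a JSON FileStatuses listing of / means HttpFS and the proxyuser settings are working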

When connecting to Hive, increase the timeout, otherwise tasks fail very easily:

server_conn_timeout=3600

Integrating Hue with HBase

HBase start script (the Thrift server must be started, mainly for Hue)

    #!/bin/bash
    case $1 in
    "start"){
        for i in master2
        do
            echo " -------- starting hbase on $i -------"
            ssh $i "/home/bigdata/module/hbase-2.4.9/bin/start-hbase.sh"
            ssh $i "/home/bigdata/module/hbase-2.4.9/bin/hbase-daemons.sh start thrift"
        done
    };;
    "stop"){
        for i in master2
        do
            echo " -------- stopping hbase on $i -------"
            ssh $i "/home/bigdata/module/hbase-2.4.9/bin/hbase-daemons.sh stop thrift"
            ssh $i "/home/bigdata/module/hbase-2.4.9/bin/stop-hbase.sh"
        done
    };;
    esac
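
To check that the Thrift server Hue will talk to is actually listening (a sketch; run it on whichever host hbase_clusters points at, node3:9090 in the configuration below):

    ss -lnt | grep 9090   # or: netstat -lnt | grep 9090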

Hue configuration

    [hbase]
      # Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
      # Use full hostname. If hbase.thrift.ssl.enabled in hbase-site is set to true, https will be used,
      # otherwise it will use http.
      # If using Kerberos we assume GSSAPI SASL, not PLAIN.
      hbase_clusters=(Cluster|node3:9090)

      # HBase configuration directory, where hbase-site.xml is located.
      hbase_conf_dir=/home/bigdata/module/hbase-2.4.9/conf

      # Hard limit of rows or columns per row fetched before truncating.
      ## truncate_limit = 500

      # Should come from hbase-site.xml, do not set. 'framed' is used to chunk up responses,
      # used with the nonblocking server in Thrift but is not supported in Hue.
      # 'buffered' used to be the default of the HBase Thrift Server. Default is buffered when not set in hbase-site.xml.
      ## thrift_transport=buffered

      # Choose whether Hue should validate certificates received from the server.
      ## ssl_cert_ca_verify=true

Reference

https://blog.csdn.net/yxluojiecpp/article/details/126828755
