
Hue Installation

This example installs Hue on CentOS 7. A fair amount of software has to be installed first, because the build depends on many Python and C development packages.

  • Install the Hue dependency packages on CentOS 7
yum install ant gcc gcc-c++ krb5-devel mysql-devel
yum install make gcc-c++
yum install python-devel openssl-devel libidn-devel libidn zlib-devel
# dependencies required for sasl.h
yum install cyrus-sasl-lib.x86_64 cyrus-sasl-devel.x86_64 libgsasl-devel.x86_64 saslwrapper-devel.x86_64
# Python XML dependencies
yum install libxslt-devel
pip install lxml

# c/_cffi_backend.c:15:17: fatal error: ffi.h:
yum install libffi-devel

# fix for openssl/opensslv.h not found
yum install openssl-devel

# fatal error: lber.h: No such file or directory
yum install openldap-devel

# egg_info failed with error code 1 in
yum install mysql-devel

# sqlite3.h: No such file or directory
yum install gmp-devel sqlite-devel

The dependency packages above are not listed in any particular order; essentially all of them will be needed.

  • Install Hue
    1 Download Hue
    Go to the download page at http://gethue.com/hue-3-12-th... and pick the version that matches your needs.
    2 Build Hue

      tar -zxvf hue-3.12.0.tgz -C /usr/local/
      cd /usr/local/hue-3.12.0
      make install

    If the build completes without errors you can move on to configuration; the /usr/local/hue-3.12.0 source directory can then be deleted.
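    By default make install places Hue under /usr/local/hue (PREFIX defaults to /usr/local), which is the path used in the remaining steps. A minimal cleanup sketch, assuming that default prefix:

      ls /usr/local/hue/build/env/bin/hue   # confirm the installed copy exists first
      rm -rf /usr/local/hue-3.12.0          # then remove the build/source tree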
    3 Configure the Hue database
    Edit /usr/local/hue/desktop/conf/hue.ini (e.g. with vim), find the [[database]] section and change it as follows:

      engine=mysql
      host=ambari-ttt-master
      port=3306
      user=hue
      password=123456
      name=hue
      schema=hue
    If name=hue is not set, the database initialization later on will fail.
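    This configuration only tells Hue where the database lives; the hue database and the MySQL account still have to exist. A minimal preparation sketch, assuming the hue/123456 credentials shown above (adjust the password and allowed host for your environment):

      # log in as a MySQL account that can create databases, then run:
      mysql -u root -p
      CREATE DATABASE hue DEFAULT CHARACTER SET utf8;
      GRANT ALL PRIVILEGES ON hue.* TO 'hue'@'%' IDENTIFIED BY '123456';
      FLUSH PRIVILEGES;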

    4 Configure desktop
    Set the HTTP host/port, time zone, run-as user and related options in the [desktop] section:

     http_host=0.0.0.0
     http_port=8888
     server_user=hue
     server_group=hue
     default_user=hue
     default_hdfs_superuser=hdfs
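
     The time zone mentioned above is not shown in this snippet; the [desktop] section also accepts a time_zone option (the default is America/Los_Angeles), for example:

     time_zone=Asia/Shanghai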

    5 Initialize the database

    cd /usr/local/hue/build/env/bin/
    ./hue syncdb
    # syncdb asks interactively whether to create a superuser now; answering no is fine
    ./hue migrate
    mysql -h xxx -u hue -p
    # log in to the database and check that it has been initialized
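    One way to verify the initialization, assuming the connection details configured above, is to list the tables that were created:

      mysql -h ambari-ttt-master -u hue -p hue -e 'SHOW TABLES;'
      # expect a few dozen tables, such as auth_user and django_content_type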

    6 Start and restart Hue

      # start Hue
      cd /usr/local/hue/build/env/bin
      ./supervisor &
      # stop Hue; the supervisor daemon re-spawns Hue processes, so kill everything owned by the hue user
      pkill -U hue
      # or
      killall -u hue
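
    A restart is simply a stop followed by a start; a sketch assuming the paths above and a log location of your choice:

      pkill -U hue
      cd /usr/local/hue/build/env/bin
      nohup ./supervisor > /tmp/hue-supervisor.log 2>&1 &   # start again, detached from the terminal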

    7 Log in to the Hue web UI
    Open http://x.x.x.x:8888 and create the administrator account.

Hue Component Configuration

HDFS / YARN Configuration

[[hdfs_clusters]]
    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://ambari-ttt-master:8020

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      webhdfs_url=http://ambari-ttt-master:50070/webhdfs/v1

      # Directory of the Hadoop configuration
      hadoop_conf_dir=$HADOOP_CONF_DIR
      
[[yarn_clusters]]

    [[[default]]]
      resourcemanager_host=ambari-ttt-master
      resourcemanager_port=8141
      submit_to=True

      # URL of the ResourceManager API
      resourcemanager_api_url=http://ambari-ttt-master:8088

      # URL of the ProxyServer API
      proxy_api_url=http://ambari-ttt-master:8088

      # URL of the HistoryServer API
      history_server_api_url=http://ambari-ttt-master:19888

      # URL of the Spark History Server
      spark_history_server_url=http://ambari-ttt-master:18088

In Ambari, you also need to grant Hue impersonation rights by adding the following to HDFS's Custom core-site (restart the affected HDFS services afterwards):

hadoop.proxyuser.hue.groups=*
hadoop.proxyuser.hue.hosts=*
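
A quick way to confirm that WebHDFS is reachable and that the proxy-user settings took effect, assuming the hostnames above (admin is just an example user to impersonate):

curl "http://ambari-ttt-master:50070/webhdfs/v1/?op=LISTSTATUS"
curl "http://ambari-ttt-master:50070/webhdfs/v1/?op=GETHOMEDIRECTORY&user.name=hue&doas=admin"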
