This section mainly explains Kibana deployment.
Environment preparation
Kibana is a front-end project based on Node.js. It does not store data itself, so it must be used together with an Elasticsearch node/cluster. This section describes the selection of the system environment and the installation of the necessary basic applications.
Environment Selection Policy
- OS
Since Kibana cannot exist independently and needs to be bound to an Elasticsearch node/cluster, this article will mainly use a CentOS 7 system to host its supporting Elasticsearch node. We will also cover installation on other commonly used operating systems.
The systems that Kibana supports are similar to those of Elasticsearch. Roughly speaking, any system that can run Elasticsearch can also host Kibana.
- Memory and CPU
Kibana is a front-end system, and the bound Elasticsearch node can be thought of as the database it accesses data from, so Kibana itself does not require a particularly high configuration.
This article will use a minimal configuration (1 CPU core, 2 GB RAM) that can run an Elasticsearch node smoothly.
Actual system configuration
The system installation and configuration of Kibana are consistent with the installation of Elasticsearch in the previous section. Modify the source and install the necessary tools:
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
  -e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=https://mirrors.ustc.edu.cn/centos|g' \
  -i.bak \
  /etc/yum.repos.d/CentOS-Base.repo && \
yum makecache && \
yum update -y && \
yum install -y epel-release && \
yum install -y curl wget htop unzip && \
yum install -y docker docker-compose
Start the docker service:
systemctl start docker
systemctl enable docker
Download, install and start
tar package installation
China mirror download link (referred to as KIBANA_DOWNLOAD_URL below):
https://repo.huaweicloud.com/kibana/7.13.4/kibana-7.13.4-linux-x86_64.tar.gz
Download and unzip:
- Create a directory: mkdir -p /usr/local/kibana
- Change directory: cd /usr/local/kibana
- Download file: wget -c https://repo.huaweicloud.com/kibana/7.13.4/kibana-7.13.4-linux-x86_64.tar.gz
- Extract: tar xvf kibana-7.13.4-linux-x86_64.tar.gz
The extracted directory contains:
drwxr-xr-x   3 root root    4096 Jul 15 2021 x-pack
drwxr-xr-x  10 root root    4096 Jul 15 2021 src
-rw-r--r--   1 root root    3968 Jul 15 2021 README.txt    --> project documentation
drwxr-xr-x   2 root root    4096 Jul 15 2021 plugins       --> plugin folder, currently empty; custom plugins are placed here
-rw-r--r--   1 root root     740 Jul 15 2021 package.json  --> project package file
-rw-r--r--   1 root root 1476895 Jul 15 2021 NOTICE.txt    --> license notices and warnings about the consequences of violating them
drwxr-xr-x 827 root root   32768 Jul 15 2021 node_modules
-rw-r--r--   1 root root    3860 Jul 15 2021 LICENSE.txt   --> license
drwxr-xr-x   2 root root    4096 Jul 15 2021 data          --> folder where Kibana and its plugins write local files
drwxr-xr-x   2 root root    4096 Jul 15 2021 config        --> configuration file directory
drwxr-xr-x   6 root root    4096 Jul 15 2021 node
drwxr-xr-x   2 root root    4096 Jul 15 2021 bin           --> Kibana's built-in command-line tools
Modify the configuration file ${KIBANA_HOME}/config/kibana.yml
- Add Elasticsearch access address: elasticsearch.hosts: ["http://localhost:9200"]
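As a sketch, a minimal kibana.yml for this setup might look like the following; the server.port and server.host lines are assumptions beyond what the text configures (5601 is Kibana's default port, and binding to 0.0.0.0 allows access from other machines):

```yaml
# Minimal kibana.yml sketch (server.port/server.host shown explicitly as assumptions)
# Port Kibana listens on (5601 is the default)
server.port: 5601
# Bind to all interfaces so a browser on another machine can reach Kibana
server.host: "0.0.0.0"
# Address of the Elasticsearch node/cluster (from the step above)
elasticsearch.hosts: ["http://localhost:9200"]
```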
Service start:
Start with the command ./bin/kibana
Unlike ES, Kibana has no direct background-running parameter; it can only be run in the background via nohup and &.
Full command:
nohup ./bin/kibana > kibana.log 2>&1 &
- Access in the browser through the address http://${node_ip}:5601
Service stop
- The Kibana process is a Node.js service, so its process id cannot be obtained with ps -ef | grep kibana as with Elasticsearch
- Instead, use the netstat command to find the PID bound to Kibana's listening port and perform the kill operation.
Full command:
netstat -anp | grep 5601 | awk '{ print $7 }' | cut -d '/' -f 1 | xargs kill -15
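To see what the extraction stages of that pipeline do, here is a small sketch that runs the same awk/cut steps on a sample netstat line (the PID 3480 and process name are made up for illustration):

```shell
# A sample line in the format printed by `netstat -anp` for a listening socket
line='tcp6       0      0 :::5601                 :::*                    LISTEN      3480/node'

# Field 7 is "PID/program"; cut keeps only the PID before the slash
pid=$(printf '%s\n' "$line" | awk '{ print $7 }' | cut -d '/' -f 1)
echo "$pid"    # prints 3480
```

In the real pipeline this PID is then passed to `kill -15`, which asks the process to shut down gracefully.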
Docker/docker-compose install
download the corresponding image:
docker pull kibana:7.13.4
(Optional) If the target machine cannot access the Internet, you can try to download and import the image from another machine:
- Download the image on the host machine docker pull kibana:7.13.4
- Export the image as a file docker save -o kibana-7.13.4-image.tar docker.io/kibana:7.13.4
- Copy the exported file to the target machine scp kibana-7.13.4-image.tar root@192.168.10.221:/tmp
- Log in to the target machine ssh root@192.168.10.221
- Import the target image docker load < kibana-7.13.4-image.tar
Image Verification:
docker images
Service start:
Starting directly from the command line is not recommended, because Kibana needs to be configured on a shared network with the Elasticsearch node, among other things.
Here we mainly introduce management through docker-compose.
Modify the configuration file vi docker-compose.yml.
# Declare the docker-compose version; environments such as Mac can use 3,
# but some Linux environments only support up to 2
version: "2.2"
# Declare the network used by the nodes
networks:
  bigdata:
    driver: bridge
# Declare the Kibana node
services:
  kibana:
    # The Kibana version must match the ES version,
    # otherwise it will report errors or even fail to start
    image: kibana:7.13.4
    container_name: kibana
    environment:
      # If the ES node and this Kibana node are in the same docker-compose environment,
      # the ES container_name can be used directly; otherwise a full URL is required
      ELASTICSEARCH_HOSTS: http://es01:9200
    depends_on:
      - es01
    ports:
      - 5601:5601
    networks:
      - bigdata
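The compose file above references an es01 service. As a sketch, assuming a single-node Elasticsearch defined in the same file under services, the companion service might look like this (the image name and port mapping are assumptions):

```yaml
  # Hypothetical companion Elasticsearch service for the compose file above
  es01:
    # Version must match the Kibana image
    image: elasticsearch:7.13.4
    container_name: es01
    environment:
      # Run as a standalone single-node cluster
      discovery.type: single-node
    ports:
      - 9200:9200
    networks:
      - bigdata
```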
Or start it separately as follows:
docker run -d --name kibana -p 0.0.0.0:5601:5601 --restart=always kibana:7.13.4
Then enter the container and modify its Elasticsearch connection address:
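As a sketch of that step (the in-container config path is where the official Kibana image keeps it; the ES address shown is an example value):

```shell
# Open a shell inside the running container
docker exec -it kibana /bin/bash

# Inside the container, edit the Kibana config
vi /usr/share/kibana/config/kibana.yml
# e.g. set: elasticsearch.hosts: ["http://192.168.2.11:9200"]
```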
Then restart the service:
docker restart kibana
Service access
In the browser, access Kibana at the address http://192.168.2.11:5601. The first access may be slow, and you may need to wait a while before the page appears.
Then enter the Kibana console and execute:
GET /
get the following result:
{
"name" : "node-11",
"cluster_name" : "es-cluster",
"cluster_uuid" : "q2_ZSSScSYy08nV9psoa9w",
"version" : {
"number" : "7.13.4",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "c5f60e894ca0c61cdbae4f5a686d9f08bcefc942",
"build_date" : "2021-07-14T18:33:36.673943207Z",
"build_snapshot" : false,
"lucene_version" : "8.8.2",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Enabling security settings
As an application closely tied to Elasticsearch, Kibana must have security enabled correspondingly when security authentication is turned on in Elasticsearch. Proceed as follows:
- Turn on security settings in an Elasticsearch cluster
- Generate the password of the required account through the bin/elasticsearch-setup-passwords command
- Modify the configuration file $KIBANA_HOME/config/kibana.yml to configure the password corresponding to the kibana account (it may be the kibana_system account after 7.x) into the relevant parameters
- elasticsearch.username: "kibana_system"
- elasticsearch.password: "password"
- After configuring the security settings, you need to log in with the administrator (elastic) account when you log in for the first time, and perform subsequent configuration and operations
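Putting the pieces together, a sketch of the relevant kibana.yml section after enabling security might look like this (the password value is a placeholder to be replaced with the one generated by elasticsearch-setup-passwords):

```yaml
# kibana.yml sketch with security enabled (placeholder password)
elasticsearch.hosts: ["http://localhost:9200"]
# Built-in account dedicated to Kibana (kibana_system on 7.x)
elasticsearch.username: "kibana_system"
elasticsearch.password: "password"
```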
For the configuration file location in each environment, refer to:
- rpm package installation: /etc/kibana/kibana.yml
- tar package installation: $KIBANA_HOME/config/kibana.yml
Docker installation:
- Map the local configuration file to the docker node through the -v parameter: docker run -v $KIBANA_CONFIG_PATH/kibana.yml:/usr/share/kibana/config/kibana.yml docker.io/kibana:7.13.4
- Or modify the environment configuration in the docker-compose.yml file:
...
environment:
  # note these lines
  ELASTICSEARCH_HOSTS: http://es01:9200
  ELASTICSEARCH_USERNAME: kibana
  ELASTICSEARCH_PASSWORD: kibana-password
...
MacOS:
- tar package installation: same as above
- Brew installation: the configuration path is shown by brew info kibana-full; in this case: /usr/local/etc/kibana/config/kibana.yml
Windows:
- .zip package installation: the same as the tar package installation above
Common parameter optimization
Kibana's functionality is fairly complete out of the box, and parameter tuning is not usually necessary. This section only covers how to enable Chinese display.
- Modify the corresponding configuration file in the same way as in the previous section
- Set the parameter i18n.locale to zh-CN
- The corresponding parameter in docker-compose.yml is I18N_LOCALE
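Concretely, the two variants might be written as follows (sketch):

```yaml
# tar/rpm installation: add to kibana.yml
i18n.locale: "zh-CN"

# docker-compose.yml: under the kibana service's environment section
# environment:
#   I18N_LOCALE: zh-CN
```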
Common problems and solutions
This section analyzes some problems often encountered during the installation and deployment of Kibana nodes and provides simple solutions.
Kibana should not be run as root. Use --allow-root to continue.
- Kibana, like ES, cannot be started directly from the root account
- As the prompt suggests, it can be started by adding the parameter --allow-root
- Full command: ./bin/kibana --allow-root
FATAL Error: Port 5601 is already in use. Another instance of Kibana may be running!
- Port 5601 is already occupied
Repair method:
- Use the netstat -anp | grep 5601 command to find the process bound to port 5601:
netstat -anp | grep 5601
tcp6       0      0 :::5601                 :::*                    LISTEN      3480/docker-proxy
- Determine whether to close the existing process according to the process information
Unable to connect to ES
log [10:08:28.980] [error][elasticsearch][monitoring] Request error, retrying GET http://localhost:9200/_xpack => connect ECONNREFUSED 127.0.0.1:9200
log [10:08:28.993] [warning][elasticsearch][monitoring] Unable to revive connection: http://localhost:9200/
log [10:08:28.994] [warning][elasticsearch][monitoring] No living connections
log [10:08:28.995] [warning][licensing][plugins] License information could not be obtained from Elasticsearch due to Error: No Living connections error
log [10:08:29.016] [warning][monitoring][monitoring][plugins] X-Pack Monitoring Cluster Alerts will not be available: No Living connections
log [10:08:29.025] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:9200
log [10:08:29.059] [error][savedobjects-service] Unable to retrieve version information from Elasticsearch nodes.
log [10:08:31.476] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:9200
Kibana cannot connect to the Elasticsearch URL directly; there may be the following reasons:
- Address configuration error
- The network between the current node and the target address is blocked
- The listening address/port of the Elasticsearch configuration is wrong
Repair method:
- Check that the address/domain name is correct.
- Use the curl http://localhost:9200/ command to test whether the current node can connect to the target address normally.
- Adjust and debug to the correct connection.
Authentication failed
log [10:19:42.007] [error][data][elasticsearch] [security_exception]: missing authentication credentials for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]
log [10:19:42.042] [error][savedobjects-service] Unable to retrieve version information from Elasticsearch nodes.
log [10:19:42.047] [warning][licensing][plugins] License information could not be obtained from Elasticsearch due to [security_exception] missing authentication credentials for REST request [/_xpack], with { header={ WWW-Authenticate="Basic realm=\"security\" charset=\"UTF-8\"" } } :: {"path":"/_xpack","statusCode":401,"response":"{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"missing authentication credentials for REST request [/_xpack]\",\"header\":{\"WWW-Authenticate\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}}],\"type\":\"security_exception\",\"reason\":\"missing authentication credentials for REST request [/_xpack]\",\"header\":{\"WWW-Authenticate\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}},\"status\":401}","wwwAuthenticateDirective":"Basic realm=\"security\" charset=\"UTF-8\""} error
log [10:19:42.050] [warning][monitoring][monitoring][plugins] X-Pack Monitoring Cluster Alerts will not be available: [security_exception] missing authentication credentials for REST request [/_xpack], with { header={ WWW-Authenticate="Basic realm=\"security\" charset=\"UTF-8\"" } }
log [10:19:44.442] [error][data][elasticsearch] [security_exception]: missing authentication credentials for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]
log [10:19:46.941] [error][data][elasticsearch] [security_exception]: missing authentication credentials for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]
- Kibana does not have security settings configured to properly connect to Elasticsearch nodes/clusters.
- When the Elasticsearch node/cluster has security settings enabled, all RESTful access must include authentication, including Kibana's access.
Repair method:
- (If not set) Select a node in the Elasticsearch cluster and run the bin/elasticsearch-setup-passwords command to generate the password for the required account
- Modify the configuration file $KIBANA_HOME/config/kibana.yml to configure the password for the kibana account (it may be the kibana_system account after 7.x) in the relevant parameters
- elasticsearch.username: "kibana_system"
- elasticsearch.password: "password"
- After configuring the security settings, you need to log in with the administrator (elastic) account when you log in for the first time, and perform subsequent configuration and operations
Error: Unable to find a match: docker-compose, i.e. the docker-compose installation package cannot be found
- The yum repository may lack up-to-date package information, or the minimal system image may lack the corresponding package metadata
Repair method:
- Replace the repository address in the source file with the USTC mirror:
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
  -e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=https://mirrors.ustc.edu.cn/centos|g' \
  -i.bak \
  /etc/yum.repos.d/CentOS-Base.repo
- Install epel-release first (to expand the available package repository)
- Then proceed with the installation
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?, i.e. the docker process is not started
- Docker does not start automatically after installation, and until it is enabled, it will not restart automatically after a server reboot.
Repair method:
- Start the docker process: systemctl start docker
- Set docker to start with the system: systemctl enable docker
Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
- Failed to access the docker repository; some nodes may not be able to access the external network directly to download the docker image
Repair method:
1. Enable external network access
2. (Or) download the corresponding image on another node that can access the external network: `docker pull kibana:7.10.1`
3. Export the image as a file: `docker save -o kibana-7.10.1-image.tar docker.io/kibana:7.10.1`
4. Copy the exported file to the target machine: `scp kibana-7.10.1-image.tar root@192.168.10.221:/tmp`
5. Log in to the target machine: `ssh root@192.168.10.221`
6. Import the target image: `docker load < kibana-7.10.1-image.tar`
Error response from daemon: manifest for kibana:7.9.11 not found: manifest unknown: manifest unknown
- The target image could not be found. The specified version of the image may not be found in the docker repository
- Log in to the image repository to search for a suitable version ( http://dockerhub.com/ )
- (Or) search for a suitable image by command docker search kibana