Installing ELK on Linux

Introduction to ELK

ELK is an acronym for three pieces of open source software: Elasticsearch, Logstash, and Kibana. A fourth tool, Filebeat, was later added to the family: a lightweight log collection agent. Filebeat uses few resources, which makes it well suited to collecting logs on individual servers and shipping them to Logstash; the official documentation recommends it for this purpose as well.

Elasticsearch is an open source distributed search engine providing three core capabilities: data collection, analysis, and storage. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash is a tool for collecting, analyzing, and filtering logs, and it supports a large number of data acquisition methods. It typically runs in a client/server architecture: the client side is installed on the hosts whose logs need collecting, and the server side filters and transforms the logs received from each node before forwarding them to Elasticsearch.

Kibana is also an open source and free tool. It provides a friendly web UI for analyzing the logs held by Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

Filebeat belongs to the Beats family. Beats currently includes four tools:

  1. Packetbeat (collects network traffic data)
  2. Topbeat (collects system-, process-, and filesystem-level CPU and memory usage data)
  3. Filebeat (collects file data)
  4. Winlogbeat (collects Windows event log data)

In general, ELK is used to aggregate massive volumes of scattered data and extract and analyze the information in it. It is very capable at log aggregation for distributed systems, big-data analysis, fast retrieval of business data, and monitoring the health of every server in a cluster.

Take log collection in a distributed system as an example. With the rise of microservices and distributed deployments, the old practice of writing log files to a fixed location on a specific server no longer meets the need. There are more and more servers, and backend service clusters span many of them, so the logs become more and more scattered. Whether in development, testing, or production, pinpointing a problem from logs keeps getting harder, and finding the useful information forces ops and developers to dig through machine after machine. This is where ELK comes in: it collects and aggregates the logs from the whole service cluster and indexes them, so that when a problem occurs, locating it is as efficient and simple as using a search engine like Google.

Installation

Environment

Name           Version   Note
RHEL/CentOS    8.x       CentOS Stream
JDK            11.x
elasticsearch  7.10.x

A single machine is generally enough, but to stay closer to real-world usage I split this starter ELK deployment across three machines.

The exact layout is as follows:

Host        IP              Service
thinkvmc01  192.168.50.132  ElasticSearch
thinkvmc02  192.168.50.67   Logstash
thinkvmc03  192.168.50.51   Kibana

Preparation

ELK requires Java; Java 11 is recommended. I won't go into JDK installation here.

# First check the JDK environment
[thinktik@thinkvm01 env]$ java -version
java version "11.0.9" 2020-10-20 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.9+7-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.9+7-LTS, mixed mode)

Starting the installation

Installing ELK is not hard; just follow the official documentation, linked below:

Open Source Search & Analytics · Elasticsearch

We install Elasticsearch first, starting from the generic Linux tarball. For simplicity you can also download the prebuilt package for your specific Linux distribution, which makes installation even easier at the cost of some flexibility.

Installing ES on thinkvmc01
# Downloaded; now extract
[thinktik@thinkvm01 env]$ ls
elasticsearch-7.10.1-linux-x86_64.tar.gz  jdk11
[thinktik@thinkvm01 env]$ tar -zxvf elasticsearch-7.10.1-linux-x86_64.tar.gz 
elasticsearch-7.10.1/
elasticsearch-7.10.1/lib/

...

elasticsearch-7.10.1/config/jvm.options.d/
elasticsearch-7.10.1/logs/
elasticsearch-7.10.1/plugins/



[thinktik@thinkvm01 env]$ mv elasticsearch-7.10.1 elasticsearch
# Enter the installation directory
[thinktik@thinkvm01 env]$ cd elasticsearch
[thinktik@thinkvm01 elasticsearch]$ ls
bin  config  jdk  lib  LICENSE.txt  logs  modules  NOTICE.txt  plugins  README.asciidoc
[thinktik@thinkvm01 elasticsearch]$ cd config/
[thinktik@thinkvm01 config]$ ls
elasticsearch.yml  jvm.options  jvm.options.d  log4j2.properties  role_mapping.yml  roles.yml  users  users_roles
# Edit the config to bind our NIC. The default is 127.0.0.1, which would keep Logstash and Kibana on the other machines from reaching this ES node
[thinktik@thinkvm01 config]$ vim elasticsearch.yml

# Changes as follows

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
# Bind address
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
# Port, 9200 by default
http.port: 9200
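If you prefer not to edit the file by hand, the same two changes can be scripted with sed. A minimal sketch, assuming the default commented-out keys shipped in elasticsearch.yml; it runs against a scratch copy so nothing real is touched:

```shell
# Non-interactive equivalent of the vim edit above. The real file would be
# something like ~/env/elasticsearch/config/elasticsearch.yml (path assumed);
# here we demo on a scratch copy:
cfg=$(mktemp)
printf '#network.host: 192.168.0.1\n#http.port: 9200\n' > "$cfg"
# Uncomment and set both keys, exactly as done by hand:
sed -i 's/^#network.host:.*/network.host: 0.0.0.0/; s/^#http.port:.*/http.port: 9200/' "$cfg"
cat "$cfg"
```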


[thinktik@thinkvm01 config]$ cd ..
[thinktik@thinkvm01 elasticsearch]$ ls
bin  config  jdk  lib  LICENSE.txt  logs  modules  NOTICE.txt  plugins  README.asciidoc
[thinktik@thinkvm01 elasticsearch]$ cd bin/
[thinktik@thinkvm01 bin]$ ls
elasticsearch           elasticsearch-env-from-file  elasticsearch-setup-passwords     x-pack-env
elasticsearch-certgen   elasticsearch-keystore       elasticsearch-shard               x-pack-security-env
elasticsearch-certutil  elasticsearch-migrate        elasticsearch-sql-cli             x-pack-watcher-env
elasticsearch-cli       elasticsearch-node           elasticsearch-sql-cli-7.10.1.jar
elasticsearch-croneval  elasticsearch-plugin         elasticsearch-syskeygen
elasticsearch-env       elasticsearch-saml-metadata  elasticsearch-users

# Start
[thinktik@thinkvm01 bin]$ ./elasticsearch
[2021-01-17T17:59:19,715][INFO ][o.e.n.Node               ] [thinkvm01] version[7.10.1], pid[1460], build[default/tar/1c34507e66d7db1211f66f3513706fdf548736aa/2020-12-05T01:00:33.671820Z], OS[Linux/4.18.0-257.el8.x86_64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/15.0.1/15.0.1+9]
[2021-01-17T17:59:19,718][INFO ][o.e.n.Node               ] [thinkvm01] JVM home [/home/thinktik/env/elasticsearch/jdk], using bundled JDK [true]


....

# It fails here, and the errors are self-explanatory
ERROR: [2] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /home/thinktik/env/elasticsearch/logs/elasticsearch.log
[2021-01-17T17:59:38,542][INFO ][o.e.n.Node               ] [thinkvm01] stopping ...
[2021-01-17T17:59:38,571][INFO ][o.e.n.Node               ] [thinkvm01] stopped
[2021-01-17T17:59:38,571][INFO ][o.e.n.Node               ] [thinkvm01] closing ...
[2021-01-17T17:59:38,588][INFO ][o.e.n.Node               ] [thinkvm01] closed


# Adjust the system configuration as the errors suggest
[thinktik@thinkvmc01 bin]$ vim /etc/security/limits.conf
[thinktik@thinkvmc01 bin]$ su
Password: 

# Add the following lines:
* soft nofile 65536
* hard nofile 131072
* soft nproc 4096
* hard nproc 8192

# Start again
[thinktik@thinkvmc01 bin]$ ./elasticsearch
# Still failing, so keep fixing
ERROR: [2] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /home/thinktik/env/elasticsearch/logs/elasticsearch.log
[2021-01-17T18:01:23,596][INFO ][o.e.n.Node               ] [thinkvm01] stopping ...
[2021-01-17T18:01:23,621][INFO ][o.e.n.Node               ] [thinkvm01] stopped
[2021-01-17T18:01:23,621][INFO ][o.e.n.Node               ] [thinkvm01] closing ...
[2021-01-17T18:01:23,635][INFO ][o.e.n.Node               ] [thinkvm01] closed


# Keep fixing
[thinktik@thinkvmc01 bin]$ su
Password: 
[root@thinkvm01 bin]# sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144
# Start again; one check still fails
ERROR: [1] bootstrap checks failed
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /home/thinktik/env/elasticsearch/logs/elasticsearch.log
[2021-01-17T18:03:15,709][INFO ][o.e.n.Node               ] [thinkvm01] stopping ...
[2021-01-17T18:03:15,743][INFO ][o.e.n.Node               ] [thinkvm01] stopped
[2021-01-17T18:03:15,743][INFO ][o.e.n.Node               ] [thinkvm01] closing ...
[2021-01-17T18:03:15,757][INFO ][o.e.n.Node               ] [thinkvm01] closed

# Go back to elasticsearch.yml once more and add the following:
node.name: node-1
cluster.initial_master_nodes: ["node-1"]

# Start once more
[thinktik@thinkvmc01 bin]$ ./elasticsearch
# Success
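One caveat about the sysctl fix above: `sysctl -w` only changes the running kernel, so `vm.max_map_count` reverts on reboot. A sketch of persisting it (the drop-in filename below is my own choice, not from the ES docs); the only line actually executed just reads back the current value:

```shell
# sysctl -w is lost on reboot; persist the value with a sysctl.d drop-in
# (any *.conf name under /etc/sysctl.d works; this one is an assumption):
#   echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
#   sudo sysctl --system
# Read back the live value (no root needed):
cat /proc/sys/vm/max_map_count
```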


Verification


[thinktik@thinkvm01 ~]$ netstat -nlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
# ES is listening on 9200 and 9300
tcp6       0      0 :::9200                 :::*                    LISTEN      2157/java           
tcp6       0      0 :::9300                 :::*                    LISTEN      2157/java           
tcp6       0      0 :::22                   :::*                    LISTEN      -                   
raw6       0      0 :::58                   :::*                    7           -   

# Open the ports in the firewall
[root@thinkvm01 thinktik]# firewall-cmd --zone=public --add-port=9200/tcp --permanent
success
[root@thinkvm01 thinktik]# firewall-cmd --zone=public --add-port=9300/tcp --permanent
success
[root@thinkvm01 thinktik]# firewall-cmd --reload
success


# Verify thinkvm01's ES from the thinkvm02 host. Opening the URL below in a browser works too
[thinktik@thinkvm02 ~]$ curl -i http://192.168.50.132:9200/
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 532

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "lgd2BGWnRma1GJzOwG9Urg",
  "version" : {
    "number" : "7.10.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "1c34507e66d7db1211f66f3513706fdf548736aa",
    "build_date" : "2020-12-05T01:00:33.671820Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Opening the same address in a browser works too:
(screenshot: the ES JSON response in the browser)

At this point ES is installed.
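Beyond the banner response, a quick write/read smoke test confirms the node accepts documents. The curl calls below need the live node (IP from the table above) and use `demo-index`, a name made up for illustration, so they are shown commented; only the payload construction runs locally:

```shell
# Index and then search a test document (requires the running node;
# demo-index is a hypothetical index name):
#   curl -X PUT 'http://192.168.50.132:9200/demo-index'
#   curl -X POST 'http://192.168.50.132:9200/demo-index/_doc' \
#        -H 'Content-Type: application/json' -d '{"msg": "hello elk"}'
#   curl 'http://192.168.50.132:9200/demo-index/_search?q=msg:hello'
# The request body is plain JSON; printf builds the same payload:
printf '{"msg": "%s"}\n' "hello elk"
```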

Installing Logstash on thinkvmc02
# Verify Java
[thinktik@thinkvm02 ~]$ java -version
java version "11.0.9" 2020-10-20 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.9+7-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.9+7-LTS, mixed mode)

[thinktik@thinkvm02 env]$ ls
jdk11  logstash-7.10.1-linux-x86_64.tar.gz
[thinktik@thinkvm02 env]$ tar -zxvf logstash-7.10.1-linux-x86_64.tar.gz
[thinktik@thinkvm02 env]$ ls
jdk11  logstash-7.10.1  logstash-7.10.1-linux-x86_64.tar.gz
[thinktik@thinkvm02 env]$ cd logstash-7.10.1
[thinktik@thinkvm02 logstash-7.10.1]$ ls
bin     CONTRIBUTORS  Gemfile       jdk  LICENSE.txt    logstash-core-plugin-api  NOTICE.TXT  vendor
config  data          Gemfile.lock  lib  logstash-core  modules                   tools       x-pack
[thinktik@thinkvm02 logstash-7.10.1]$ cd config/
[thinktik@thinkvm02 config]$ ls
jvm.options  log4j2.properties  logstash-sample.conf  logstash.yml  pipelines.yml  startup.options
[thinktik@thinkvm02 config]$ cp logstash-sample.conf logstash.conf
[thinktik@thinkvm02 config]$ vim logstash.conf 
# Just point the ES address at the right host
    input {
      beats {
        port => 5044
      }
    }
    
    output {
      elasticsearch {
        #hosts => ["http://localhost:9200"]
        #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        hosts => ["http://192.168.50.132:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
            
        #user => "elastic"
        #password => "changeme"
      }
    }
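A note on the `index` line above: `%{[@metadata][beat]}` and `%{[@metadata][version]}` are filled in from the Beats event metadata, and `%{+YYYY.MM.dd}` is Logstash's date substitution, so events land in one index per day (e.g. filebeat-7.10.1-2021.01.17). The shape of the resolved name, reproduced with date(1) for today's date:

```shell
# What the index pattern resolves to for a Filebeat 7.10.1 event stamped today
# (at runtime the beat name and version come from event metadata):
date +"filebeat-7.10.1-%Y.%m.%d"
```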


[thinktik@thinkvmc02 config]$ vim logstash.yml 
# Set this to this host's own IP
    # ------------ Metrics Settings --------------
    #
    # Bind address for the metrics REST endpoint
    #
    http.host: 192.168.50.67

[thinktik@thinkvm02 config]$ cd ..
[thinktik@thinkvm02 logstash-7.10.1]$ ls
bin     CONTRIBUTORS  Gemfile       jdk  LICENSE.txt    logstash-core-plugin-api  NOTICE.TXT  vendor
config  data          Gemfile.lock  lib  logstash-core  modules                   tools       x-pack
[thinktik@thinkvm02 logstash-7.10.1]$ cd bin/
[thinktik@thinkvm02 bin]$ ls
benchmark.bat  dependencies-report  logstash           logstash-keystore.bat  logstash-plugin.bat  pqrepair      setup.bat
benchmark.sh   ingest-convert.bat   logstash.bat       logstash.lib.sh        pqcheck              pqrepair.bat  system-install
cpdump         ingest-convert.sh    logstash-keystore  logstash-plugin        pqcheck.bat          ruby
# Start
[thinktik@thinkvm02 bin]$ ./logstash -f ../config/logstash.conf 
Using bundled JDK: /home/thinktik/env/logstash-7.10.1/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1529/jruby1493929140020547441jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /home/thinktik/env/logstash-7.10.1/logs which is now configured via log4j2.properties
[2021-01-17T18:35:42,993][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.10.1", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
[2021-01-17T18:35:43,178][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/home/thinktik/env/logstash-7.10.1/data/queue"}
[2021-01-17T18:35:43,194][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/home/thinktik/env/logstash-7.10.1/data/dead_letter_queue"}
[2021-01-17T18:35:43,596][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2021-01-17T18:35:43,637][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"e0a6425a-6a4b-46fc-a710-607de1cc4268", :path=>"/home/thinktik/env/logstash-7.10.1/data/uuid"}
[2021-01-17T18:35:45,292][INFO ][org.reflections.Reflections] Reflections took 29 ms to scan 1 urls, producing 23 keys and 47 values 
[2021-01-17T18:35:46,114][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.50.132:9200/]}}
[2021-01-17T18:35:46,326][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://192.168.50.132:9200/"}
[2021-01-17T18:35:46,377][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2021-01-17T18:35:46,381][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}

# The log confirms the ES address is right
[2021-01-17T18:35:46,474][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.50.132:9200"]}
[2021-01-17T18:35:46,593][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/home/thinktik/env/logstash-7.10.1/config/logstash.conf"], :thread=>"#<Thread:0xa42bab3 run>"}
[2021-01-17T18:35:46,649][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-01-17T18:35:46,741][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2021-01-17T18:35:46,767][INFO ][logstash.outputs.elasticsearch][main] Installing elasticsearch template to _template/logstash
[2021-01-17T18:35:47,372][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.78}
# The log shows 5044 and 9600 are being listened on
[2021-01-17T18:35:47,407][INFO ][logstash.inputs.beats    ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2021-01-17T18:35:47,429][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-01-17T18:35:47,487][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2021-01-17T18:35:47,588][INFO ][org.logstash.beats.Server][main][dcf5d109667b76252ff43d3baac1388252198de237a1586035b93eb26361d868] Starting server on port: 5044
[2021-01-17T18:35:47,908][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}


# Check the listening ports
[thinktik@thinkvm02 ~]$  netstat -nlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
tcp6       0      0 :::22                   :::*                    LISTEN      -                   
tcp6       0      0 192.168.50.67:9600      :::*                    LISTEN      1529/java           
tcp6       0      0 :::5044                 :::*                    LISTEN      1529/java           
raw6       0      0 :::58                   :::*                    7           -                 
# Open the firewall ports
[thinktik@thinkvm02 ~]$ su
Password: 
[root@thinkvm02 thinktik]# firewall-cmd --zone=public --add-port=9600/tcp --permanent
success
[root@thinkvm02 thinktik]# firewall-cmd --zone=public --add-port=5044/tcp --permanent
success
[root@thinkvm02 thinktik]# firewall-cmd --reload
success

Logstash is now installed.

Installing Kibana on thinkvmc03
[thinktik@thinkvm03 ~]$ java -version
java version "11.0.9" 2020-10-20 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.9+7-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.9+7-LTS, mixed mode)

[thinktik@thinkvm03 env]$ ls
jdk11  kibana-7.10.1-linux-x86_64.tar.gz
[thinktik@thinkvm03 env]$ tar -zxvf kibana-7.10.1-linux-x86_64.tar.gz 
[thinktik@thinkvm03 env]$ cd kibana-7.10.1-linux-x86_64
[thinktik@thinkvm03 kibana-7.10.1-linux-x86_64]$ ls
bin  config  data  LICENSE.txt  node  node_modules  NOTICE.txt  package.json  plugins  README.txt  src  x-pack
# Edit the configuration
[thinktik@thinkvm03 kibana-7.10.1-linux-x86_64]$ cd config/
[thinktik@thinkvm03 config]$ ls
kibana.yml  node.options
[thinktik@thinkvm03 config]$ vim kibana.yml
# Set this to this host's own IP; the port defaults to 5601
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.50.51"
# Point this at the ES instance
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.50.132:9200"]

# Start
[thinktik@thinkvmc03 bin]$ ./kibana

  log   [16:04:24.455] [info][status][plugin:kibana@6.7.1] Status changed from uninitialized to green - Ready
  log   [16:04:24.507] [info][status][plugin:elasticsearch@6.7.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [16:04:24.510] [info][status][plugin:xpack_main@6.7.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [16:04:24.523] [info][status][plugin:graph@6.7.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch


# Check
[root@thinkvm03 thinktik]# netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      885/sshd            
tcp        0      0 192.168.50.51:5601      0.0.0.0:*               LISTEN      1631/./../node/bin/ 
tcp6       0      0 :::22                   :::*                    LISTEN      885/sshd            
raw6       0      0 :::58                   :::*                    7           878/NetworkManager  

# Open the port in the firewall
[root@thinkvmc03 config]# firewall-cmd --zone=public --add-port=5601/tcp --permanent
success
[root@thinkvmc03 config]# firewall-cmd --reload
success

Kibana in action:
(screenshot: the Kibana web UI)

This completes the basic ELK setup.

Installing Filebeat on thinkvm03

Next we install Filebeat, using an ELKF architecture to collect log4j logs.

For convenience, Filebeat goes on thinkvmc03, pairing with the Logstash instance on thinkvmc02 in a distributed layout to simulate collecting and shipping log data.

The official installation guide is straightforward as well; it's all basic operations.

[thinktik@thinkvm03 env]$ ls
filebeat-7.10.1-linux-x86_64.tar.gz  jdk11  kibana-7.10.1-linux-x86_64  kibana-7.10.1-linux-x86_64.tar.gz
[thinktik@thinkvm03 env]$ tar -zxvf filebeat-7.10.1-linux-x86_64.tar.gz 
[thinktik@thinkvm03 env]$ ls
filebeat-7.10.1-linux-x86_64         jdk11                       kibana-7.10.1-linux-x86_64.tar.gz
filebeat-7.10.1-linux-x86_64.tar.gz  kibana-7.10.1-linux-x86_64
[thinktik@thinkvm03 env]$ cd filebeat-7.10.1-linux-x86_64
[thinktik@thinkvm03 filebeat-7.10.1-linux-x86_64]$ ls
fields.yml  filebeat  filebeat.reference.yml  filebeat.yml  kibana  LICENSE.txt  module  modules.d  NOTICE.txt  README.md
# Edit the config so Filebeat feeds our output
[thinktik@thinkvm03 filebeat-7.10.1-linux-x86_64]$ vim filebeat.yml 



    #=========================== Filebeat inputs =============================
    # Configure Filebeat to read /home/thinktik/ELKF_TEST.log
    filebeat.inputs:
    
    # Each - is an input. Most options can be set at the input level, so
    # you can use different inputs for various configurations.
    # Below are the input specific configurations.
    
    - type: log
    
      # Change to true to enable this input configuration.
      # Set this to true to enable the input
      enabled: true
    
      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        # Point this at the log file(s) to ship
        - /home/thinktik/ELKF_TEST.log
        #- /var/log/*.log
        #- c:\programdata\elasticsearch\logs\*

    #-------------------------- Elasticsearch output ------------------------------
    # Output straight to Elasticsearch; we don't recommend this here
    #output.elasticsearch:
      # Array of hosts to connect to.
      # hosts: ["192.168.50.207:9200"]
    
      # Enabled ilm (beta) to use index lifecycle management instead daily indices.
      #ilm.enabled: false
    
      # Optional protocol and basic auth credentials.
      #protocol: "https"
      #username: "elastic"
      #password: "changeme"
    
    #----------------------------- Logstash output --------------------------------
    # This is the Logstash output we do recommend; just set the right address
    output.logstash:
      # The Logstash hosts
      hosts: ["192.168.50.67:5044"]
    
      # Optional SSL. By default is off.
      # List of root certificates for HTTPS server verifications
      #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
    
      # Certificate for SSL client authentication
      #ssl.certificate: "/etc/pki/client/cert.pem"
    
      # Client Certificate Key

# Save, then start
[thinktik@thinkvm03 filebeat-7.10.1-linux-x86_64]$ ./filebeat

# Afterwards, write some lines into /home/thinktik/ELKF_TEST.log and wait for them to show up in Kibana
# To recap, the data flow is: Filebeat -> Logstash -> ES -> Kibana
# If nothing went wrong, we can check the result in Kibana
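Feeding the pipeline is then just a matter of appending lines to the file Filebeat watches. A sketch: the real target is /home/thinktik/ELKF_TEST.log from the config above, while the demo below writes to a scratch file so it can run anywhere:

```shell
# Against the real setup, append to the watched file:
#   echo "hello elkf $(date -u +%FT%TZ)" >> /home/thinktik/ELKF_TEST.log
# Demo against a scratch file:
log=$(mktemp)
for i in 1 2 3; do
  echo "test line $i $(date -u +%FT%TZ)" >> "$log"
done
wc -l < "$log"
```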

Checking the ELKF result:
(screenshot: the test log lines appearing in Kibana)
Here we can see the log is read correctly.

A few more detail settings:
(screenshots: Kibana index pattern and display settings)

Now run a search, looking for logs with message=gf32r32qe:
(screenshot: the search results)
The match is correct.

Original article link: Linux ELK 安装
