Install
docker pull prom/prometheus
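Optionally, you can confirm the image is available locally before continuing (a quick sanity check, not required):

# List the locally pulled Prometheus image and its tag
docker images prom/prometheus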
Configuration file (saved as /home/codecraft/prometheus/prometheus.yml and mounted into the container below)
# my global config
global:
  scrape_interval: 10s     # Scrape targets every 10 seconds (the default is 15s).
  evaluation_interval: 10s # Evaluate rules every 10 seconds (the default is 15s).
  # scrape_timeout is set to the global default (10s).

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration. The stock example scrapes Prometheus itself;
# here it points at the etcd cluster instead.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # These match the global defaults above; set them here only if you need per-job overrides.
    scrape_interval: 10s
    scrape_timeout: 10s
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    # All etcd nodes to monitor are listed here.
    # (Older Prometheus releases called this section `target_groups`.)
    static_configs:
      - targets: ['10.2.122.70:5001', '10.2.122.71:5001', '10.2.122.72:5001']
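Before starting the container, the configuration can be validated. A minimal sketch, assuming a Prometheus 2.x image where the promtool binary ships alongside the server (older releases used `promtool check-config` instead):

# Validate prometheus.yml with the promtool bundled in the image
docker run --rm \
  -v /home/codecraft/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
  --entrypoint promtool \
  prom/prometheus check config /etc/prometheus/prometheus.yml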
Run
Note that the Prometheus web UI listens on port 9090 inside the container; the command below maps it to host port 5002.
docker run -p 5002:9090 -v /home/codecraft/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
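Once the container is up, a quick way to confirm scraping works is to query the server directly (curl is assumed to be available; the /api/v1/targets endpoint exists on recent Prometheus releases):

# Prometheus' own metrics endpoint on the mapped host port
curl http://10.2.122.70:5002/metrics
# Scrape-target status as seen by Prometheus
curl http://10.2.122.70:5002/api/v1/targets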
Access
http://10.2.122.70:5002
Prometheus automatically pulls and aggregates the request metrics from every node in the cluster.
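Aggregated queries can be issued through the HTTP API as well as the web UI. The sketch below uses the built-in `up` metric, so it works regardless of which etcd metrics are exposed:

# Count how many of the configured targets are currently up
curl 'http://10.2.122.70:5002/api/v1/query?query=sum(up)'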