Log output plugin configuration
- elasticsearch (an example env block is sketched after this list)
ELASTICSEARCH_HOST "(required) elasticsearch host"
ELASTICSEARCH_PORT "(required) elasticsearch port"
ELASTICSEARCH_USER "(optional) elasticsearch authentication username"
ELASTICSEARCH_PASSWORD "(optional) elasticsearch authentication password"
ELASTICSEARCH_PATH "(optional) elasticsearch http path prefix"
ELASTICSEARCH_SCHEME "(optional) elasticsearch scheme, default is http"
- logstash
LOGSTASH_HOST "(required) logstash host"
LOGSTASH_PORT "(required) logstash port"
- file
FILE_PATH "(required) output log file directory"
FILE_NAME "(optional) the name of the generated files, default is filebeat"
FILE_ROTATE_SIZE "(optional) the maximum size in kilobytes of each file. When this size is reached, the files are rotated. The default value is 10240 KB"
FILE_NUMBER_OF_FILES "(optional) the maximum number of files to save under path. When this number of files is reached, the oldest file is deleted, and the rest of the files are shifted from last to first. The default is 7 files"
FILE_PERMISSIONS "(optional) permissions to use for file creation, default is 0600"
- redis
REDIS_HOST "(required) redis host"
REDIS_PORT "(required) redis port"
REDIS_PASSWORD "(optional) redis authentication password"
REDIS_DATATYPE "(optional) redis data type to use for publishing events"
REDIS_TIMEOUT "(optional) redis connection timeout in seconds, default is 5"
- kafka
KAFKA_BROKERS "(required) kafka brokers"
KAFKA_VERSION "(optional) kafka version"
KAFKA_USERNAME "(optional) kafka username"
KAFKA_PASSWORD "(optional) kafka password"
KAFKA_PARTITION_KEY "(optional) kafka partition key"
KAFKA_PARTITION "(optional) kafka partition strategy"
KAFKA_CLIENT_ID "(optional) the configurable ClientID used for logging, debugging, and auditing purposes. The default is beats"
KAFKA_BROKER_TIMEOUT "(optional) the number of seconds to wait for responses from the Kafka brokers before timing out. The default is 30 (seconds)."
KAFKA_KEEP_ALIVE "(optional) keep-alive period for an active network connection. If 0s, keep-alives are disabled, default is 0 seconds"
KAFKA_REQUIRE_ACKS "(optional) ACK reliability level required from broker. 0=no response, 1=wait for local commit, -1=wait for all replicas to commit. The default is 1"
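For example, switching log-pilot to the elasticsearch output is mostly a matter of replacing the env block of the log-pilot container in the DaemonSet below. This is only a minimal sketch; the host, port, and credentials are placeholders, not values from this setup:

        env:
        - name: "LOGGING_OUTPUT"
          value: "elasticsearch"
        - name: "ELASTICSEARCH_HOST"        # placeholder address
          value: "192.168.1.10"
        - name: "ELASTICSEARCH_PORT"
          value: "9200"
        - name: "ELASTICSEARCH_USER"        # optional, only if ES requires auth
          value: "elastic"
        - name: "ELASTICSEARCH_PASSWORD"    # optional, placeholder value
          value: "changeme"
        - name: "NODE_NAME"
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName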
log-pilot.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-pilot
  labels:
    k8s-app: log-pilot
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: log-pilot
  template:
    metadata:
      labels:
        app: log-pilot
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: log-pilot
        image: registry.cn-hangzhou.aliyuncs.com/acs/log-pilot:0.9.7-filebeat
        env:
        - name: "LOGGING_OUTPUT"
          value: "kafka"
        - name: "KAFKA_BROKERS"
          value: "10.23.140.95:9092"
        - name: "NODE_NAME"
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: sock
          mountPath: /var/run/docker.sock
        - name: logs
          mountPath: /var/log/filebeat
        - name: state
          mountPath: /var/lib/filebeat
        - name: root
          mountPath: /host
          readOnly: true
        - name: localtime
          mountPath: /etc/localtime
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
      terminationGracePeriodSeconds: 30
      volumes:
      - name: sock
        hostPath:
          path: /var/run/docker.sock
      - name: logs
        hostPath:
          path: /var/log/filebeat
      - name: state
        hostPath:
          path: /var/lib/filebeat
      - name: root
        hostPath:
          path: /
      - name: localtime
        hostPath:
          path: /etc/localtime
Start it up:
kubectl apply -f log-pilot.yaml
kubectl get pod -o wide | grep log-pilot
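Optionally, you can also wait for the DaemonSet rollout to complete before moving on; this is a generic kubectl check, not something specific to log-pilot:

kubectl rollout status daemonset/log-pilot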
Next, create a test nginx Deployment so there are some nginx logs to collect.
nginx-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  selector:
    matchLabels:
      app: nginx-test
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      nodeName: 10.23.190.12
      containers:
      - name: nginx
        image: nginx:1.21-alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: aliyun_logs_nginxtest   ### Pitfall: the env name must start with aliyun_logs (a file-path variant is sketched after this manifest)
          value: "stdout"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  selector:
    app: nginx-test
  ports:
  - port: 80
    targetPort: 80
    nodePort: 38888
  type: NodePort
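Besides stdout, log-pilot's declarative convention also lets the value be a file path or glob inside the container, for applications that write to log files instead of stdout. A hedged sketch of what that env block might look like; the name "nginxaccess" and the path are hypothetical, not taken from this setup:

        env:
        # collect the container's stdout under the name/topic "nginxtest" (as above)
        - name: aliyun_logs_nginxtest
          value: "stdout"
        # hypothetical: also collect files matching a glob inside the container
        - name: aliyun_logs_nginxaccess
          value: "/var/log/app/*.log"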
Check which worker node the pod landed on, then hit port 38888 on that node a few times to generate some log lines (see the curl example below).
kubectl get pod -o wide | grep nginx-test
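For example, since nginx-test.yaml pins the pod to node 10.23.190.12, a few requests like these will produce access (and error) log lines:

curl http://10.23.190.12:38888/
curl http://10.23.190.12:38888/missing-page   # a 404 gives one more distinct log line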
kubectl get pod -o wide | grep log-pilot # see how many log-pilot pods there are
kubectl logs log-pilot-4qj57 # then pick one and check its logs
At this point the logs no longer show "has not log config, skip". Previously, because the aliyun_logs_ env var had not been set, log-pilot could not tell which pods it should collect from, so it kept skipping them.
time="2022-12-08T19:51:58+08:00" level=info msg="logs: 928fa2dbe79d7617cda619f51c556c91745d3202b8d590395424e920d276fa90 = &{nginxtest /host/data/kube/docker/containers/928fa2dbe79d7617cda619f51c556c91745d3202b8d590395424e920d276fa90 nonex map[time_format:%Y-%m-%dT%H:%M:%S.%NZ] 928fa2dbe79d7617cda619f51c556c91745d3202b8d590395424e920d276fa90-json.log* map[index:nginxtest topic:nginxtest] false true}"
time="2022-12-08T19:51:58+08:00" level=info msg="Reload filebeat"
time="2022-12-08T19:51:58+08:00" level=info msg="Start reloading"
time="2022-12-08T19:51:58+08:00" level=debug msg="do not need to reload filebeat"
- The output above shows that logs are now being collected normally, and kafka-ui confirms that the topic was created successfully (you can also verify this from the command line, as sketched below). What remains is the usual logstash -> es pipeline.
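If kafka-ui is not handy, one way to double-check is with Kafka's own CLI tools from any machine that can reach the broker configured in log-pilot.yaml (assuming a Kafka version whose tools support --bootstrap-server):

# confirm the "nginxtest" topic exists
kafka-topics.sh --bootstrap-server 10.23.140.95:9092 --list
# read a few of the collected events
kafka-console-consumer.sh --bootstrap-server 10.23.140.95:9092 --topic nginxtest --from-beginning --max-messages 5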
log-pilot: a handy log-collection tool for Kubernetes
Official references:
GitHub repo: https://github.com/AliyunContainerService/log-pilot
log-pilot official introduction: https://yq.aliyun.com/articles/674327
log-pilot official setup guide: https://yq.aliyun.com/articles/674361?spm=a2c4e.11153940.0.0.21ae21c3mTKwWS