1. Background Introduction
Log management is one of the basic capabilities of any project. Different users and scenarios place specific requirements on logs, so different strategies are needed for log collection and management; in a distributed project, the design of the log system becomes even more complicated.
- Log types: business operations, information printing, request links;
- Role requirements: R&D side, user side, service level, system level;
Users and their needs:
- Client side: logs for the creation, deletion, and modification of core data, and for business operations;
- R&D side: log collection and management strategy, abnormal-log monitoring;
- Service level: printing of key logs, problem discovery and troubleshooting;
- System level: link generation and monitoring systems in distributed projects;
In different scenarios, different technical means are needed to implement log collection and management, such as log printing, operation records, and the ELK system; take care that log management itself never interrupts the program abnormally.
The more complex the system design and business scenarios, the more they rely on log output. In a large-scale architecture, an independent log platform is usually built to provide a complete solution for log collection, storage, and analysis.
2. Slf4j components
1. Facade pattern
Logging components follow the facade design pattern. Slf4j, as the facade of the log system, defines the logging standard, while the concrete logging capability is implemented by each sub-module; Slf4j specifies the loading method and functional interface of the log object and provides log management functions to the client.
private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(Impl.class);
It is generally forbidden to use the APIs of concrete implementation components such as Logback and Log4j directly, both to avoid unnecessary trouble when replacing components and to keep log maintenance unified.
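As a quick illustration, here is a minimal sketch of coding only against the facade (the class and method names are hypothetical); the parameterized message style avoids string concatenation when the level is disabled, and swapping Logback for another binding requires no code change:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    // Logger obtained through the SLF4J facade only; the concrete
    // binding (Logback, Log4j, ...) is resolved at runtime.
    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    public void createOrder(String orderId) {
        // Placeholders are filled only if INFO is enabled
        logger.info("order created, orderId={}", orderId);
        try {
            // ... business logic ...
        } catch (Exception e) {
            logger.error("order creation failed, orderId={}", orderId, e);
        }
    }
}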
2. SPI interface
Looking at the interaction between the Slf4j and Logback components: when logging, the entry point is the Slf4j interface, which identifies and loads the concrete implementation in Logback; the interface specification defined through SPI is then fulfilled by the third-party (external) component.
The SPI above serves as the connection point between the two sets of components. The loading process can be roughly traced through the source code of LoggerFactory:
public final class org.slf4j.LoggerFactory {
    private final static void performInitialization() {
        bind();
    }
    private final static void bind() {
        try {
            StaticLoggerBinder.getSingleton();
        } catch (NoClassDefFoundError ncde) {
            String msg = ncde.getMessage();
            if (messageContainsOrgSlf4jImplStaticLoggerBinder(msg)) {
                Util.report("Failed to load class \"org.slf4j.impl.StaticLoggerBinder\".");
            }
        }
    }
}
Only a few schematic lines of source are shown here. When LoggerFactory performs the initial binding, if no concrete log implementation component is found, the corresponding exception is reported and the error message is printed via System.err.
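The discovery mechanism can be illustrated with the general Java SPI facility; this is a sketch using a hypothetical LogProvider interface, not SLF4J's actual types (SLF4J 1.x relies on the StaticLoggerBinder class-loading convention shown above, while SLF4J 2.x moved to ServiceLoader-based discovery):

import java.util.ServiceLoader;

// Hypothetical SPI contract; implementations live in third-party jars
// and are registered in META-INF/services/LogProvider.
interface LogProvider {
    void log(String message);
}

public class SpiDemo {
    public static void main(String[] args) {
        // ServiceLoader scans the classpath and instantiates each
        // registered implementation of the interface.
        ServiceLoader<LogProvider> loader = ServiceLoader.load(LogProvider.class);
        for (LogProvider provider : loader) {
            provider.log("provider found: " + provider.getClass().getName());
        }
        // An empty iteration means no implementation is bound -- the
        // analogue of SLF4J's "Failed to load class" report above.
    }
}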
3. Custom Components
1. Functional packaging
For logging (and other) common functions, an independent code package is usually encapsulated in the project as a shared dependency, managed and maintained in a unified way. For custom log encapsulation you can refer to earlier documents; the core points are usually the following:
- Starter loading: the package is configured as a starter component so that it can be scanned and loaded by the framework;
- AOP aspect programming: log annotations are usually added to the relevant methods to record actions automatically;
- Annotation: defines the core parameters and processing logic that need to be marked for logging (a minimal sketch follows below);
As for how to assemble log content, adapt it to business semantics, and handle follow-up management, you can design corresponding strategies for specific scenarios: how logs are stored, whether they are analyzed in real time, whether they are written asynchronously, and so on.
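As a concrete starting point, here is a minimal sketch of such a log annotation; the name @OptLog and its attributes are hypothetical, not taken from any particular library:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker annotation: the AOP aspect reads these
// attributes to assemble the log record.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface OptLog {
    // Business action description, e.g. "update user"
    String action();
    // SpEL expression extracting attributes from the method argument
    String spel() default "";
    // Whether the record is written asynchronously
    boolean async() default false;
}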
2. Object parsing
Custom annotations involve the problem of object parsing: the attribute to be extracted from an object is named in the annotation, and its value is spliced into the log content, which improves the semantic readability of the business log.
import org.springframework.expression.Expression;
import org.springframework.expression.spel.standard.SpelExpressionParser;

import lombok.AllArgsConstructor;
import lombok.Data;

import java.util.ArrayList;
import java.util.HashMap;

public class Test {
    public static void main(String[] args) {
        // Map collection
        HashMap<String,Object> infoMap = new HashMap<>();
        infoMap.put("info","Map description");
        // List collection
        ArrayList<Object> arrayList = new ArrayList<>();
        arrayList.add("List-00");
        arrayList.add("List-01");
        // User objects
        People oldUser = new People("Wang",infoMap,arrayList);
        People newUser = new People("LiSi",infoMap,arrayList);
        // Wrapper object
        WrapObj wrapObj = new WrapObj("WrapObject",oldUser,newUser);
        // Parse object attributes with SpEL
        SpelExpressionParser parser = new SpelExpressionParser();
        // objName => WrapObject
        Expression objNameExp = parser.parseExpression("#root.objName");
        System.out.println(objNameExp.getValue(wrapObj));
        // oldUser => People(userName=Wang, ...)
        Expression oldUserExp = parser.parseExpression("#root.oldUser");
        System.out.println(oldUserExp.getValue(wrapObj));
        // newUser.userName => LiSi
        Expression userNameExp = parser.parseExpression("#root.newUser.userName");
        System.out.println(userNameExp.getValue(wrapObj));
        // newUser.hashMap[info] => Map description
        Expression ageMapExp = parser.parseExpression("#root.newUser.hashMap[info]");
        System.out.println(ageMapExp.getValue(wrapObj));
        // oldUser.arrayList[1] => List-01
        Expression arr02Exp = parser.parseExpression("#root.oldUser.arrayList[1]");
        System.out.println(arr02Exp.getValue(wrapObj));
    }
}

@Data
@AllArgsConstructor
class WrapObj {
    private String objName;
    private People oldUser;
    private People newUser;
}

@Data
@AllArgsConstructor
class People {
    private String userName;
    private HashMap<String,Object> hashMap;
    private ArrayList<Object> arrayList;
}
Note that the SpelExpressionParser used above is a native API of the Spring framework. For many problems encountered in business code, it is advisable to look for solutions in the core dependencies (Spring + JDK) first; spending time getting familiar with the overall picture of the system's core components is a great help to development vision and ideas.
3. Pattern design
Here is a more complex solution for custom logging: identify log annotations through AOP, parse the object attributes named in the annotation, construct the corresponding log body, and finally adapt to different business strategies according to the scenario marked by the annotation:
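A minimal sketch of the aspect side, reusing the hypothetical @OptLog annotation from above (the storage strategy is stubbed out with a print statement; a real implementation would dispatch to DB, MQ, or ELK according to the annotation's flags):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class OptLogAspect {

    private final SpelExpressionParser parser = new SpelExpressionParser();

    // Intercept every method carrying @OptLog
    @Around("@annotation(optLog)")
    public Object around(ProceedingJoinPoint joinPoint, OptLog optLog) throws Throwable {
        Object result = joinPoint.proceed();
        // Evaluate the annotation's SpEL expression against the first argument
        Object[] args = joinPoint.getArgs();
        String detail = optLog.spel().isEmpty() ? "" :
                String.valueOf(parser.parseExpression(optLog.spel()).getValue(args[0]));
        // Assemble the log body and hand it to the chosen strategy
        System.out.println(optLog.action() + " : " + detail);
        return result;
    }
}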
The higher the requirement for the versatility of a function, the more abstract the built-in adaptation strategies become during encapsulation. When dealing with complex logical processes, it is necessary to be good at combining different components, which shares the pressure of business support and forms a stable, reliable solution.
4. Distributed links
1. Link identification
In a distributed system built on microservices, handling one request passes through multiple sub-services. If a service throws an exception along the way, it is necessary to locate the request to which the exception belongs, so as to determine the cause, reproduce the problem, and fix it.
This locating action depends on one core identifier: the TraceId. As a request flows through each service, it carries its bound TraceId, which makes it possible to identify which actions in different services were produced by the same request.
The request's link view can be restored through the TraceId and SpanId, and combined with related log printing, abnormal problems can be resolved quickly. In a microservice system, the Sleuth component provides support for this capability.
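Assuming a Spring Cloud project, Sleuth is typically pulled in through its starter (the version is managed by the Spring Cloud BOM):

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

With the starter on the classpath, log output typically gains a [service,TraceId,SpanId,exportable] segment, which is what the MDC integration below relies on.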
The core parameters of the link view can be integrated into the Slf4j component; see org.slf4j.MDC. MDC provides the ability to map and pass parameters around log statements, internally wrapping a Map container to manage them; in the Logback component, StaticMDCBinder provides the binding for this capability, so that log printing can also carry the identifiers of the link view, completing the integration.
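A minimal sketch of MDC usage (the traceId key and its value are illustrative; in a real service a filter or the Sleuth integration would populate it):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcDemo {
    private static final Logger logger = LoggerFactory.getLogger(MdcDemo.class);

    public static void main(String[] args) {
        // Bind the identifier to the current thread's context map;
        // %X{traceId} in the Logback pattern prints it on every line.
        MDC.put("traceId", "3f2504e0-4f89-11d3-9a0c-0305e82c3301");
        try {
            logger.info("request received");
            logger.info("request processed");
        } finally {
            // Always clear the context to avoid leaking values
            // across pooled threads.
            MDC.clear();
        }
    }
}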
2. ELK system
The link view generates a very large volume of logs, so how to manage, quickly query, and use these log documents is also a key issue. A very common solution is the ELK system, which has since evolved into the Elastic Stack product line.
- Kibana: You can use graphs and charts to visualize data in Elasticsearch;
- Elasticsearch: provides data storage, search and analysis engine capabilities;
- Logstash: a data processing pipeline capable of collecting, transforming, and pushing data from multiple sources at the same time;
Logstash provides log collection and transmission capabilities, Elasticsearch stores a large number of log records in JSON format, and Kibana can visualize data.
3. Service and configuration
Configuration dependency: you need to configure in the service the Logstash address and port (i.e. the log transmission address) and the service name;
spring:
  application:
    name: app_serve
logstash:
  destination:
    uri: Logstash-address
    port: Logstash-port
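Note that the LogstashTcpSocketAppender used in the transmission step below ships with the logstash-logback-encoder library, so the service also needs that dependency (the version shown is illustrative):

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.6</version>
</dependency>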
Configuration reading: the core parameters above are loaded in the Logback configuration, so that each parameter can be used via the value of name in the configuration context;
<springProperty scope="context" name="APP_NAME" source="spring.application.name" defaultValue="butte_app" />
<springProperty scope="context" name="DES_URI" source="logstash.destination.uri" />
<springProperty scope="context" name="DES_PORT" source="logstash.destination.port" />
Log transmission: configure the transmission content accordingly, specifying the Logstash service address, the encoding, the core parameters, and so on;
<appender name="LogStash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<!-- 日志传输地址 -->
<destination>${DES_URI:- }:${DES_PORT:- }</destination>
<!-- 日志传输编码 -->
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<timestamp>
<timeZone>UTC</timeZone>
</timestamp>
<!-- 日志传输参数 -->
<pattern>
<pattern>
{
"severity": "%level",
"service": "${APP_NAME:-}",
"trace": "%X{X-B3-TraceId:-}",
"span": "%X{X-B3-SpanId:-}",
"exportable": "%X{X-Span-Export:-}",
"pid": "${PID:-}",
"thread": "%thread",
"class": "%logger{40}",
"rest": "%message"
}
</pattern>
</pattern>
</providers>
</encoder>
</appender>
Output format: you can also manage the output of the log file or console through the log format setting;
<pattern>%d{yyyy-MM-dd HH:mm:ss} %contextName [%thread] %-5level %logger{100} - %msg %n</pattern>
For other Logback configuration options, such as output location, level, and data transmission method, refer to the official documentation for continuous optimization.
4. Data channel
Let's look at how the data is transferred to ES after it reaches the Logstash service; the corresponding transfer configuration is required here. Note that the same version is recommended for Logstash and ES; version 6.8.6 is used in this case.
Configuration file: logstash-butte.conf
input {
  tcp {
    host => "192.168.37.139"
    port => "5044"
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "log-%{+YYYY.MM.dd}"
  }
}
- Input configuration: specifies the host and port that Logstash listens on, and sets the data format to JSON;
- Output configuration: specifies the ES address for log data output, and creates the index by day;
Start the Logstash service:
/opt/logstash-6.8.6/bin/logstash -f /opt/logstash-6.8.6/config/logstash-butte.conf
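To verify the pipeline, one quick check (as a sketch, against the ES address used above) is whether the daily index has appeared and is receiving documents:

# List indices; a healthy pipeline shows entries like log-2021.01.15
curl "http://localhost:9200/_cat/indices?v"

# Spot-check the latest documents in the daily index
curl "http://localhost:9200/log-*/_search?size=1&pretty"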
This completes the ELK log management link: with the Kibana tool you can view log records and find the link view for a given TraceId.
5. Reference source code
Application repository:
https://gitee.com/cicadasmile/butte-flyer-parent
Component packaging:
https://gitee.com/cicadasmile/butte-frame-parent