Take a look at the following figure: when a client initiates an HTTP request, what is the processing flow on the server side?
In simple terms, it can be divided into the following steps:
- Establish a network connection based on the TCP protocol.
- The client starts transmitting the request data to the server.
- The server receives and parses the data, then processes the logic of the request.
- After the server finishes processing, it returns the result to the client.
This process involves network IO communication. In the traditional BIO model, the client initiates a data read request to the server and stays blocked until the server returns the data; only after the data comes back is the session complete. This is called synchronous blocking IO. In the BIO model, the only way to achieve concurrency is the multi-threaded model, that is, one thread per request, which prevents a single client from occupying the server connection and keeping the number of connections from growing.
Synchronous blocking IO has two main blocking points:
- Blocking while the server accepts a client connection.
- Blocking during IO communication between the client and the server while the data is not yet ready (both blocking points are shown in the sketch below).
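The two blocking points are easy to see in a minimal BIO sketch (the class name and port are illustrative):

import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BioServerExample {
    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(8080);
        while (true) {
            Socket socket = serverSocket.accept();   // blocking point 1: wait for a client connection
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[1024];
            int len = in.read(buf);                  // blocking point 2: wait until the client data arrives
            if (len > 0) {
                System.out.println(new String(buf, 0, len));
            }
            socket.close();
        }
    }
}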
This traditional BIO model causes a very serious problem. As shown in the figure below, if N clients initiate requests at the same time, the BIO server can only process one request at a time, so client requests are queued and a user may wait a very long time for a response. In other words, the server has no concurrent processing capability, which is obviously unacceptable.
So, how should the server be optimized?
Non-blocking IO
The previous analysis shows that while the server is processing one request it is blocked and cannot handle subsequent requests. Can those blocking points be made non-blocking? That is exactly what non-blocking IO (NIO) does.
Non-blocking IO means that when the client initiates a request to the server and the server's data is not ready, the request is not blocked but returns immediately. The catch is that when the data is not ready, the response the client receives is empty, so how does the client eventually get the data?
As shown in the figure, the client can only obtain the result by polling. Compared with BIO, the absence of blocking gives NIO a significant improvement in performance and in the number of connections it can handle.
NIO still has a drawback: the polling produces a lot of empty polls, and every poll is a system call (issuing a kernel instruction to load data from the network-card buffer, switching from user space to kernel space). As the number of connections grows, this becomes a performance problem. A minimal polling sketch follows.
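A minimal client-side sketch of this polling (host, port and class name are illustrative): configureBlocking(false) makes read() return immediately with 0 when nothing has arrived, so the caller has to keep polling.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NioPollingClientExample {
    public static void main(String[] args) throws Exception {
        SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 8080));
        channel.configureBlocking(false);            // switch to non-blocking mode after connecting
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        int read;
        // empty polling: every call is a system call, even when no data is ready
        while ((read = channel.read(buffer)) == 0) {
            // nothing to read yet, try again
        }
        if (read > 0) {
            buffer.flip();
            System.out.println(new String(buffer.array(), 0, buffer.limit()));
        }
        channel.close();
    }
}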
Multiplexing mechanism
The essence of I/O multiplexing is a mechanism (the system kernel buffers the I/O data) that allows a single process to monitor multiple file descriptors; as soon as any descriptor becomes ready (usually readable or writable), the program is notified to perform the corresponding read or write operation.
What is an fd? In Linux, the kernel treats every external device as a file. Reading from or writing to a file goes through system calls provided by the kernel and returns an fd (file descriptor); reading from or writing to a socket likewise has its own file descriptor, called a socketfd.
Common IO multiplexing mechanisms are select, poll and epoll, all of which are provided by the Linux API, so let's focus on the select and epoll models.
select: a process can pass one or more fds to the select system call and block in select, so select can detect whether any of multiple fds is ready. This model has two disadvantages:
- Because it monitors many file descriptors at once (say 1000), when one fd becomes ready the process still has to scan all fds linearly to find it; the more fds are monitored, the larger the overhead.
- The number of fds a single process can pass to select is limited, 1024 by default, which is far too few for servers that need to support tens of thousands of TCP connections on a single machine.
epoll: Linux also provides the epoll system calls. epoll is event-driven rather than based on sequential scanning, so its performance is higher. When one of the monitored fds becomes ready, the kernel tells the process exactly which fd is ready, so the process only needs to read from that fd. In addition, the number of fds epoll can support is bounded only by the operating system's maximum number of file handles, which is far more than 1024.
[Because epoll can tell the application process which fd is readable through events, this kind of IO is also called asynchronous non-blocking. Strictly speaking it is pseudo-asynchronous, because the data still has to be copied synchronously from kernel space into user space; with true asynchronous non-blocking IO the data would already be fully prepared and the application would only need to read it from user space.]
The advantage of I/O multiplexing is that multiple I/O blocking points are multiplexed onto a single select blocking point, so the system can handle multiple client requests in a single thread. Its biggest advantage is the small system overhead: there is no need to create extra processes or threads, which reduces resource consumption. The overall idea is shown in Figure 2-3.
After a client sends a request and while it is still transmitting data, the server registers the request with the Selector (multiplexer) so that it does not block on reading that client's data. The server only needs one thread that blocks in selector.select(), polling the multiplexer for ready channels; once a client has finished transmitting its data, select() returns the ready channel and the server performs the corresponding processing. A minimal sketch with the JDK Selector follows.
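A minimal multiplexing sketch using the JDK Selector (port and class name are illustrative, error handling omitted): one thread blocks in selector.select() and only wakes up when some channel is ready.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MultiplexingServerExample {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(8080));
        server.register(selector, SelectionKey.OP_ACCEPT);
        while (true) {
            selector.select();                        // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {             // a new connection is ready to be accepted
                    SocketChannel client = ((ServerSocketChannel) key.channel()).accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {        // data from this client is ready to be read
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(1024);
                    int len = client.read(buffer);
                    if (len > 0) {
                        System.out.println(new String(buffer.array(), 0, len));
                    } else if (len < 0) {
                        client.close();
                    }
                }
            }
        }
    }
}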
Asynchronous IO
The biggest difference between asynchronous IO and the multiplexing mechanism is that when the data is ready, the application no longer needs to issue a kernel instruction to read the data from kernel space; the system asynchronously copies the data directly into user space, and the application only needs to use it.
<center>Figure 2-4 Asynchronous IO</center>
In Java, we can use the NIO API to implement the multiplexing mechanism and achieve pseudo-asynchronous IO. The earlier article on the evolution of network communication models demonstrated the multiplexing mechanism with the Java API, and that code turned out to be not only verbose but also troublesome to use.
So Netty appeared. Netty's I/O model is implemented on top of non-blocking IO, and under the hood it relies on the multiplexer, the Selector, of the JDK NIO framework.
A Selector can poll multiple Channels at the same time; with the epoll mode, a single thread polling the Selector can serve thousands of client connections.
Reactor model
http://gee.cs.oswego.edu/dl/cpjslides/nio.pdf
Now that NIO multiplexing is clear, it is worth introducing the Reactor high-performance I/O design pattern built on it. Reactor is essentially a high-performance IO design pattern based on the NIO multiplexing mechanism. Its core idea is to separate the response to IO events from the business processing: one or more threads handle the IO events and then dispatch the ready events to business handler threads for asynchronous, non-blocking processing, as shown in Figure 2-5.
The Reactor model has three important components:
- Reactor: sends I/O events to the corresponding Handler
- Acceptor: handles client connection requests
- Handlers: performs non-blocking read/write
<center>Figure 2-5 Reactor model</center>
This is the most basic single-Reactor single-threaded model (all I/O operations are done by the same thread).
The Reactor thread is responsible for multiplexing the sockets. When a new connection arrives and triggers the connect event, it is handed to the Acceptor for processing; IO read and write events are then handed to a handler.
The Acceptor's main task is to build a handler: after obtaining the SocketChannel for the client, it binds the channel to the corresponding handler. When a read or write event later occurs on that SocketChannel, the Reactor dispatches it and the handler processes it (all IO events are registered with the Selector and dispatched by the Reactor).
The Reactor pattern essentially means I/O multiplexing combined with non-blocking I/O. A compact sketch of the single-threaded variant follows.
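Below is a compact single-Reactor single-thread sketch in plain Java NIO. It is illustrative only, not the course's original example code, but the class names (Reactor, Acceptor, DispatchHandler) follow the names referenced in the next section; the port and buffer size are arbitrary.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class Reactor implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel serverChannel;

    public Reactor(int port) throws IOException {
        selector = Selector.open();
        serverChannel = ServerSocketChannel.open();
        serverChannel.configureBlocking(false);
        serverChannel.bind(new InetSocketAddress(port));
        // attach the Acceptor to the accept key so dispatch() can stay generic
        serverChannel.register(selector, SelectionKey.OP_ACCEPT).attach(new Acceptor());
    }

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    dispatch(it.next());        // the Reactor only dispatches, no business logic here
                    it.remove();
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private void dispatch(SelectionKey key) {
        Runnable handler = (Runnable) key.attachment();
        if (handler != null) {
            handler.run();                      // runs in the Reactor thread, so everything is serial
        }
    }

    // Acceptor: builds a handler for every new connection
    class Acceptor implements Runnable {
        @Override
        public void run() {
            try {
                SocketChannel channel = serverChannel.accept();
                if (channel != null) {
                    new DispatchHandler(selector, channel);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    // DispatchHandler: reads the request and echoes it back, still in the Reactor thread
    static class DispatchHandler implements Runnable {
        private final SocketChannel channel;

        DispatchHandler(Selector selector, SocketChannel channel) throws IOException {
            this.channel = channel;
            channel.configureBlocking(false);
            channel.register(selector, SelectionKey.OP_READ).attach(this);
        }

        @Override
        public void run() {
            try {
                ByteBuffer buffer = ByteBuffer.allocate(1024);
                int len = channel.read(buffer);
                if (len > 0) {
                    buffer.flip();
                    channel.write(buffer);      // echo the message back to the client
                } else if (len < 0) {
                    channel.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        new Thread(new Reactor(8080)).start();
    }
}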
Multi-threaded single Reactor model
The single-threaded Reactor implementation has a shortcoming. As the example code above shows, the handlers execute serially: if one handler blocks its thread, all other business processing is blocked as well, and because the handlers and the Reactor run in the same thread, new requests cannot be accepted either. Let's do a small experiment:
- In the run method of DispatchHandler in the Reactor code above, add a Thread.sleep().
- Open several client windows and connect to the Reactor server. Block one window after it sends a message; when another window then sends a message, it cannot be processed because the previous request is still blocking.
To solve this problem, the usual proposal is to handle the business with multiple threads: add a thread pool where the business logic is executed, so that the Reactor and the handlers run in different threads, as shown in Figure 2-6.
<center>Figure 2-6</center>
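A minimal sketch of that change (class name and pool size are illustrative): the handler invoked by the Reactor immediately hands the work to a thread pool and returns, so a slow handler no longer blocks the Reactor thread. A real implementation would also adjust the SelectionKey's interest ops while the worker is busy to avoid dispatching the same ready event repeatedly.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PooledDispatchHandler implements Runnable {
    private static final ExecutorService workerPool = Executors.newFixedThreadPool(8);
    private final SocketChannel channel;

    PooledDispatchHandler(SocketChannel channel) {
        this.channel = channel;
    }

    @Override
    public void run() {
        // called by the Reactor thread: hand off the work and return immediately
        workerPool.execute(this::process);
    }

    private void process() {
        try {
            ByteBuffer buffer = ByteBuffer.allocate(1024);
            int len = channel.read(buffer);
            if (len > 0) {
                buffer.flip();
                // slow business logic can now run here without blocking the Reactor
                channel.write(buffer);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}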
Multi-threaded multi-reactor model
In the multi-threaded single Reactor model, all I/O operations are still done by one Reactor running in a single thread, which has to handle accept(), read(), write() and connect() operations. For low-traffic scenarios this has little impact, but for application scenarios with high load, heavy concurrency or large data volumes it easily becomes the bottleneck, mainly for the following reasons:
- A single NIO thread handling hundreds or thousands of connections at the same time cannot keep up: even if the NIO thread's CPU load reaches 100%, it cannot satisfy the reading and sending of massive messages.
- Once the NIO thread is overloaded, processing slows down, causing large numbers of client connections to time out. Timeouts often lead to retransmissions, which add even more load to the NIO thread, eventually producing a large backlog of messages and processing timeouts and making the NIO thread the performance bottleneck of the system.
Therefore we can optimize further by introducing the multi-Reactor multi-threaded model, as shown in Figure 2-7: the Main Reactor is responsible for receiving client connection requests and passes each accepted connection to a SubReactor (there can be more than one SubReactor), and the concrete business IO processing is completed by the SubReactor.
The multiple-Reactors pattern is usually equivalent to the Master-Workers pattern. Nginx and Memcached, for example, use this kind of multi-threaded model; although implementation details differ slightly between projects, the overall pattern is the same.
<center>Figure 2-7</center>
- Acceptor: the request receiver. In practice its role is similar to a server: it is not actually responsible for establishing the connection, it only delegates the request to the Main Reactor thread pool, acting as a forwarder.
- Main Reactor: the main Reactor thread group, mainly responsible for connection events; IO read and write requests are forwarded to the SubReactor thread pool.
- Sub Reactor: the Main Reactor usually listens for client connections and forwards the channel's reads and writes to one thread in the Sub Reactor thread pool (load balancing), which is then responsible for the data reads and writes. In NIO terms this means the channel's read (OP_READ) and write (OP_WRITE) events are registered there. A plain-NIO sketch of the main/sub split follows this list.
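An illustrative plain-NIO sketch of the main/sub split (port, number of sub Reactors and class names are arbitrary): the main Reactor only accepts connections and hands each accepted channel to one of the sub Reactors round-robin, and every sub Reactor runs its own Selector in its own thread for the read events.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.concurrent.atomic.AtomicInteger;

public class MainSubReactorExample {

    public static void main(String[] args) throws IOException {
        // start the sub Reactors, each with its own Selector and thread
        SubReactor[] subReactors = new SubReactor[4];
        for (int i = 0; i < subReactors.length; i++) {
            subReactors[i] = new SubReactor();
            new Thread(subReactors[i], "sub-reactor-" + i).start();
        }
        AtomicInteger next = new AtomicInteger();

        // main Reactor: only cares about the accept event
        Selector mainSelector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(8080));
        server.register(mainSelector, SelectionKey.OP_ACCEPT);

        while (true) {
            mainSelector.select();
            for (SelectionKey key : mainSelector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    if (client != null) {
                        // load balancing: pick a sub Reactor for this channel round-robin
                        subReactors[next.getAndIncrement() % subReactors.length].register(client);
                    }
                }
            }
            mainSelector.selectedKeys().clear();
        }
    }

    static class SubReactor implements Runnable {
        private final Selector selector = Selector.open();

        SubReactor() throws IOException {
        }

        void register(SocketChannel channel) throws IOException {
            channel.configureBlocking(false);
            selector.wakeup();                       // let select() return so the registration can proceed
            channel.register(selector, SelectionKey.OP_READ);
        }

        @Override
        public void run() {
            try {
                while (true) {
                    selector.select(500);            // use a timeout so pending registrations are picked up
                    for (SelectionKey key : selector.selectedKeys()) {
                        if (key.isReadable()) {
                            SocketChannel channel = (SocketChannel) key.channel();
                            ByteBuffer buffer = ByteBuffer.allocate(1024);
                            int len = channel.read(buffer);
                            if (len > 0) {
                                buffer.flip();
                                channel.write(buffer);   // echo back: this is where business IO happens
                            } else if (len < 0) {
                                channel.close();
                            }
                        }
                    }
                    selector.selectedKeys().clear();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}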
Netty, the high-performance communication framework
There are many network programming frameworks in Java, such as Java NIO, Mina, Netty, Grizzly and so on, but of all the middleware you come into contact with, most use Netty.
The reason is that Netty is currently the most popular high-performance Java network programming framework, widely used in middleware, live streaming, social networking, gaming and other fields. Among well-known open-source middleware, Dubbo, RocketMQ, Elasticsearch, HBase and others are all implemented with Netty.
In actual development, 99% of you attending today's class will probably never use Netty directly for network programming, so why bother covering it? There are several reasons.
Interviews at many large companies touch on the related knowledge points:
- In which aspects is Netty high-performance?
- What are the important components in Netty?
- How are Netty's memory pool and object pool designed?
- Many middleware projects use Netty for network communication, so knowing Netty reduces the difficulty of understanding the network-communication parts when we analyze their source code.
- It rounds out the Java knowledge system and gives as comprehensive an understanding of the technology stack as possible.
Why choose Netty
Netty is essentially a high-performance NIO framework: it is a wrapper built on top of NIO that provides high-performance network IO communication. Since we analyzed network communication in detail earlier, learning Netty should now be easier.
Netty supports all three Reactor models described above, and through Netty's packaged API we can quickly build any of them; that is one reason to choose Netty. In addition, compared with the native NIO API, Netty has the following characteristics:
- Provides an efficient I/O model, threading model and event-processing mechanism
- Provides a very simple and easy-to-use API; compared with NIO, it wraps the basic Channel, Selector, Socket, Buffer and other APIs at a higher level and shields the complexity of NIO
- Provides good support for data protocols and serialization
- Stability: Netty fixes many JDK NIO problems, such as 100% CPU usage caused by select spinning on an empty set, TCP disconnection and reconnection, and keep-alive detection
- Extensibility that stands out among frameworks of its kind, for example a customizable threading model: users can choose the Reactor model via startup parameters, and the extensible event-driven model separates business concerns from framework concerns
Performance optimization: as a network communication framework, Netty has to handle a large number of network requests and therefore inevitably creates and destroys many objects, which is not friendly to the JVM GC. To reduce the pressure on JVM garbage collection, Netty introduces two optimization mechanisms:
- Object pooling and reuse
- Zero-copy technology
Netty's ecological introduction
First we need to understand what functions Netty provides. Figure 2-1 shows the functionality offered across the Netty ecosystem; these functions will be analyzed step by step in the follow-up content.
<center>Figure 2-1 Netty function ecology</center>
Basic use of Netty
It should be noted that the Netty version covered here is 4.x. A while ago Netty released a 5.x version, but it was officially abandoned. The stated reasons: using ForkJoinPool increased complexity without showing a clear performance advantage, and keeping so many branches in sync at the same time was a lot of unnecessary work.
Add jar package dependency
This article uses version 4.1.66:
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.66.Final</version>
</dependency>
Create Netty Server service
In most scenarios we use the master-slave multi-threaded Reactor model: the Boss thread group is the master Reactor and the Worker group is the slave Reactor, and they use different NioEventLoopGroups.
The master Reactor handles Accept and then registers the accepted Channel with the slave Reactor, which is responsible for all I/O events during the Channel's lifetime.
public class NettyBasicServerExample {

    public void bind(int port){
        // We create two EventLoopGroups:
        // the boss group only accepts connections (handles the accept event),
        // the worker group handles all other events, i.e. the sub tasks.
        // Note: the boss group normally needs only one thread; even if more are configured,
        // only one is used, and there is currently no scenario that needs more.
        // The worker thread count should be tuned for the server; the default is twice the number of CPU cores.
        EventLoopGroup bossGroup=new NioEventLoopGroup();
        EventLoopGroup workerGroup=new NioEventLoopGroup();
        try {
            // To start the server we create a ServerBootstrap,
            // in which Netty has already wrapped all the boilerplate NIO code.
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup) // configure the boss and worker thread groups
                // configure the server channel, the equivalent of ServerSocketChannel in NIO
                .channel(NioServerSocketChannel.class)
                // childHandler configures a handler for the worker threads:
                // it initializes the channel, i.e. assigns the handlers that process
                // a client request once it is dispatched to a worker thread
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel socketChannel) throws Exception {
                        socketChannel.pipeline().addLast(new NormalMessageHandler()); // add the concrete IO event handler
                    }
                });
            // NIO is asynchronous and non-blocking by default, so after binding the port
            // sync() blocks until the whole startup process has completed.
            ChannelFuture channelFuture=bootstrap.bind(port).sync();
            System.out.println("Netty Server Started,Listening on :"+port);
            // wait until the server-side listening port is closed
            channelFuture.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            // release the thread resources
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) {
        new NettyBasicServerExample().bind(8080);
    }
}
The above code is explained as follows:
- EventLoopGroup defines a thread group, equivalent to the threads we defined earlier when writing the NIO code. Two groups are defined here: the boss thread group and the worker thread group. The boss threads receive connections and the worker threads handle IO events. The boss group usually needs only one thread; even if more are configured, only one is used, and there is currently no scenario that needs more. The worker thread count should be tuned for the server; if not specified, the default is twice the number of CPU cores.
- ServerBootstrap: to start the server we create a ServerBootstrap, in which Netty encapsulates the NIO template code.
- ChannelOption.SO_BACKLOG: sets the maximum queue length for incoming connection requests waiting to be accepted (the listen backlog).
Set the channel type
The NIO model is the most mature and most widely used model in Netty, so when using Netty we normally use NioServerSocketChannel as the Channel type.
bootstrap.channel(NioServerSocketChannel.class);
In addition to NioServerSocketChannel, Netty also provides:
- EpollServerSocketChannel: the epoll model, supported only on Linux kernel 2.6 and above; it is not supported on Windows or macOS, and configuring Epoll in a Windows environment will raise an error.
- OioServerSocketChannel: used on the server side to accept TCP connections in blocking mode.
- KQueueServerSocketChannel: the kqueue model, an efficient IO multiplexing technique on Unix-like systems. Common IO multiplexing techniques include select, poll, epoll and kqueue; epoll is Linux-specific, while kqueue exists on many UNIX systems. A runtime-selection sketch follows this list.
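A hedged sketch of choosing the transport at runtime (variable names are illustrative): fall back to NIO when the native epoll transport is not available, e.g. on Windows or macOS. The Epoll, EpollEventLoopGroup and EpollServerSocketChannel classes come from the io.netty.channel.epoll package and require the additional native epoll transport dependency.

EventLoopGroup bossGroup = Epoll.isAvailable() ? new EpollEventLoopGroup(1) : new NioEventLoopGroup(1);
EventLoopGroup workerGroup = Epoll.isAvailable() ? new EpollEventLoopGroup() : new NioEventLoopGroup();
bootstrap.group(bossGroup, workerGroup)
         // pick the server channel class that matches the chosen EventLoopGroup
         .channel(Epoll.isAvailable() ? EpollServerSocketChannel.class : NioServerSocketChannel.class);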
Register ChannelHandler
In Netty, multiple ChannelHandlers can be registered through the ChannelPipeline. A handler is what the worker threads execute: when an IO event is ready, the handlers configured here are invoked.
Multiple ChannelHandlers can be registered, each with its own single responsibility: for example a handler for encoding and decoding, a handler for the heartbeat mechanism, a handler for message processing, and so on. This maximizes code reuse.
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel socketChannel) throws Exception {
socketChannel.pipeline().addLast(new NormalMessageHandler());
}
});
The childHandler method of ServerBootstrap registers a ChannelHandler. Here an implementation of ChannelInitializer is configured, and instantiating the ChannelInitializer initializes the Channel.
When an IO event arrives, the data propagates through these handlers in turn. The code above configures a NormalMessageHandler to receive the client's messages and print them out.
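As an illustration of registering several single-responsibility handlers (BusinessHandler is a hypothetical name; IdleStateHandler, StringDecoder and StringEncoder are standard Netty handlers), a pipeline could be assembled like this; note that after the String codecs the business handler receives String messages rather than ByteBuf:

.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline()
          .addLast(new IdleStateHandler(60, 0, 0))        // heartbeat: fires an IdleStateEvent after 60s without reads
          .addLast(new StringDecoder(CharsetUtil.UTF_8))   // decoding: inbound ByteBuf -> String
          .addLast(new StringEncoder(CharsetUtil.UTF_8))   // encoding: outbound String -> ByteBuf
          .addLast(new BusinessHandler());                 // message processing (hypothetical business handler)
    }
});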
Binding port
After Netty's basic configuration is complete, the server is actually started by the bind() method, and sync() blocks until the whole startup process has completed.
ChannelFuture channelFuture=bootstrap.bind(port).sync();
NormalMessageHandler
NormalMessageHandler extends ChannelInboundHandlerAdapter, one of Netty's event handlers. Handlers in Netty are divided into Inbound and Outbound handlers, which will be described in detail later.
public class NormalMessageHandler extends ChannelInboundHandlerAdapter {

    // channelReadComplete is called when the current batch of messages has been fully read;
    // writeAndFlush writes and actually sends the data
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        // Once all messages have been read, flush the response back to the client.
        // Unpooled.EMPTY_BUFFER is an empty message; addListener(ChannelFutureListener.CLOSE)
        // closes the connection after the write completes.
        ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
    }

    // exceptionCaught handles any exception that occurs
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }

    // channelRead defines what to do once a message has been read; here we print it out
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf in=(ByteBuf) msg;
        byte[] req=new byte[in.readableBytes()];
        in.readBytes(req); // read the data into the byte array
        String body=new String(req,"UTF-8");
        System.out.println("message received by the server: "+body);
        // write the response back
        ByteBuf resp=Unpooled.copiedBuffer(("receive message:"+body+"").getBytes());
        ctx.write(resp);
        // ctx.write only puts the message into the outbound buffer; it is not sent
        // until flush actually writes it out to the network
    }
}
The code above shows that we can build an NIO server with very little code. Compared with a server written against the traditional NIO class library, both the amount of code and the development difficulty are greatly reduced.
Netty and NIO api correspondence
- TransportChannel: corresponds to the Channel in NIO
- EventLoop: corresponds to the while loop in NIO
- EventLoopGroup: a group of multiple EventLoops
- ChannelHandler and ChannelPipeline: correspond to the client logic handleRead/handleWrite in NIO (interceptor pattern)
- ByteBuf: corresponds to ByteBuffer in NIO
- Bootstrap and ServerBootstrap: correspond to the creation, configuration and startup of the Selector, ServerSocketChannel, etc. in NIO
Netty's overall working mechanism
Netty's overall working mechanism is as follows. The overall design is the multi-threaded Reactor model we talked about earlier, which separates request monitoring and request processing, and executes specific handlers through multiple threads.
<center>Figure 2-2</center>
Network communication layer
The main responsibility of the network communication layer is to perform network IO operations; it supports link operations for multiple network protocols and I/O models. When network data is read into the kernel buffer, read and write events are triggered, and these events are distributed to the event scheduler for processing.
In Netty, the core components of the network communication layer are the following three:
- Bootstrap: the client-side startup API, used to connect to a remote Netty server; it binds only one EventLoopGroup.
- ServerBootStrap: the server-side listening API, used to listen on a specified port; it binds two EventLoopGroups. The bootstrap components make it very convenient to start a Netty application quickly.
- Channel: the carrier of network communication. The Channels implemented by Netty are based on the JDK NIO Channel but provide a higher level of abstraction, shield the complexity of the underlying Socket, and offer more powerful functionality for the Channel.
Figure 2-3 shows the class-relationship diagram of the common Channel implementations. AbstractChannel is the base class of the whole Channel hierarchy; it derives AbstractNioChannel (non-blocking IO) and AbstractOioChannel (blocking IO), and each subclass represents a different I/O model and protocol type.
<center>Figure 2-3 Class relationship diagram of Channel</center>
As the connection and its data change, the Channel goes through multiple states, such as connection established, connection registered, reading and writing, and connection destroyed. As the state changes, the Channel moves through its life cycle, and each state is bound to a corresponding event callback. The following are the common event callback methods; a minimal handler that logs some of them is sketched after the list.
- channelRegistered: the Channel has been created and registered to an EventLoop
- channelUnregistered: the Channel has been created but not registered, or has been unregistered from the EventLoop
- channelActive: the Channel is ready and can be read from and written to
- channelInactive: the Channel is no longer ready
- channelRead: the Channel can read data from the source
- channelReadComplete: the Channel has finished reading the current data
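A minimal sketch of observing these callbacks in an inbound handler (the class name LifecycleLoggingHandler is illustrative):

public class LifecycleLoggingHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channel registered to an EventLoop");
        super.channelRegistered(ctx);
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channel active: ready to read and write");
        super.channelActive(ctx);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channel inactive");
        super.channelInactive(ctx);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        System.out.println("read of the current batch of data completed");
        super.channelReadComplete(ctx);
    }
}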
To summarize briefly: Bootstrap and ServerBootStrap are responsible for starting the client and the server respectively, and Channel is the carrier of network communication, providing the ability to interact with the underlying Socket.
When the events in a Channel's life cycle change, further processing has to be triggered; this processing is done by Netty's event scheduler.
Event scheduler
The event scheduler aggregates and processes various events through the Reactor thread model and integrates multiple kinds of events (I/O events, signal events) through the Selector main-loop thread. When these events are triggered, the concrete processing is handed to the related Handlers in the service orchestration layer.
The core components of the event scheduler are:
- EventLoopGroup, roughly equivalent to a thread pool
- EventLoop, roughly equivalent to a thread in that pool
EventLoopGroup is essentially a thread pool that is mainly responsible for receiving I/O requests and allocating threads to process them. To better understand the relationship between EventLoopGroup, EventLoop and Channel, look at the process shown in Figure 2-4.
<center>Figure 2-4, the working mechanism of EventLoop</center>
It can be seen from the figure that:
- An EventLoopGroup can contain multiple EventLoops. An EventLoop handles all the I/O events within a Channel's life cycle, such as accept, connect, read and write.
- An EventLoop is bound to exactly one thread at a time, and each EventLoop is responsible for multiple Channels.
- Each time a new Channel is created, the EventLoopGroup selects an EventLoop to bind to it; the Channel can bind and unbind EventLoops multiple times during its life cycle.
Figure 2-5 shows the class-relationship diagram of EventLoopGroup. Netty provides multiple implementations of EventLoop and EventLoopGroup, such as NioEventLoop, EpollEventLoop, NioEventLoopGroup and so on.
As you can see from the figure, EventLoop is a sub-interface of EventLoopGroup: we can treat an EventLoop as an EventLoopGroup that contains only one EventLoop.
<img src="https://mic-blob-bucket.oss-cn-beijing.aliyuncs.com/202111090024225.png" alt="image-20210812221329760" style="zoom:80%;" />
<center>Figure 2-5 EventLoopGroup class relationship diagram</center>
EventLoopGroup is Netty's core processing engine. What does it have to do with the Reactor thread model explained earlier? We can simply regard EventLoopGroup as the concrete implementation of the Reactor thread model in Netty: by configuring different EventLoopGroups, Netty can support the different Reactor models listed below (sketched in code after the list).
- Single-threaded model: the EventLoopGroup contains only one EventLoop, and Boss and Worker use the same EventLoopGroup.
- Multi-threaded model: the EventLoopGroup contains multiple EventLoops, and Boss and Worker use the same EventLoopGroup.
- Master-slave multi-threaded model: the EventLoopGroup contains multiple EventLoops; Boss is the master Reactor and Worker is the slave Reactor, and they use different EventLoopGroups. The master Reactor is responsible for creating the Channel for a new network connection (the connection event); once it has accepted the client connection, it hands the Channel over to the slave Reactor for processing.
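Hedged sketches of the three configurations (the group sizes are illustrative); ServerBootstrap.group has both a single-group and a boss/worker overload:

// Single-threaded model: one EventLoop handles accept and all channel I/O
EventLoopGroup single = new NioEventLoopGroup(1);
ServerBootstrap singleThreaded = new ServerBootstrap().group(single);

// Multi-threaded model: Boss and Worker share one group containing multiple EventLoops
EventLoopGroup shared = new NioEventLoopGroup();
ServerBootstrap multiThreaded = new ServerBootstrap().group(shared);

// Master-slave model: the Boss group accepts connections, the Worker group handles channel I/O
EventLoopGroup boss = new NioEventLoopGroup(1);
EventLoopGroup worker = new NioEventLoopGroup();
ServerBootstrap masterSlave = new ServerBootstrap().group(boss, worker);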
Service Orchestration Layer
The responsibility of the service orchestration layer is to assemble the various services. Simply put, after an I/O event is triggered, a Handler has to process it; the service orchestration layer uses a chain of Handlers to dynamically orchestrate the handling of network events and propagate them in order.
It contains three components
- ChannelPipeline: it links multiple ChannelHandlers together in a doubly linked list. When an I/O event is triggered, the ChannelPipeline calls the assembled ChannelHandlers in turn to process the Channel's data. The ChannelPipeline is thread-safe because every new Channel is bound to a new ChannelPipeline, a ChannelPipeline is associated with one EventLoop, and an EventLoop binds only one thread. Figure 2-6 shows the structure of the ChannelPipeline.
<img src="https://mic-blob-bucket.oss-cn-beijing.aliyuncs.com/202111090024172.png" alt="image-20210812223234507" style="zoom: 50%;" />
<center>Figure 2-6 ChannelPipeline</center>
As can be seen from the figure, the ChannelPipeline contains inbound ChannelInboundHandlers and outbound ChannelOutboundHandlers: the former receive data and the latter write data out, roughly analogous to an InputStream and an OutputStream. For a better understanding, look at Figure 2-7.
<center>Figure 2-7 The relationship between InBound and OutBound</center>
- ChannelHandler: the processor for IO data; after data is received, it is processed by the designated Handler.
- ChannelHandlerContext: the ChannelHandlerContext saves the context information of a ChannelHandler; that is, when an event is triggered, data is passed between multiple handlers through the ChannelHandlerContext. The relationship between ChannelHandler and ChannelHandlerContext is shown in Figure 2-8.
Each ChannelHandler corresponds to its own ChannelHandlerContext, which retains the context information required by the ChannelHandler, and the data transfer between multiple ChannelHandlers is implemented through the ChannelHandlerContext.
<center>Figure 2-8 ChannelHandler and ChannelHandlerContext relationship</center>
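A hedged sketch of that flow (handler names are illustrative): the inbound handler forwards data to the next inbound handler through its ChannelHandlerContext, and writing through the context sends the response out through the outbound handlers.

class InboundA extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        System.out.println("InboundA read: " + msg);
        ctx.fireChannelRead(msg);          // pass the data to the next inbound handler via the context
    }
}

class OutboundA extends ChannelOutboundHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        System.out.println("OutboundA write: " + msg);
        ctx.write(msg, promise);           // pass the data to the next outbound handler via the context
    }
}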
That concludes the introduction to the features and working mechanisms of Netty's core components; they will be analyzed in detail in the follow-up content. As you can see, the layered design of Netty's architecture is very reasonable: it shields the implementation details of the underlying NIO and of the framework layer, so business developers only need to care about orchestrating and implementing the business logic.
Summary of component relationships and principles
Figure 2-9 shows how the key components of Netty cooperate. The specific working mechanism is described as follows.
- When the server starts, it initializes the Boss and Worker thread groups. The Boss thread group listens for network connection events; when a new connection is established, the Boss thread registers the connection's Channel with a Worker thread.
- The Worker thread group assigns an EventLoop to handle the Channel's read and write events; each EventLoop is equivalent to one thread and monitors its event loop through a Selector.
- When a client triggers an I/O event, the server-side EventLoop dispatches the ready Channel to the Pipeline for data processing.
- Once the data reaches the ChannelPipeline, it is processed starting from the first ChannelInboundHandler and passed along the pipeline chain one handler at a time.
- After the server finishes processing, the data has to be written back to the client; the written data propagates through the chain of ChannelOutboundHandlers and finally reaches the client.
<center>Figure 2-9 The working principle of each component of Netty</center>
Detailed introduction of core components in Netty
After gaining a global understanding of Netty in Section 2.5, we will now describe these components in much more detail to deepen your understanding.
Starters: Bootstrap and ServerBootstrap. As the entry points for building Netty clients and servers, they are the first step in writing a Netty network program and let us assemble Netty's core components like building blocks. When building the Netty server side, we need to pay attention to three important steps:
- Configure thread pool
- Channel initialization
- Handler processor construction
Copyright notice: unless otherwise stated, all articles on this blog are licensed under CC BY-NC-SA 4.0. When reprinting, please credit "Mic takes you to learn architecture"!
If this article helps you, please follow and like; your support is the motivation for my continued writing. Follow the WeChat public account of the same name for more technical content!