This article takes a detailed look at Netty's core components.
Bootstrap and ServerBootstrap, the entry points for building Netty clients and servers respectively, are the first step in writing a Netty network program. They let us assemble Netty's core components like building blocks. When building a Netty server, we need to pay attention to three important steps, shown together in the sketch after this list:
- Configure the thread pool
- Initialize the Channel
- Construct the Handler
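To see the three steps together before we dissect each one, here is a minimal, hedged sketch of a Netty server (the port, class name, and empty handler are placeholders for illustration); the rest of this article takes this skeleton apart piece by piece:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyServerSketch {
    public static void main(String[] args) throws InterruptedException {
        // step 1: configure the thread pools (boss accepts connections, worker handles I/O)
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup worker = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, worker)
             // step 2: channel initialization
             .channel(NioServerSocketChannel.class)
             // step 3: handler construction
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter());
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}
```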
The scheduler in detail
Earlier we discussed the Reactor model behind NIO multiplexing. Its main idea is to separate the responsibilities of network connection, event dispatching, and task processing, and to increase throughput by introducing multithreading. There are three Reactor threading models:
- Single-Reactor single-threaded model
- Single-Reactor multi-threaded model
- Multi-Reactor multi-threaded model
All three threading models are easy to implement in Netty, and Netty recommends the master-slave multi-threaded model, which makes it straightforward to handle thousands of client connections. Under heavy concurrent client load, the master-slave model can scale up the number of SubReactor threads and make full use of multi-core CPUs to improve system throughput.
The operating mechanism of the Reactor model is divided into four steps, as shown in Figure 2-10.
- Connection registration: after a Channel is established, it is registered with the Selector in the Reactor thread
- Event polling: the Reactor polls the I/O events of all Channels registered with the Selector
- Event dispatching: ready I/O events are assigned to the appropriate processing threads
- Task processing: the Reactor thread is also responsible for non-I/O tasks in the task queue; each Worker thread takes tasks from its own task queue and executes them asynchronously (see the NIO sketch after Figure 2-10)
<center>Figure 2-10 Reactor workflow</center>
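To make these four steps concrete, here is a minimal single-Reactor sketch in plain Java NIO (the class name and port are illustrative, and the read branch is left as a stub); Netty's EventLoop, discussed next, automates exactly this loop:

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleReactor {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(8080));
        server.register(selector, SelectionKey.OP_ACCEPT); // 1. connection registration
        for (;;) {
            selector.select();                             // 2. event polling
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                  // 3. event dispatching
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // 4. task processing: hand the ready channel to a handler/worker
                }
            }
        }
    }
}
```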
EventLoop event loop
In Netty, the event handler of the Reactor model is implemented by EventLoop. An EventLoop corresponds to one thread and internally maintains a Selector and a taskQueue, which it uses to process network I/O events and internal tasks. Its working principle is shown in Figure 2-11.
<center>Figure 2-11 Principle of NioEventLoop</center>
EventLoop basic application
The following code demonstrates EventLoop, showing Selector registration and ordinary task submission.
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public class EventLoopExample {
    public static void main(String[] args) {
        EventLoopGroup group = new NioEventLoopGroup(2);
        System.out.println(group.next()); // prints the first NioEventLoop
        System.out.println(group.next()); // prints the second NioEventLoop
        System.out.println(group.next()); // only two exist, so it wraps back to the first

        // obtain an event loop (NioEventLoop) and register a channel
        // with its Selector (register() requires a Channel)
        group.next().register(new NioSocketChannel());
        // submit an ordinary task to the event loop's task queue
        group.next().submit(() -> {
            System.out.println(Thread.currentThread().getName() + "-----");
        });
    }
}
The core process of EventLoop
Building on the explanation above, now that we understand EventLoop's working mechanism, let's walk through it with an overall flowchart, as shown in Figure 2-12.
EventLoop is an implementation of the Reactor model's event handler. An EventLoop corresponds to one thread and internally maintains a Selector and a taskQueue to handle I/O events and internal tasks. The share of execution time given to I/O events versus internal tasks is adjusted by ioRatio, which represents the percentage of time spent on I/O. Tasks include ordinary tasks and delayed tasks that have come due; delayed tasks are stored in a priority queue (PriorityQueue). Before executing tasks, the EventLoop reads all due tasks out of the PriorityQueue, adds them to the taskQueue, and then executes all queued tasks uniformly. A small sketch follows Figure 2-12.
<center>Figure 2-12 EventLoop working mechanism</center>
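As a small, hedged illustration (the class name and messages are made up), the following sketch submits an ordinary task and a delayed task to the same NioEventLoop; the delayed task waits in the scheduled-task priority queue until it is due, while the ordinary task goes straight into the taskQueue:

```java
import io.netty.channel.EventLoop;
import io.netty.channel.nio.NioEventLoopGroup;
import java.util.concurrent.TimeUnit;

public class EventLoopTaskExample {
    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        // optionally give I/O processing 70% of each loop iteration
        group.setIoRatio(70);
        EventLoop loop = group.next();
        // ordinary task: appended to the taskQueue and run on the EventLoop thread
        loop.execute(() -> System.out.println("ordinary task on " + Thread.currentThread().getName()));
        // delayed task: held in the scheduled-task PriorityQueue until due,
        // then moved to the taskQueue and executed
        loop.schedule(() -> System.out.println("delayed task"), 1, TimeUnit.SECONDS);
        Thread.sleep(1500); // give the delayed task time to fire before shutting down
        group.shutdownGracefully();
    }
}
```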
How EventLoop implements multiple Reactor models
Single-threaded mode
EventLoopGroup group = new NioEventLoopGroup(1);
ServerBootstrap b = new ServerBootstrap();
b.group(group);
Multi-threaded mode
EventLoopGroup group = new NioEventLoopGroup(); // defaults to 2 × the number of CPU cores
ServerBootstrap b = new ServerBootstrap();
b.group(group);
Multi-threaded master-slave mode
EventLoopGroup boss = new NioEventLoopGroup(1);
EventLoopGroup work = new NioEventLoopGroup();
ServerBootstrap b = new ServerBootstrap();
b.group(boss, work);
EventLoop implementation principle
The EventLoopGroup initialization method, in MultithreadEventExecutorGroup.java, builds an EventExecutor array according to the configured nThreads:
protected MultithreadEventExecutorGroup(int nThreads, Executor executor,
                                        EventExecutorChooserFactory chooserFactory, Object... args) {
    checkPositive(nThreads, "nThreads");
    if (executor == null) {
        executor = new ThreadPerTaskExecutor(newDefaultThreadFactory());
    }
    children = new EventExecutor[nThreads];
    for (int i = 0; i < nThreads; i++) {
        boolean success = false;
        try {
            children[i] = newChild(executor, args);
            success = true;
        } catch (Exception e) {
            // failure handling and cleanup of already-created children omitted
        }
    }
}
Registering a channel with the multiplexer is done by the MultithreadEventLoopGroup.register() method, whose call chain is:
SingleThreadEventLoop -> AbstractUnsafe.register -> AbstractChannel.register0 -> AbstractNioChannel.doRegister()
As you can see, the channel is registered with the unwrappedSelector multiplexer of a particular EventLoop.
protected void doRegister() throws Exception {
    boolean selected = false;
    for (;;) {
        try {
            selectionKey = javaChannel().register(eventLoop().unwrappedSelector(), 0, this);
            return;
        } catch (CancelledKeyException e) {
            // retry logic omitted: select once more, then rethrow if it still fails
        }
    }
}
Event processing is driven by the run method in NioEventLoop, which loops continuously:
protected void run() {
    int selectCnt = 0;
    for (;;) {
        try {
            int strategy;
            try {
                // compute the strategy: decide how to proceed based on whether
                // the task queue currently contains tasks
                strategy = selectStrategy.calculateStrategy(selectNowSupplier, hasTasks());
                switch (strategy) {
                case SelectStrategy.CONTINUE:
                    continue;
                case SelectStrategy.BUSY_WAIT:
                    // fall-through to SELECT since busy-wait is not supported with NIO
                case SelectStrategy.SELECT:
                    long curDeadlineNanos = nextScheduledTaskDeadlineNanos();
                    if (curDeadlineNanos == -1L) {
                        curDeadlineNanos = NONE; // nothing on the calendar
                    }
                    nextWakeupNanos.set(curDeadlineNanos);
                    try {
                        if (!hasTasks()) {
                            // if the task queue is empty, call select to wait for ready events
                            strategy = select(curDeadlineNanos);
                        }
                    } finally {
                        nextWakeupNanos.lazySet(AWAKE);
                    }
                default:
                }
            } catch (IOException e) {
                // exception handling omitted (rebuilds the Selector and retries)
                continue;
            }
            selectCnt++;
            cancelledKeys = 0;
            needsToSelectAgain = false;
            /* ioRatio adjusts the share of time spent on I/O events versus internal
             * tasks: the larger ioRatio is, the larger the share given to I/O */
            final int ioRatio = this.ioRatio;
            boolean ranTasks;
            if (ioRatio == 100) {
                try {
                    if (strategy > 0) {
                        // process I/O events
                        processSelectedKeys();
                    }
                } finally {
                    // make sure the queued tasks run on every iteration
                    ranTasks = runAllTasks();
                }
            } else if (strategy > 0) {
                final long ioStartTime = System.nanoTime();
                try {
                    processSelectedKeys();
                } finally {
                    // ensure we always run tasks
                    final long ioTime = System.nanoTime() - ioStartTime;
                    ranTasks = runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
                }
            } else {
                ranTasks = runAllTasks(0); // this will run the minimum number of tasks
            }
            if (ranTasks || strategy > 0) {
                if (selectCnt > MIN_PREMATURE_SELECTOR_RETURNS && logger.isDebugEnabled()) {
                    logger.debug("Selector.select() returned prematurely {} times in a row for Selector {}.",
                            selectCnt - 1, selector);
                }
                selectCnt = 0;
            } else if (unexpectedSelectorWakeup(selectCnt)) { // unexpected wakeup (unusual case)
                selectCnt = 0;
            }
        } catch (Throwable t) {
            // exception handling and shutdown checks omitted
        }
    }
}
Pipeline: coordination in the service orchestration layer
EventLoop implements task scheduling and is responsible for listening for I/O events, signal events, and so on, but something still has to respond to those events and their data. That work is done by the ChannelHandlers defined in the ChannelPipeline, the core component of Netty's service orchestration layer.
In the following code, we add two inbound handlers, h1 and h2, to process data read from the client:
ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(bossGroup, workerGroup)
        // configure the server's channel, the equivalent of ServerSocketChannel in NIO
        .channel(NioServerSocketChannel.class)
        // childHandler configures a handler for the worker threads: as described in
        // the NIO section above, the business logic is abstracted into a Handler
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel socketChannel) throws Exception {
                // socketChannel.pipeline().addLast(new NormalMessageHandler());
                socketChannel.pipeline().addLast("h1", new ChannelInboundHandlerAdapter() {
                    @Override
                    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                        System.out.println("handler-01");
                        super.channelRead(ctx, msg);
                    }
                }).addLast("h2", new ChannelInboundHandlerAdapter() {
                    @Override
                    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                        System.out.println("handler-02");
                        super.channelRead(ctx, msg);
                    }
                });
            }
        });
The above code builds a ChannelPipeline with the structure shown in Figure 2-13. Each Channel is bound to one ChannelPipeline, and a ChannelPipeline contains multiple ChannelHandlers. Each handler is wrapped in a ChannelHandlerContext and added to the doubly linked list maintained by the pipeline.
ChannelHandlerContext holds the context of a ChannelHandler and covers all events in the handler's life cycle, such as connect, bind, read, and write. The advantage of this design is that when handlers pass data along, shared pre- and post-processing state can be saved directly in the ChannelHandlerContext and carried through the pipeline.
<center>Figure 2-13</center>
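One practical consequence of this linked-list structure deserves a small sketch (the handler name is made up for illustration): writing through the ChannelHandlerContext enters the pipeline at the current handler's position, while writing through the Channel starts from the tail, so the two paths traverse different sets of outbound handlers:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class EchoHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // starts the outbound pass at the tail of the pipeline, so every
        // ChannelOutboundHandler gets a chance to process the write
        ctx.channel().writeAndFlush(msg);
        // ctx.writeAndFlush(msg), by contrast, would enter the pipeline at this
        // handler's position and only pass through the outbound handlers before it
    }
}
```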
Outbound and inbound operations
Based on the direction of data flow, ChannelPipeline divides handlers into two kinds: inbound ChannelInboundHandler and outbound ChannelOutboundHandler, as shown in Figure 2-14. When the client sends data to the server, that data is outbound from the client's perspective; for the server, the same data flowing in from the client is inbound.
<center>Figure 2-14 The relationship between InBound and OutBound</center>
ChannelHandler event trigger mechanism
When a Channel triggers an I/O event, it is processed by a Handler. ChannelHandler is designed around the life cycle of I/O events: establishing a connection, reading data, writing data, and destroying the connection.
ChannelHandler has two important sub-interfaces, which intercept inbound and outbound I/O events respectively:
- ChannelInboundHandler
- ChannelOutboundHandler
The Adapter classes shown in Figure 2-15 provide default implementations for the many methods of ChannelHandler. Often a custom handler only needs to override one or two methods, so instead of implementing every method we can extend an Adapter class and inherit its defaults. Classes ending in Adapter play much the same role in other frameworks. So in practice, when using Netty we rarely implement the ChannelHandler interface directly; we usually extend an Adapter class.
<img src="https://mic-blob-bucket.oss-cn-beijing.aliyuncs.com/202111090025881.png" alt="image-20210816200206761" style="zoom:67%;" />
<center>Figure 2-15 ChannelHandler class relationship diagram</center>
The event callbacks of ChannelInboundHandler and their trigger timing are as follows:
| Event callback method | Trigger timing |
| --- | --- |
| channelRegistered | the Channel is registered with an EventLoop |
| channelUnregistered | the Channel is unregistered from its EventLoop |
| channelActive | the Channel is ready and can be read from and written to |
| channelInactive | the Channel is no longer active |
| channelRead | the Channel has data available to read from the remote peer |
| channelReadComplete | the Channel has finished a read operation |
| userEventTriggered | a user event is triggered |
| channelWritabilityChanged | the writability of the Channel has changed |
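As a hedged illustration of these callbacks (the handler and event names are invented for the example), a handler can override only the callbacks it cares about, relying on the defaults inherited from ChannelInboundHandlerAdapter:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class LifecycleLogHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channelActive: channel is ready to read and write");
        // raise a custom user event; downstream handlers receive it in userEventTriggered
        ctx.fireUserEventTriggered("connection-ready");
        super.channelActive(ctx);
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        System.out.println("userEventTriggered: " + evt);
        super.userEventTriggered(ctx, evt);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channelInactive: channel is no longer active");
        super.channelInactive(ctx);
    }
}
```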
The event callbacks of ChannelOutboundHandler and their trigger timing:
| Event callback method | Trigger timing |
| --- | --- |
| bind | called when a request is made to bind the channel to a local address |
| connect | called when a request is made to connect the channel to a remote node |
| disconnect | called when a request is made to disconnect the channel from the remote node |
| close | called when a request is made to close the channel |
| deregister | called when a request is made to deregister the channel from its EventLoop |
| read | called when a request is made to read data from the channel |
| flush | called when a request is made to flush queued data to the remote node through the channel |
| write | called when a request is made to write data to the remote node through the channel |
Demonstration of event propagation mechanism
public class NormalOutBoundHandler extends ChannelOutboundHandlerAdapter {
    private final String name;

    public NormalOutBoundHandler(String name) {
        this.name = name;
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        System.out.println("OutBoundHandler:" + name);
        super.write(ctx, msg, promise);
    }
}
public class NormalInBoundHandler extends ChannelInboundHandlerAdapter {
    private final String name;
    private final boolean flush;

    public NormalInBoundHandler(String name, boolean flush) {
        this.name = name;
        this.flush = flush;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("InboundHandler:" + name);
        if (flush) {
            ctx.channel().writeAndFlush(msg);
        } else {
            super.channelRead(ctx, msg);
        }
    }
}
ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(bossGroup, workerGroup)
        // configure the server's channel, the equivalent of ServerSocketChannel in NIO
        .channel(NioServerSocketChannel.class)
        // childHandler configures a handler for the worker threads: as described in
        // the NIO section above, the business logic is abstracted into a Handler
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel socketChannel) throws Exception {
                socketChannel.pipeline()
                        .addLast(new NormalInBoundHandler("NormalInBoundA", false))
                        .addLast(new NormalInBoundHandler("NormalInBoundB", false))
                        .addLast(new NormalInBoundHandler("NormalInBoundC", true));
                socketChannel.pipeline()
                        .addLast(new NormalOutBoundHandler("NormalOutBoundA"))
                        .addLast(new NormalOutBoundHandler("NormalOutBoundB"))
                        .addLast(new NormalOutBoundHandler("NormalOutBoundC"));
            }
        });
Running the above code produces the following output:
InboundHandler:NormalInBoundA
InboundHandler:NormalInBoundB
InboundHandler:NormalInBoundC
OutBoundHandler:NormalOutBoundC
OutBoundHandler:NormalOutBoundB
OutBoundHandler:NormalOutBoundA
When the client sends a request to the server, it triggers the server's inbound call chain, invoking the inbound handlers one by one in the order they were added. When inbound processing completes, writeAndFlush is called to write data back to the client, which triggers the write events of the outbound handler chain.
As the output shows, inbound and outbound events propagate in opposite directions: inbound events propagate from head to tail, while outbound events propagate from tail to head.
Exception propagation mechanism
The ChannelPipeline event propagation mechanism is a typical chain of responsibility pattern, so some readers will naturally ask: if a handler in the chain throws an exception, what happens? Let's modify the NormalInBoundHandler from the previous example.
public class NormalInBoundHandler extends ChannelInboundHandlerAdapter {
    private final String name;
    private final boolean flush;

    public NormalInBoundHandler(String name, boolean flush) {
        this.name = name;
        this.flush = flush;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("InboundHandler:" + name);
        if (flush) {
            ctx.channel().writeAndFlush(msg);
        } else {
            // simulate a failure by throwing an exception
            throw new RuntimeException("InBoundHandler:" + name);
        }
    }
}
Once an exception is thrown here, the whole request chain is interrupted. ChannelHandler provides an exception-capture method, exceptionCaught, which can prevent one faulty handler from breaking the request chain. The exception propagates along the handler chain toward the tail node; if no user handler deals with it, the tail node handles it in a unified way.
Modify NormalInBoundHandler and override the following method:
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
    System.out.println("InboundHandlerException:" + name);
    super.exceptionCaught(ctx, cause);
}
In Netty application development, good exception handling is very important for troubleshooting, so we can solve the problem with a single, unified interception point.
Add a duplex handler implementation that handles exceptions in one place:
public class ExceptionHandler extends ChannelDuplexHandler {
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        if (cause instanceof RuntimeException) {
            System.out.println("handle the business exception");
        }
        super.exceptionCaught(ctx, cause);
    }
}
Add the new ExceptionHandler to the ChannelPipeline:
bootstrap.group(bossGroup, workerGroup)
        // configure the server's channel, the equivalent of ServerSocketChannel in NIO
        .channel(NioServerSocketChannel.class)
        // childHandler configures a handler for the worker threads: as described in
        // the NIO section above, the business logic is abstracted into a Handler
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel socketChannel) throws Exception {
                socketChannel.pipeline()
                        .addLast(new NormalInBoundHandler("NormalInBoundA", false))
                        .addLast(new NormalInBoundHandler("NormalInBoundB", false))
                        .addLast(new NormalInBoundHandler("NormalInBoundC", true));
                socketChannel.pipeline()
                        .addLast(new NormalOutBoundHandler("NormalOutBoundA"))
                        .addLast(new NormalOutBoundHandler("NormalOutBoundB"))
                        .addLast(new NormalOutBoundHandler("NormalOutBoundC"))
                        .addLast(new ExceptionHandler());
            }
        });
With this in place we achieve unified exception handling. Note that the ExceptionHandler is added at the tail of the pipeline, so it sees exceptions propagated from every handler before it.
Copyright statement: Unless otherwise stated, all articles on this blog are licensed under CC BY-NC-SA 4.0. Please credit reprints to "Mic takes you to learn architecture"!
If this article helped you, please follow and like; your support is the motivation for my continued writing. Follow the WeChat public account of the same name for more technical content!