
Preface

I have recently watched a lot of videos and read a lot of books about Netty, and I have learned a great deal. I hope to share what I know with you so that we can grow together. Earlier we covered Java IO, BIO, NIO, and AIO; the related articles are linked below:

In-depth Analysis of Java IO (1): Overview

In-depth Analysis of Java IO (2): BIO

In-depth Analysis of Java IO (3): NIO

In-depth Analysis of Java IO (4): AIO

In this article, we start our in-depth analysis of Netty. First, let's understand the shortcomings of Java NIO and AIO.

The pain points of the native Java API

Although the Java NIO and Java AIO frameworks provide support for multiplexed and asynchronous I/O, they do not provide a good encapsulation of the upper-level application "message format". Implementing a real network application directly with these APIs is not easy.

Java NIO and Java AIO do not handle disconnection and reconnection, intermittent network failures, half-packet reads and writes, caching of failed writes, network congestion, abnormal byte streams, and so on; all of this work is left to the developer.

In practice, AIO performs no better than NIO. AIO has different implementations on different platforms: on Windows it uses a true asynchronous IO technology, IOCP; on Linux, which lacks such a technology, asynchronous IO is simulated with epoll. As a result, AIO's performance on Linux is not ideal. AIO also does not provide support for UDP.

In summary, the native Java APIs are rarely used directly in large-scale real-world Internet projects. Instead, a third-party framework is used, and that framework is Netty.

Netty's advantages

Netty provides an asynchronous, event-driven network application framework and tools for quickly developing high-performance, high-reliability network servers and clients.

Non-blocking I/O

Netty is a network application framework based on the Java NIO API. It can be used to quickly and easily develop network applications, such as server and client programs. Netty greatly simplifies the process of network program development, such as the development of TCP and UDP Socket services.

Because it is based on the NIO API, Netty can provide non-blocking I/O operations, which greatly improves performance. At the same time, Netty encapsulates the complexity of the Java NIO API and provides thread pool processing, making it extremely simple to develop NIO applications.

Rich protocol support

Netty provides a simple and easy-to-use API, but that does not mean applications built on it are hard to maintain or perform poorly. Netty is a well-designed framework that has absorbed the experience of implementing many protocols, such as FTP, SMTP, HTTP, and many binary and text-based legacy protocols.

Netty supports a wealth of network protocols, such as TCP, UDP, HTTP, HTTP/2, WebSocket, SSL/TLS, and so on. These protocols can be used out of the box, so developers can build applications easily without sacrificing flexibility, performance, or stability.

Asynchronous and event driven

Netty is an asynchronous, event-driven framework. In Netty, all I/O operations are asynchronous: every I/O call returns immediately, with no guarantee that the operation has succeeded, but the call returns a ChannelFuture. Through the ChannelFuture, Netty notifies the caller whether the operation succeeded, failed, or was cancelled.

At the same time, Netty is event-driven: the caller cannot get the result immediately, but through the event-listening mechanism the user can easily obtain the result of the I/O operation.

When a Future object is first created, it is in an uncompleted state. The caller can query the state of the operation through the ChannelFuture, or register a listener to run logic when the operation completes. Common operations are as follows:

  • Use the isDone method to determine whether the current operation is complete.
  • Use the isSuccess method to determine whether the current operation that has been completed is successful.
  • Use the getCause method to obtain the reason for the failure of the current operation that has been completed.
  • Use the isCancelled method to determine whether the current operation that has been completed has been cancelled.
  • Register a listener via the addListener method: if the operation has not yet completed, the listener is notified when it does (isDone returns true); if the Future has already completed, the listener is notified immediately.

For example, the port binding in the following code is an asynchronous operation. When the binding completes, the corresponding listener logic is invoked:

serverBootstrap.bind(port).addListener(future -> {
    if (future.isSuccess()) {
        System.out.println("Port bound successfully!");
    } else {
        System.out.println("Port binding failed!");
    }
});

Compared with traditional blocking I/O, the advantage of Netty's asynchronous processing is that it never blocks a thread: a thread can continue with other work while the I/O operation is in progress, which gives more stable behavior and higher throughput under high concurrency.

Well-designed API

Netty has provided users with the best API and implementation design from the beginning.

For example, when the number of users is small, you may choose the traditional blocking API. After all, it will be easier to use the blocking API compared to Java NIO. However, when the business volume grows exponentially and the server needs to handle thousands of client connections at the same time, problems will arise. In this case, you may try to use Java NIO, but the complicated NIO Selector programming interface will consume a lot of time and will eventually hinder rapid development.

Netty provides a unified asynchronous I/O Channel, which abstracts all point-to-point communication operations. In other words, if an application is built on one of Netty's transport implementations, the same application can also run on another of Netty's transport implementations. The common sub-interfaces of Channel are:

(Figure: common sub-interfaces of Channel)

Rich buffer implementation

Netty uses its own buffer API rather than Java NIO's ByteBuffer to represent a contiguous sequence of bytes. Compared with ByteBuffer, this approach has obvious advantages.

Netty introduces a new buffer type, ByteBuf, designed from the ground up to solve the problems of ByteBuffer while also meeting the needs of day-to-day network application development.

Netty's buffer API has the following features:

  • Allows the use of custom buffer types.
  • Transparent zero copy is achieved with a built-in composite buffer type.
  • An out-of-the-box dynamic buffer type whose capacity grows on demand, just like StringBuffer.
  • It is no longer necessary to call the flip() method (see the sketch below).
  • In most cases it is faster than ByteBuffer.
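
For example, the minimal sketch below (assuming Netty 4.x's io.netty.buffer API) shows that ByteBuf keeps independent reader and writer indexes, so no flip() call is needed between writing and reading:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class ByteBufDemo {
    public static void main(String[] args) {
        // Writing and reading use independent indexes, so no flip() is required.
        ByteBuf buf = Unpooled.buffer(16); // the capacity grows automatically if exceeded
        buf.writeBytes("hello netty".getBytes(CharsetUtil.UTF_8));

        System.out.println("readableBytes = " + buf.readableBytes()); // 11
        System.out.println(buf.toString(CharsetUtil.UTF_8));          // hello netty

        buf.release(); // ByteBuf is reference counted; release it when done
    }
}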

Efficient network transmission

Java's native serialization mainly has the following disadvantages:

  • It cannot be used across languages.
  • The serialized byte stream is too large.
  • Serialization performance is poor.

There are many frameworks in the industry that solve these problems, such as Google Protobuf, JBoss Marshalling, and Facebook Thrift. Netty provides corresponding codecs that integrate these frameworks into applications. Netty itself also ships with numerous codec tools that developers can use to build efficient network applications, for example the high-performance message middleware Apache RocketMQ and the high-performance RPC framework Apache Dubbo.
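
As an illustration, the sketch below shows how codec handlers are plugged into the channel pipeline; the initializer class name is hypothetical, while LineBasedFrameDecoder, StringDecoder, and StringEncoder are codecs shipped with Netty. A Protobuf or Marshalling codec would be added to the pipeline in the same way.

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.util.CharsetUtil;

// Hypothetical initializer: splits the inbound stream by line, then decodes/encodes strings.
public class TextCodecInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
                .addLast(new LineBasedFrameDecoder(1024))        // handles half/sticky packets by splitting on line breaks
                .addLast(new StringDecoder(CharsetUtil.UTF_8))   // inbound: ByteBuf -> String
                .addLast(new StringEncoder(CharsetUtil.UTF_8));  // outbound: String -> ByteBuf
        // a business ChannelHandler would be added after the codecs
    }
}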

Netty core concepts

(Figure: Netty feature overview)

As the architecture diagram above shows, Netty is mainly composed of three parts:

  • Core components
  • Transport services
  • Protocol support

Core components

The core components include the event model, the byte buffer, and the communication API.

Event model

Netty is built on an asynchronous, event-driven model. All I/O operations are asynchronous: the caller cannot get the result immediately, but through the event-listening mechanism the user can easily obtain the result of the I/O operation, either by querying it actively or through the notification mechanism.

Netty classifies all events according to their relevance to inbound or outbound data flow.

Events that may be triggered by inbound data or related state changes include the following:

  • The connection has been activated or the connection has been deactivated.
  • Data read.
  • User events.
  • Error event.

An outbound event is the result of an action that will be triggered in the future, including the following actions:

  • Open or close the connection to the remote node.
  • Write or flush data to the socket.

Each event can be dispatched to a user-implemented method in a ChannelHandler class, as sketched below.
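
As a rough illustration (separate from the example application later in this article), the inbound events listed above map onto overridable methods of ChannelInboundHandlerAdapter roughly as follows; InboundEventHandler is just a placeholder name:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class InboundEventHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(ChannelHandlerContext ctx) {                    // connection activated
        System.out.println("channel active: " + ctx.channel().remoteAddress());
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {                  // connection deactivated
        System.out.println("channel inactive");
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {          // data read
        ctx.fireChannelRead(msg);                                             // pass the message to the next handler
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {   // user events
        ctx.fireUserEventTriggered(evt);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { // error events
        cause.printStackTrace();
        ctx.close();
    }
}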

Byte buffer

Netty uses a new buffer type, ByteBuf, as its replacement for Java's ByteBuffer. ByteBuf provides a wealth of features.

Communication API

Netty's communication API is abstracted into Channel, a unified asynchronous I/O programming interface that covers all point-to-point communication operations.

Transport services

Netty ships with several transports out of the box. Because not every transport supports every protocol, you must select a transport that is compatible with the protocol your application uses. The following are the transports provided by Netty.

NIO

The io.netty.channel.socket.nio package supports NIO. Its underlying implementation uses the java.nio.channels package (selector based).

epoll

The io.netty.channel.epoll package supports JNI-driven epoll non-blocking IO.

Note that the epoll transport is only supported on Linux. epoll also offers additional features such as SO_REUSEPORT, is faster than the NIO transport, and is completely non-blocking.
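
For example, on Linux the server bootstrap shown later in this article could switch from NIO to the native epoll transport simply by swapping the event-loop group and channel classes. This is only a sketch; it assumes the native epoll transport library is available on the classpath:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;

public class EpollServerSketch {
    public static void main(String[] args) {
        // Linux only: the native epoll transport must be present at runtime.
        EventLoopGroup bossGroup = new EpollEventLoopGroup(1);
        EventLoopGroup workerGroup = new EpollEventLoopGroup();

        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(bossGroup, workerGroup)
                .channel(EpollServerSocketChannel.class); // instead of NioServerSocketChannel.class
        // option(), childHandler() and bind() are configured exactly as in the NIO version
    }
}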

OIO

The io.netty.channel.socket.oio package supports blocking I/O based on the java.net package.

Local

The io.netty.channel.local package supports local transport, where communication happens through pipes within the same JVM.

Embedded

The io.netty.channel.embedded package provides an embedded transport that allows ChannelHandlers to be exercised without a real network-based transport; it is typically used for testing ChannelHandler implementations.
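
A minimal sketch of the embedded transport might look like the following; EchoHandler and EmbeddedTransportSketch are hypothetical names, while EmbeddedChannel, writeInbound, and readOutbound are part of Netty's API:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.util.CharsetUtil;

public class EmbeddedTransportSketch {

    // Hypothetical handler: echoes every inbound message back out unchanged.
    static class EchoHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ctx.writeAndFlush(msg);
        }
    }

    public static void main(String[] args) {
        // No sockets or event loops are involved: messages are pushed through the pipeline directly.
        EmbeddedChannel channel = new EmbeddedChannel(new EchoHandler());

        channel.writeInbound(Unpooled.copiedBuffer("ping", CharsetUtil.UTF_8)); // simulate inbound data
        ByteBuf response = channel.readOutbound();                              // read what the handler wrote out

        System.out.println(response.toString(CharsetUtil.UTF_8)); // prints: ping
        channel.finish();
    }
}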

Protocol support

Netty supports a wealth of network protocols, such as TCP, UDP, HTTP, HTTP/2, WebSocket, SSL/TLS, and so on. These protocols can be used out of the box, so developers can build applications easily without sacrificing flexibility, performance, or stability.

A simple Netty application

Add the Maven dependency

<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.49.Final</version>
</dependency>

Server-side pipeline handler

public class NettyServerHandler extends ChannelInboundHandlerAdapter {

    //Read the data sent by the client.
    /*
    1. ChannelHandlerContext ctx: the context object, which contains the pipeline, the channel and the address
    2. Object msg: the data sent by the client, passed as Object by default
     */
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("server ctx =" + ctx);
        Channel channel = ctx.channel();
        //Cast msg to a ByteBuf.
        //ByteBuf is provided by Netty; it is not NIO's ByteBuffer.
        ByteBuf buf = (ByteBuf) msg;
        System.out.println("Message from client: " + buf.toString(CharsetUtil.UTF_8));
        System.out.println("Client address: " + channel.remoteAddress());
    }


    //Called when the data has been fully read
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        //writeAndFlush is write + flush:
        //it writes the data to the buffer and flushes it.
        //Normally the outgoing data would be encoded first.
        ctx.writeAndFlush(Unpooled.copiedBuffer("The company account is a bit short on cash lately, wait a few more days!", CharsetUtil.UTF_8));
    }

    //Handle exceptions; usually the channel should be closed
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }
}

NettyServerHandler extends ChannelInboundHandlerAdapter, which implements the ChannelInboundHandler interface. ChannelInboundHandler provides many methods for handling events.

Here we override the channelRead() event handling method: it is called whenever new data is received from the client.

The channelReadComplete() event handling method is invoked when the data has been fully read. By calling writeAndFlush() on the ChannelHandlerContext, the reply is written to the pipeline and ultimately sent back to the client.

The exceptionCaught() event handling method is called when a Throwable is raised; here we simply close the channel.

Server main program

public class NettyServer {

    public static void main(String[] args) throws Exception {
        //Create bossGroup and workerGroup
        //Notes:
        //1. Two thread groups are created: bossGroup and workerGroup
        //2. bossGroup only handles connection requests; the actual client business logic is handled by workerGroup
        //3. Both run as endless loops
        //4. The number of child threads (NioEventLoop) in bossGroup and workerGroup
        //   defaults to number of CPU cores * 2
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup(); //8 on a machine with 4 cores
        try {
            //Create the server bootstrap object and configure its parameters
            ServerBootstrap bootstrap = new ServerBootstrap();
            //Configure the bootstrap with method chaining
            bootstrap.group(bossGroup, workerGroup) //set the two thread groups
                    .channel(NioServerSocketChannel.class) //use NioServerSocketChannel as the server channel implementation
                    .option(ChannelOption.SO_BACKLOG, 128) //set the queue size for pending connections; option() targets the boss group
                    .childOption(ChannelOption.SO_KEEPALIVE, true) //keep connections alive; childOption() targets the worker group
                    .childHandler(new ChannelInitializer<SocketChannel>() { //channel initializer (anonymous object) for the SocketChannels handled by workerGroup
                        //set handlers on the pipeline
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            //A collection could be used to manage SocketChannels; when pushing messages, tasks can be added
                            //to the taskQueue or scheduleTaskQueue of the NioEventLoop bound to each channel
                            ch.pipeline().addLast(new NettyServerHandler());
                        }
                    }); //set the handler for the pipeline of the workerGroup's EventLoops

            System.out.println(".....server is ready...");
            //Bind a port and call sync(); this returns a ChannelFuture object
            //Start the server (and bind the port)
            ChannelFuture cf = bootstrap.bind(7788).sync();
            //Register a listener on cf to monitor the events we care about
            cf.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    if (future.isSuccess()) {
                        System.out.println("Service started, listening on port 7788...");
                    } else {
                        System.out.println("Failed to start the service...");
                    }
                }
            });
            //Listen for the channel close event
            cf.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}

NioEventLoopGroup is a multi-threaded event loop that handles I/O operations. Netty provides different EventLoopGroup implementations for different transports.

In the server application above, two NioEventLoopGroup instances are used. The first, called bossGroup, receives incoming connections. The second, called workerGroup, processes the connections that have been accepted. Once bossGroup accepts a connection, it registers that connection with workerGroup.

ServerBootstrap is a helper class that bootstraps an NIO server; it saves you from having to assemble and configure the server Channel by hand.

  • The group method sets the EventLoopGroup(s).
  • The channel method specifies the type of the Channel; here it is NioServerSocketChannel.
  • childHandler specifies the ChannelHandler for accepted connections, here the previously implemented NettyServerHandler.
  • option sets configuration parameters for the server Channel itself, the NioServerSocketChannel.
  • childOption sets options for the child SocketChannels accepted by the server.
  • bind binds the port and starts the service.

Client-side pipeline handler

public class NettyClientHandler extends ChannelInboundHandlerAdapter {

    //Triggered as soon as the channel is ready
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("client ctx =" + ctx);
        ctx.writeAndFlush(Unpooled.copiedBuffer("Boss, when will you pay my salary?", CharsetUtil.UTF_8));
    }

    //Triggered when the channel has a read event
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf buf = (ByteBuf) msg;
        System.out.println("Reply from server: " + buf.toString(CharsetUtil.UTF_8));
        System.out.println("Server address: " + ctx.channel().remoteAddress());
    }

    //Handle exceptions; usually the channel should be closed
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

The channelRead method converts the received message into a string so it can be printed on the console.

The message received in channelRead is of type ByteBuf, and ByteBuf provides a convenient method for converting it to a string.

Client main program

public class NettyClient {

    public static void main(String[] args) throws Exception {
        //The client needs a single event loop group
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            //Create the client bootstrap object
            //Note that the client uses Bootstrap, not ServerBootstrap
            Bootstrap bootstrap = new Bootstrap();
            //Set the related parameters
            bootstrap.group(group) //set the thread group
                    .channel(NioSocketChannel.class) //set the channel implementation class for the client (instantiated via reflection)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(new NettyClientHandler()); //add our own handler
                        }
                    });
            System.out.println("Client ok..");
            //Start the client and connect to the server
            //ChannelFuture is worth a closer look later; it is part of Netty's asynchronous model
            ChannelFuture channelFuture = bootstrap.connect("127.0.0.1", 7788).sync();
            //Listen for the channel close event
            channelFuture.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}

The client only needs one NioEventLoopGroup .

Test run

Start the server program NettyServer and then the client program NettyClient.

Server console output:

.....server is ready...
Service started, listening on port 7788...
server ctx =ChannelHandlerContext(NettyServerHandler#0, [id: 0xa1b2233c, L:/127.0.0.1:7788 - R:/127.0.0.1:63239])
Message from client: Boss, when will you pay my salary?
Client address: /127.0.0.1:63239

Client console output:

Client ok..
client ctx =ChannelHandlerContext(NettyClientHandler#0, [id: 0x21d6f98e, L:/127.0.0.1:63239 - R:/127.0.0.1:7788])
Reply from server: The company account is a bit short on cash lately, wait a few more days!
Server address: /127.0.0.1:7788

At this point, a simple server and client developed based on Netty are complete.

Summary

This article mainly covered Netty's background, its key features and core components, and how to quickly build a first Netty application.

In later articles we will analyze Netty's architecture design, Channel, ChannelHandler, the ByteBuf byte buffer, the threading model, codecs, the bootstrap classes, and more.

end

I am a programmer who is still getting knocked around and working hard to level up. If this article helped you, remember to like and follow. Thank you!

