【Chapter 5 Buffers】
Netty's buffer API has two interfaces:

  • ByteBuf
  • ByteBufHolder

Netty uses reference counting to know when it's safe to release a ByteBuf and its claimed resources. While it's useful to know that Netty uses reference counting, it's all done automatically. This allows Netty to use pooling and other tricks to speed things up and keep memory utilization at a sane level. You aren't required to do anything to make this happen, but when developing a Netty application, you should try to process your data and release pooled resources as soon as possible.
In other words, Netty uses reference counting to manage when a buffer is released; a Buf whose reference count drops to zero is returned to the pool for reuse, which avoids unnecessary memory allocation and deallocation. This requires the programmer to release reference-counted resources (such as a pooled ByteBuf) as soon as possible, otherwise a memory leak results: unless the JVM process exits, the Buf's memory is never freed and never reused.
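
A minimal sketch of what reference counting looks like in application code (assuming Netty 4.x; the class and variable names here are illustrative, not from the book):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.ReferenceCountUtil;

public class RefCountExample {
    public static void main(String[] args) {
        // Allocate a pooled buffer; its reference count starts at 1.
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(256);
        System.out.println(buf.refCnt());      // 1

        buf.retain();                          // e.g. before handing the buffer to another component
        System.out.println(buf.refCnt());      // 2

        buf.release();                         // back to 1
        boolean freed = buf.release();         // 0: the buffer is returned to the pool
        System.out.println(freed);             // true

        // In a ChannelHandler, ReferenceCountUtil.release(msg) releases a message
        // only if it is actually reference counted, which is a common safety net.
        Object msg = PooledByteBufAllocator.DEFAULT.buffer(16);
        ReferenceCountUtil.release(msg);
    }
}
```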

Netty provides three kinds of ByteBuf:

  1. HEAP BUFFERS:
    The most used type is the ByteBuf that stores its data in the heap space of the JVM. This is done by storing it in a backing array. This type is fast to allocate and also de-allocate when you're not using a pool. It also offers a way to directly access the backing array, which may make it easier to interact with legacy code.
  2. DIRECT BUFFERS
    Another ByteBuf implementation is the direct one. Direct means that it allocates the memory directly, outside the heap, so you won't see its memory usage in your heap space. You must take this into account when calculating the maximum amount of memory your application will use and how to limit it, as the max heap size won't be enough. Direct buffers, on the other hand, are optimal when it's time to transfer data over a socket. In fact, if you use a non-direct buffer, the JVM will internally copy your buffer to a direct buffer before sending it over the socket.
    The downside of direct buffers is that they're more expensive to allocate and de-allocate compared to heap buffers. This is one of the reasons why Netty supports pooling, which makes this problem disappear. Another possible downside is that you're no longer able to access the data via a backing array, so you'll need to make a copy of the data if you need to work with legacy code that requires it.
  3. COMPOSITE BUFFERS
    The last ByteBuf implementation you may be confronted with is the CompositeByteBuf. This does exactly what its name says; it allows you to compose different ByteBuf instances and provides a view over them. The good thing is you can also add and remove them on the fly, so it's kind of like a List.
    For example, a message could be composed of two parts: header and body. In a modularized application, the two parts could be produced by different modules and assembled later when the message is sent out. Also, you may use the same body all the time and just change the header. So it would make sense here to not allocate a new buffer every time.
    This would be a perfect fit for a CompositeByteBuf as no memory copy will be needed and the same API could be used as with non-composite buffers.
    You also may welcome the fact that Netty optimizes read and write operations on the socket whenever possible while using a CompositeByteBuf. This means that using gathering and scattering doesn't incur performance penalties when reading from or writing to a socket, nor does it suffer from the memory-leak issues in the JDK's implementation. All of this is done in the core of Netty itself, so you don't need to worry about it too much, but it can't hurt to know that some optimization is done under the hood. The sketch after this list illustrates all three buffer types.
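
The following sketch illustrates the three buffer types described above. It assumes Netty 4.1 or later (the addComponents(true, ...) overload that advances the writerIndex); the class and variable names are illustrative:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import java.nio.charset.StandardCharsets;

public class BufferTypes {
    public static void main(String[] args) {
        // 1. Heap buffer: backed by a byte[], which can be accessed directly.
        ByteBuf heapBuf = Unpooled.buffer(16);
        heapBuf.writeBytes("header".getBytes(StandardCharsets.UTF_8));
        if (heapBuf.hasArray()) {                       // true for heap buffers
            byte[] array = heapBuf.array();
            int offset = heapBuf.arrayOffset() + heapBuf.readerIndex();
            int length = heapBuf.readableBytes();
            // hand array/offset/length to legacy code that expects a byte[]
        }

        // 2. Direct buffer: memory lives outside the JVM heap; no backing array.
        ByteBuf directBuf = Unpooled.directBuffer(16);
        directBuf.writeBytes("body".getBytes(StandardCharsets.UTF_8));
        if (!directBuf.hasArray()) {
            // copy the data out if legacy code needs a byte[]
            byte[] copy = new byte[directBuf.readableBytes()];
            directBuf.getBytes(directBuf.readerIndex(), copy);
        }

        // 3. Composite buffer: a view over several ByteBufs, no memory copy.
        CompositeByteBuf message = Unpooled.compositeBuffer();
        // 'true' advances the writerIndex so the added components become readable.
        message.addComponents(true, heapBuf, directBuf);
        System.out.println(message.readableBytes());    // header + body combined

        message.release();                              // also releases the components
    }
}
```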


Searching within a ByteBuf (see the sketch after this list):

  • indexOf()
  • the ByteBufProcessor interface
  • bytesBefore()
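
A short sketch of these three search mechanisms (Netty 4.0-style API; in Netty 4.1 ByteBufProcessor is deprecated in favor of io.netty.util.ByteProcessor, but the predefined constants still work):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufProcessor;
import io.netty.buffer.Unpooled;
import java.nio.charset.StandardCharsets;

public class SearchExample {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.copiedBuffer("name=value\r\n", StandardCharsets.UTF_8);

        // indexOf: absolute index of the first '=' between readerIndex and writerIndex
        int eq = buf.indexOf(buf.readerIndex(), buf.writerIndex(), (byte) '=');

        // bytesBefore: number of readable bytes before the first '=' (relative to readerIndex)
        int keyLength = buf.bytesBefore((byte) '=');

        // ByteBufProcessor: find the first carriage return using a predefined processor
        int crIndex = buf.forEachByte(ByteBufProcessor.FIND_CR);

        System.out.println(eq + " " + keyLength + " " + crIndex);   // 4 4 10
    }
}
```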

To create a view of an existing buffer, call duplicate(), slice(), slice(int, int), readOnly(), or order(ByteOrder). A derived buffer has independent readerIndex, writerIndex, and marker indexes, but it shares the other internal data representation, the way a NIO ByteBuffer does. Because it shares the internal data representation, it's cheap to create and is the preferred way if, for example, you need a slice of a ByteBuf in an operation.
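
A small sketch showing that a slice shares the underlying data with its parent while keeping independent indexes (names are illustrative):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import java.nio.charset.StandardCharsets;

public class SliceExample {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.copiedBuffer("Netty in Action", StandardCharsets.UTF_8);

        // slice(index, length) shares the underlying memory with the original buffer
        ByteBuf slice = buf.slice(0, 5);                // "Netty"
        System.out.println(slice.toString(StandardCharsets.UTF_8));

        // Writing through the original is visible in the slice (shared data) ...
        buf.setByte(0, 'J');
        System.out.println(slice.getByte(0) == 'J');    // true

        // ... but the indexes are independent: reading the slice does not move buf's readerIndex
        slice.readByte();
        System.out.println(buf.readerIndex());          // 0

        // copy(index, length) instead returns an independent copy of the data
        ByteBuf copy = buf.copy(0, 5);
        buf.setByte(0, 'N');
        System.out.println(copy.getByte(0) == 'J');     // true: the copy is unaffected
    }
}
```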

【P84 ByteBufHolder】
If you want to implement a message object that stores its payload/data in a ByteBuf, it's always a good idea to make use of ByteBufHolder.
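
A minimal, hypothetical example of such a message type built on DefaultByteBufHolder (the ChatMessage class and its sender field are made up for illustration; DefaultByteBufHolder already supplies content(), retain(), release(), and friends):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.DefaultByteBufHolder;
import io.netty.buffer.Unpooled;
import java.nio.charset.StandardCharsets;

public class ChatMessage extends DefaultByteBufHolder {

    private final String sender;   // illustrative metadata carried alongside the payload

    public ChatMessage(String sender, ByteBuf payload) {
        super(payload);            // the holder takes care of the reference-counted payload
        this.sender = sender;
    }

    public String sender() {
        return sender;
    }

    public static void main(String[] args) {
        ChatMessage msg = new ChatMessage("alice",
                Unpooled.copiedBuffer("hello", StandardCharsets.UTF_8));
        System.out.println(msg.sender() + ": "
                + msg.content().toString(StandardCharsets.UTF_8));
        msg.release();             // releases the underlying ByteBuf
    }
}
```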

Netty's buffer utility classes:

  1. ByteBufAllocator Interface
    As mentioned before, Netty supports pooling for the various ByteBuf implementations. To make this possible it provides an abstraction called ByteBufAllocator. As the name implies, it's responsible for allocating ByteBuf instances of the previously explained types. Whether these are pooled or not is specific to the implementation but doesn't change the way you operate with it.
    Netty comes with two different implementations of ByteBufAllocator. One implementation pools ByteBuf instances to minimize the allocation/de-allocation cost and keep memory fragmentation to a minimum. How exactly this is implemented is outside the scope of this book, but note that it's based on the jemalloc paper and so uses the same algorithm many operating systems use to allocate memory efficiently.
    The other implementation does not pool ByteBuf instances at all and returns a new instance every time. Netty uses the PooledByteBufAllocator (the pooled implementation of ByteBufAllocator) by default, but this can be changed easily, either through the ChannelConfig or by specifying a different allocator when bootstrapping the server. More details can be found in chapter 9, Bootstrap your application.
  2. Unpooled: Buffer creation made easy
    This class contains static helper methods to create unpooled ByteBuf instances.
  3. ByteBufUtil: Small but useful
    Another useful class is the ByteBufUtil class. This class offers static helper methods that are helpful when operating on a ByteBuf. One of the main reasons to have these outside the Unpooled class mentioned before is that these methods are generic and aren't dependent on a ByteBuf being pooled or not.
    Perhaps the most valuable is the hexDump() method, which is provided as a static method like the others in this class. A short sketch after this list shows all three utility classes in use.
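
The following sketch shows the three utility classes together (Netty 4.x; in real handler code you would usually obtain the allocator via ctx.alloc() or channel.alloc() rather than the static DEFAULT instances used here for brevity):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.ByteBufUtil;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.Unpooled;
import io.netty.buffer.UnpooledByteBufAllocator;
import java.nio.charset.StandardCharsets;

public class AllocatorExample {
    public static void main(String[] args) {
        // ByteBufAllocator: the abstraction over pooled and unpooled allocation
        ByteBufAllocator pooled = PooledByteBufAllocator.DEFAULT;
        ByteBufAllocator unpooled = UnpooledByteBufAllocator.DEFAULT;

        ByteBuf heap = pooled.heapBuffer(128);        // pooled heap buffer
        ByteBuf direct = unpooled.directBuffer(128);  // unpooled direct buffer

        // Unpooled: static helpers when no allocator is at hand
        ByteBuf wrapped = Unpooled.wrappedBuffer("hello".getBytes(StandardCharsets.UTF_8));

        // ByteBufUtil: generic helpers that work on any ByteBuf, pooled or not
        System.out.println(ByteBufUtil.hexDump(wrapped));   // 68656c6c6f
        ByteBuf other = Unpooled.copiedBuffer("hello", StandardCharsets.UTF_8);
        System.out.println(ByteBufUtil.equals(wrapped, other));  // true

        heap.release();
        direct.release();
        wrapped.release();
        other.release();
    }
}
```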
