
Guided reading

 This article is based on Go source version 1.16, a 64-bit Linux platform, and 1 page = 8 KB; "memory" in this article refers specifically to virtual memory.

Today we move on to the second part of the "Memory and Garbage Collection" chapter of the "Go Language Easy and Advanced" series: "Go Language Memory Management".

About the "Memory and Garbage Collection" chapter, it will be expanded from the following three parts:

The first part, "Pre-reading Knowledge Reserve", has already been completed; to get more out of this article, you can follow the historical links to view or review it.

Contents

My plan for explaining the "Go language memory management" part is as follows:

  1. Introduce the overall architecture
  2. Introduce a particularly interesting point in the architecture design
  3. Introduce the key structure mspan in Go memory management, and through it the concepts of page, mspan, object, sizeclass, spanclass, heaparena, and chunk
  4. Then introduce the allocation of heap memory and stack memory
  5. Review and Summary

Breaking that plan down yields the following table of contents:

  • Go memory management architecture (the content of this article)

    • mcache
    • mcentral
    • mheap
  • Why is the thread cache mcache held by the logical processor p rather than by the system thread m ?
  • Go Memory Management Unit mspan

    • The concept of page
    • The concept of mspan
    • The concept of object
    • The concept of sizeclass
    • The concept of spanclass
    • The concept of heaparena
    • The concept of chunk
  • Go heap memory allocation

    • Micro object allocation
    • Small object allocation
    • Large object allocation
  • Go stack memory allocation

    • Stack memory allocation timing
    • Stack allocations smaller than 32KB
    • Stack allocations greater than or equal to 32KB

Go memory management architecture

Go's memory is managed uniformly by its memory manager. The Go memory manager is designed and implemented based on the ideas of Google's open-source TCMalloc memory allocator; a detailed introduction to TCMalloc can be found in the previous article.

Let's briefly review the core design of the memory allocator TCMalloc .

Recap: The TCMalloc Memory Allocator

What is the background behind the birth of TCMalloc ?

In today's era of multi-core CPUs and hyper-threading, multi-threading is widely used in all kinds of programming languages. Because threads share memory, when multiple threads request memory (virtual memory) in parallel, the allocation is unsafe without coordination.

To make memory allocation safe enough, the allocation path has to be protected by locks. Locking causes blocking and hurts performance. Against this background, the TCMalloc memory allocator was created and open sourced.

How does TCMalloc solve this problem?

TCMalloc stands for Thread-Caching Malloc, i.e. a thread-caching memory allocator. As the name implies, it adds a memory cache to each thread to reduce contention and improve performance. Only when a thread's cache runs out of memory does it take a lock and fetch more memory from the shared layers.
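To make this idea concrete, here is a minimal, runnable sketch of the pattern, not TCMalloc's or Go's actual code: the names sharedPool and worker are invented for illustration. Each worker keeps a local cache and only takes the shared lock when that cache runs dry.

```go
package main

import (
	"fmt"
	"sync"
)

// sharedPool stands in for the globally shared memory source. Every refill
// from it has to take a lock, which is exactly the contention TCMalloc tries
// to keep off the hot path.
type sharedPool struct {
	mu     sync.Mutex
	served int
}

// refill hands out a batch of n dummy "objects" under the lock.
func (p *sharedPool) refill(n int) []int {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.served += n
	return make([]int, n)
}

// worker models a thread with a local cache: it only touches the shared pool
// (and its lock) when the local cache is empty, instead of on every allocation.
func worker(pool *sharedPool, allocs, batch int, wg *sync.WaitGroup) {
	defer wg.Done()
	var cache []int
	for i := 0; i < allocs; i++ {
		if len(cache) == 0 {
			cache = pool.refill(batch) // the only locked path
		}
		cache = cache[1:] // "allocate" one object from the local cache
	}
}

func main() {
	pool := &sharedPool{}
	var wg sync.WaitGroup
	for t := 0; t < 4; t++ {
		wg.Add(1)
		go worker(pool, 1000, 100, &wg)
	}
	wg.Wait()
	fmt.Println("objects handed out by the shared pool:", pool.served) // 4000
}
```

With a batch size of 100, the 4,000 simulated allocations translate into only 40 locked refills, which is the whole point of putting a cache in front of the shared memory.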

Then let's take a look at the architecture of TCMalloc .

What does the TCMalloc architecture look like?

TCMalloc uses a three-layer logical architecture:

  • ThreadCache : thread cache
  • CentralFreeList (CentralCache): Central Cache
  • PageHeap : heap memory
How do the different layers of the TCMalloc architecture work together?

TCMalloc divides the requested memory objects into two categories according to their size:

  • Small objects <= 256 KB
  • Large objects > 256 KB

Taking small objects as an example, when memory is allocated for a small object:

  • First, allocation is attempted from the thread cache ThreadCache
  • When the thread cache ThreadCache does not have enough memory, memory is obtained from the central cache CentralFreeList corresponding to the SizeClass
  • When the central cache is also insufficient, memory is finally allocated from the heap memory PageHeap

https://cdn.tigerb.cn/20210120132244.png

The logical architecture of the Go memory allocator

The Go memory allocator adopts the same three-layer logical architecture as the TCMalloc memory allocator:

  • mcache : thread cache
  • mcentral : Central cache
  • mheap : heap memory

<p align="center">
<img src="http://cdn.tigerb.cn/20220405133623.png" style="width:60%">
</p>

In the actual implementation, the central cache central is an array of 136 elements of type mcentral .
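As a rough, compilable sketch of that relationship (simplified on purpose: the real structures in Go 1.16's runtime/mheap.go and runtime/mcentral.go carry many more fields, plus cache-line padding around each entry):

```go
package main

// Simplified sketch only: the constant and field names mirror the runtime's
// naming, but the real definitions contain locks, span sets, and padding.
const (
	numSizeClasses = 68                  // number of size classes in Go 1.16
	numSpanClasses = numSizeClasses << 1 // each size class has a scan and a noscan variant
)

// mcentral stands in for the per-span-class central cache.
type mcentral struct {
	// partial / full span sets omitted
}

// mheap stands in for the heap; it owns one central cache entry per span class.
type mheap struct {
	central [numSpanClasses]mcentral // 68 << 1 = 136 entries
	// many other fields omitted
}

func main() {
	var h mheap
	println("central cache entries:", len(h.central)) // prints 136
}
```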

In addition, special attention should be paid to this: mcache is held by the logical processor p , not by the real system thread m . (This design is very interesting; a follow-up article will explain it.)

We update the architecture diagram as follows:

http://cdn.tigerb.cn/20220405224809.png

The "Go Memory Allocator" divides the requested memory objects into three categories by size:

  • Micro Object 0 < Micro Object < 16B
  • Small Object 16B <= Small Object <= 32KB
  • Large Object 32KB < Large Object
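As a hedged illustration of this three-way split, the sketch below uses the thresholds from the list above; classify is a hypothetical helper, since the runtime performs this branching inside its allocation routine rather than exposing such a function.

```go
package main

import "fmt"

// Size thresholds as listed above. (In the real runtime the micro/tiny path
// additionally requires the object to contain no pointers, a detail this
// sketch leaves out.)
const (
	microLimit = 16        // bytes: below this => micro object
	smallLimit = 32 * 1024 // bytes: up to this => small object, beyond => large object
)

// classify is a hypothetical helper mirroring the three categories.
func classify(size int) string {
	switch {
	case size < microLimit:
		return "micro object"
	case size <= smallLimit:
		return "small object"
	default:
		return "large object"
	}
}

func main() {
	for _, size := range []int{8, 1024, 64 * 1024} {
		fmt.Printf("%6d bytes -> %s\n", size, classify(size))
	}
}
```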

To clearly see how these three layers cooperate, take allocating a small object on the heap as an example (a toy simulation follows the figure below):

  • First, allocation is attempted from the thread cache mcache
  • When mcache does not have enough memory, allocation falls through to the central cache central
  • Finally, a piece of memory is allocated directly from the heap mheap

<p align="center">
<img src="http://cdn.tigerb.cn/20220405224348.png" style="width:80%">
</p>
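Below is a toy simulation of that fallback chain. The types mcacheSim, mcentralSim, and mheapSim are invented names and do not reflect the real runtime structures; the only thing the sketch preserves is the order in which the three layers are consulted.

```go
package main

import "fmt"

// mheapSim stands in for the heap: the last resort that carves out new pages.
type mheapSim struct{}

func (h *mheapSim) alloc(pages int) string {
	return fmt.Sprintf("span of %d page(s) carved out of the heap", pages)
}

// mcentralSim stands in for the central cache: it grows from the heap when empty.
type mcentralSim struct {
	heap      *mheapSim
	freeSpans []string
}

func (c *mcentralSim) cacheSpan() string {
	if len(c.freeSpans) == 0 {
		// Central cache is empty: grow by asking the heap for more pages.
		c.freeSpans = append(c.freeSpans, c.heap.alloc(1))
	}
	span := c.freeSpans[0]
	c.freeSpans = c.freeSpans[1:]
	return span
}

// mcacheSim stands in for the thread cache: it refills from the central cache.
type mcacheSim struct {
	central *mcentralSim
	span    string
}

func (m *mcacheSim) alloc() string {
	if m.span == "" {
		// Thread cache has no usable span: refill from the central cache.
		m.span = m.central.cacheSpan()
	}
	return "object allocated from " + m.span
}

func main() {
	heap := &mheapSim{}
	central := &mcentralSim{heap: heap}
	cache := &mcacheSim{central: central}
	fmt.Println(cache.alloc()) // walks mcache -> mcentral -> mheap on first use
}
```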

Architecture Summary

From the above analysis, we can see that the design of the Go memory allocator is basically the same, in concept and idea, as the open-source TCMalloc memory allocator. The comparison chart is as follows:

http://cdn.tigerb.cn/20220405225026.png

Finally we conclude:

  • The Go memory allocator uses the same three-tier architecture as TCMalloc . Logically:

    • mcache : thread cache
    • mcentral : Central cache
    • mheap : heap memory
  • Thread cache mcache is held by logical processor p , not system thread m

View more content in the "Go Language Easy and Advanced" series

Link http://tigerb.cn/go/#/kernal/


施展TIGERB

// Trying to be the person you want to be.