I. Introduction
OpenAtom OpenHarmony (hereinafter "OpenHarmony") is an open-source project incubated and operated by the OpenAtom Foundation. Its goal is to build an open-source, distributed operating-system framework and platform for all scenarios, promoting the prosperity and development of the Internet-of-Everything industry.
As an all-scenario, fully connected, fully intelligent distributed operating system for a wide range of terminal devices, OpenHarmony integrates the capabilities of those devices to achieve hardware collaboration and resource sharing, providing users with a smooth experience across all scenarios. To adapt to diverse hardware, OpenHarmony provides both the LiteOS and Linux kernels, forms different system types on top of these kernels, and builds a unified set of system capabilities across them.
The OpenHarmony LiteOS-M kernel is a lightweight operating-system kernel built for the IoT field. It provides several mechanisms for inter-task communication (IPC): queues, events, mutexes, and semaphores. What key data structures does each mechanism involve, and how do they work? This article walks through these four kernel objects to explain the data structures behind the kernel IPC mechanisms.
II. Data Structure: Queue
A queue, also known as a message queue, is a data structure commonly used for inter-task communication; it can transfer either message content or message addresses between tasks. The kernel manages each message queue through a queue control block, and manages the control blocks themselves with a doubly linked circular list.
Queue control block: the data block that manages a specific message queue. The control blocks are created by OsQueueInit() during kernel initialization and mounted, one by one, onto the doubly linked circular list g_freeQueueList; at this point each control block is in the OS_QUEUE_UNUSED state. A queue control block records the queue state, queue length, message length, queue ID, head and tail positions, and the lists of tasks waiting to read or write. Based on this information, the kernel manages the message queue and lets tasks complete read and write operations.
typedef struct {
    UINT8 *queue;                                /* Pointer to the queue (message buffer) memory */
    UINT16 queueState;                           /* Queue state: OS_QUEUE_UNUSED or OS_QUEUE_INUSED */
    UINT16 queueLen;                             /* Number of messages the queue can hold */
    UINT16 queueSize;                            /* Size of each message, in bytes */
    UINT16 queueID;                              /* Queue ID */
    UINT16 queueHead;                            /* Index of the queue head */
    UINT16 queueTail;                            /* Index of the queue tail */
    UINT16 readWriteableCnt[OS_READWRITE_LEN];   /* Counts of readable and writable messages */
    LOS_DL_LIST readWriteList[OS_READWRITE_LEN]; /* Lists of tasks waiting to read or write */
    LOS_DL_LIST memList;                         /* Memory list */
} LosQueueCB;
After initialization, the queue control block is organized as follows:
Create queue: the queue stores the actual message content. A task calls LOS_QueueCreate() to create a queue; the kernel allocates memory for the queue according to the queue length and message size given by the input parameters, and takes a control block from g_freeQueueList to manage it. The allocated control block's state becomes OS_QUEUE_INUSED. Control blocks are always allocated from the head node; as shown in the figure below, control block 0 is allocated first to manage the newly created queue.
Write queue: the kernel supports two ways to write to a queue: LOS_QueueWrite() writes at the tail and LOS_QueueWriteHead() writes at the head:
Read queue: there is only one way to read a queue: LOS_QueueRead() reads from the head of the queue; after a read, the head points to the next node.
Delete queue: when a queue is no longer needed, LOS_QueueDelete() deletes it; the control block is returned to g_freeQueueList and the queue memory is freed:
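The head/tail bookkeeping described above can be sketched in plain C. The MiniQueue below is a hypothetical simplification of LosQueueCB, a fixed-size ring buffer with the same head, tail, and readable/writable counters; it is not the kernel implementation. The real LOS_QueueWrite()/LOS_QueueRead() additionally block tasks on the readWriteList lists when the queue is full or empty.

```c
#include <stdint.h>
#include <string.h>

#define QUEUE_LEN 4  /* number of message slots (hypothetical size) */
#define MSG_SIZE  8  /* bytes per message slot (hypothetical size)  */

/* Simplified mirror of the head/tail bookkeeping in LosQueueCB:
 * head and tail are slot indices into a contiguous buffer and
 * wrap around modulo the queue length. */
typedef struct {
    uint8_t  buf[QUEUE_LEN][MSG_SIZE];
    uint16_t head;     /* next slot to read  (queueHead) */
    uint16_t tail;     /* next slot to write (queueTail) */
    uint16_t readable; /* used slots (readWriteableCnt, read side)  */
    uint16_t writable; /* free slots (readWriteableCnt, write side) */
} MiniQueue;

void MiniQueueInit(MiniQueue *q)
{
    memset(q, 0, sizeof(*q));
    q->writable = QUEUE_LEN; /* empty queue: every slot is writable */
}

/* Write at the tail, as LOS_QueueWrite() does. Returns 0 on success. */
int MiniQueueWrite(MiniQueue *q, const void *msg)
{
    if (q->writable == 0) {
        return -1; /* full: the real kernel would block the task */
    }
    memcpy(q->buf[q->tail], msg, MSG_SIZE);
    q->tail = (q->tail + 1) % QUEUE_LEN; /* tail advances and wraps */
    q->writable--;
    q->readable++;
    return 0;
}

/* Read from the head, as LOS_QueueRead() does; afterwards the head
 * points to the next node. Returns 0 on success. */
int MiniQueueRead(MiniQueue *q, void *msg)
{
    if (q->readable == 0) {
        return -1; /* empty: the real kernel would block the task */
    }
    memcpy(msg, q->buf[q->head], MSG_SIZE);
    q->head = (q->head + 1) % QUEUE_LEN; /* head advances and wraps */
    q->readable--;
    q->writable++;
    return 0;
}
```

Because head and tail wrap modulo the queue length, the buffer can be reused indefinitely without moving any data, which is why the control block only needs two indices and two counters.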
III. Data Structure: Events
Events are used for synchronization between tasks; event communication carries no data, only the occurrence of the events themselves. The event control block is allocated by the task, and the kernel is responsible for maintaining it. Event control block: records events and manages the tasks waiting to read them. uwEventID is 32 bits wide and represents up to 31 events (bit 25 is reserved). stEventList is a doubly linked circular list: when a task reads an event that has not yet occurred, the task is mounted on this list; when the event occurs, the system wakes up the waiting task and removes it from the list.
typedef struct tagEvent {
    UINT32 uwEventID;        /* Event mask: one bit per event (bit 25 reserved) */
    LOS_DL_LIST stEventList; /* List of tasks waiting for the events */
} EVENT_CB_S, *PEVENT_CB_S;
Event initialization: the event control block is created by the task and then initialized by calling LOS_EventInit(). The initialized state is as follows:
Event read: when the requested event has not occurred, the read operation triggers a schedule, suspending the current task and adding it to the stEventList list. In the figure below, event 1 has occurred; task Task1 reads event 2, but event 2 has not occurred, so Task1 is suspended.
Event write: when event 2 occurs, task Task2 writes event 2 into uwEventID. Task Task1 is then scheduled and reads the event successfully; the bit corresponding to event 2 is cleared to 0 (clearing is optional), and Task1 is removed from the stEventList list.
Event deletion: since the event control block is created by the task, the kernel is not responsible for deleting it, but a task can call LOS_EventClear() to clear the events.
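The read and write steps above boil down to bitwise operations on uwEventID. The following is a hypothetical plain-C sketch, not the kernel code: MiniEvent mirrors only the mask field of EVENT_CB_S, the read shown here uses "any bit" (OR) semantics, and the clear flag stands in for the LOS_WAITMODE_CLR option of the real LOS_EventRead(). The real kernel would also suspend the reader on stEventList when no requested bit is set.

```c
#include <stdint.h>

#define EVENT_BIT(n) (1u << (n)) /* bit 25 is reserved in the real kernel */

/* Simplified event control block: just the 32-bit event mask. */
typedef struct {
    uint32_t uwEventID; /* one bit per event, as in EVENT_CB_S */
} MiniEvent;

/* Write: set the event bits, as LOS_EventWrite() does. */
void MiniEventWrite(MiniEvent *e, uint32_t events)
{
    e->uwEventID |= events;
}

/* Read with OR semantics: succeed if any requested bit is set.
 * When clear is nonzero the matched bits are cleared to 0,
 * mirroring the optional clear-on-read behavior. Returns the
 * matched bits; 0 means the caller would block in the kernel. */
uint32_t MiniEventRead(MiniEvent *e, uint32_t wanted, int clear)
{
    uint32_t hit = e->uwEventID & wanted;
    if (hit != 0 && clear) {
        e->uwEventID &= ~hit; /* consume the bits that were read */
    }
    return hit;
}
```

This matches the figure's scenario: writing event 2 sets its bit, a pending read of event 2 then succeeds, and the bit is cleared so the next read would wait again.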
IV. Data Structure: Mutex
A mutex (mutual-exclusion lock), also called a mutual-exclusion semaphore, is a special binary semaphore used for exclusive access to a shared resource. At any moment a mutex is either unlocked or locked. When a task holds the mutex, the mutex is locked and the task owns it; when the task releases it, the mutex is unlocked and the task loses ownership. While one task holds the mutex, no other task can release or hold it. Mutex control block: the mutex control-block resources are created and maintained by the kernel. During kernel initialization, OsMuxInit() is called to initialize the lock resources. Tasks waiting for a mutex are mounted on its muxList.
typedef struct {
    UINT8 muxStat;       /**< State OS_MUX_UNUSED, OS_MUX_USED */
    UINT16 muxCount;     /**< Times of locking a mutex */
    UINT32 muxID;        /**< Handle ID */
    LOS_DL_LIST muxList; /**< Mutex linked list */
    LosTaskCB *owner;    /**< The current task that is locking the mutex */
    UINT16 priority;     /**< Priority of the task that is locking the mutex */
} LosMuxCB;
During initialization, the kernel requests LOSCFG_BASE_IPC_MUX_LIMIT lock resources and mounts each resource block onto the doubly linked circular list g_unusedMuxList. The global variable g_allMux points to the base address of the lock-resource memory, so the control block for a given ID can be found quickly by adding the ID to that base address:
Mutex creation: the task calls LOS_MuxCreate() to create a mutex, and the kernel allocates a lock resource to the task from the head of g_unusedMuxList.
Mutex request: the task calls LOS_MuxPend() to request the mutex. If the lock is held by another task, the requesting task queues on muxList.
Mutex release: the task calls LOS_MuxPost() to release the mutex. If other tasks are queued, a schedule is triggered to hand the lock to a queued task.
Mutex deletion: the task calls LOS_MuxDelete() to delete the mutex. On success, the lock resource is returned to g_unusedMuxList.
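The owner/count fields and the "base address + ID" lookup can be sketched as follows. This MiniMux is a hypothetical simplification of LosMuxCB (task IDs stand in for LosTaskCB pointers, and a failed pend returns -1 instead of blocking the task on muxList); it is not the kernel implementation, but it shows why muxCount allows the same owner to lock recursively while other tasks are refused.

```c
#include <stddef.h>

#define MUX_LIMIT 8 /* stands in for LOSCFG_BASE_IPC_MUX_LIMIT */

/* Simplified mutex control block: state is implied by count. */
typedef struct {
    int count; /* muxCount: nesting depth, 0 means unlocked */
    int owner; /* owner task ID, meaningful only when count > 0 */
} MiniMux;

/* Contiguous pool, as the kernel allocates at init time. */
static MiniMux g_allMux[MUX_LIMIT];

/* ID-to-block lookup: base address plus ID, as described above. */
MiniMux *GetMux(int muxID)
{
    return (muxID >= 0 && muxID < MUX_LIMIT) ? &g_allMux[muxID] : NULL;
}

/* Pend: acquire if free, re-enter if already the owner; otherwise
 * fail (the real LOS_MuxPend() would queue the task on muxList). */
int MiniMuxPend(MiniMux *m, int taskID)
{
    if (m->count == 0) {
        m->owner = taskID; /* take ownership */
        m->count = 1;
        return 0;
    }
    if (m->owner == taskID) {
        m->count++;        /* recursive lock by the owner */
        return 0;
    }
    return -1;             /* held by another task */
}

/* Post: only the owner may release; ownership ends at count 0. */
int MiniMuxPost(MiniMux *m, int taskID)
{
    if (m->count == 0 || m->owner != taskID) {
        return -1; /* not locked, or caller is not the owner */
    }
    m->count--;
    return 0;
}
```

The recursive count means a task that pends twice must also post twice before another task can take the lock, matching the muxCount field in LosMuxCB.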
V. Data Structure: Semaphore
A semaphore is a mechanism for synchronization between tasks or for mutually exclusive access to critical resources, often used to coordinate a group of competing tasks. In a multitasking system, tasks need synchronization and mutual exclusion to protect critical resources, and semaphores provide that support. The semaphore's count value usually corresponds to the number of available resources, that is, how many units of the resource can still be occupied. Semaphore control block: the semaphore control-block resources are created and maintained by the kernel. During kernel initialization, OsSemInit() is called to initialize the semaphore resources: it requests LOSCFG_BASE_IPC_SEM_LIMIT semaphore control blocks, g_allSem points to their base address, and the created control blocks are mounted onto the free list g_unusedSemList. Tasks requesting a semaphore queue on the control block's semList, and semCount indicates how many resources can still be accessed.
typedef struct {
    UINT16 semStat;      /**< Semaphore state */
    UINT16 semCount;     /**< Number of available semaphores */
    UINT16 maxSemCount;  /**< Max number of available semaphores */
    UINT16 semID;        /**< Semaphore control structure ID */
    LOS_DL_LIST semList; /**< Queue of tasks waiting on the semaphore */
} LosSemCB;
Semaphore creation: the task calls LOS_SemCreate() to create a semaphore and specifies the maximum number of tasks that may access the resource at the same time. The kernel allocates a semaphore control block from the head of g_unusedSemList and initializes it.
Semaphore request: the task calls LOS_SemPend() to request the semaphore. If a resource is available the request succeeds; otherwise the task queues on semList.
Semaphore release: the task calls LOS_SemPost() to release the semaphore. If other tasks are queued, a schedule is triggered so a queued task can access the resource.
Semaphore deletion: the task calls LOS_SemDelete() to delete the semaphore. On success, the control block is returned to the head of g_unusedSemList.
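The counting behavior above can be sketched with just the semCount and maxSemCount fields of LosSemCB. This MiniSem is a hypothetical simplification, not the kernel code: a failed pend returns -1 where the real LOS_SemPend() would queue the task on semList and block, and an over-release returns -1 where the real kernel reports an overflow error.

```c
#include <stdint.h>

/* Simplified semaphore control block: just the two counters. */
typedef struct {
    uint16_t semCount;    /* currently available resources */
    uint16_t maxSemCount; /* upper bound fixed at creation */
} MiniSem;

/* Create with the maximum number of concurrent accessors,
 * as LOS_SemCreate() does. */
void MiniSemCreate(MiniSem *s, uint16_t max)
{
    s->semCount = max;
    s->maxSemCount = max;
}

/* Pend: take one resource if any is available. A return of -1
 * is where the real kernel would block the task on semList. */
int MiniSemPend(MiniSem *s)
{
    if (s->semCount == 0) {
        return -1;
    }
    s->semCount--;
    return 0;
}

/* Post: return one resource, never exceeding maxSemCount. */
int MiniSemPost(MiniSem *s)
{
    if (s->semCount >= s->maxSemCount) {
        return -1; /* over-release: the real kernel reports overflow */
    }
    s->semCount++;
    return 0;
}
```

With max set to 1 this degenerates into a binary semaphore; unlike the mutex sketch above, it has no owner, so any task may post, which is the key structural difference between LosSemCB and LosMuxCB.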
VI. Summary
This article analyzed the data structures of the kernel IPC mechanisms from four aspects: queues, events, mutexes, and semaphores. I hope the explanation above gives you an overall understanding of the IPC mechanism. On the OpenHarmony kernel, I have also covered the queue algorithms and the event mechanism of the LiteOS-M kernel; interested readers can read: "OpenHarmony - Detailed Algorithm of Kernel Object Queue (Part 1)", "OpenHarmony - Detailed Explanation of the Kernel Object Queue Algorithm (Part 2)", and "OpenHarmony - Detailed Explanation of the Source Code of Kernel Object Events". As the proverb goes, knowledge gained from books alone is shallow; true understanding comes only from practice. All knowledge becomes ability only when it is put to use. May all developers who love OpenHarmony learn true knowledge and grasp its true meaning in their development work, keep tempering and growing their skills, and keep moving forward for the prosperity and development of the OpenHarmony ecosystem!