
This article is shared by "virwu", an engineer on the WeChat development team.

1. Introduction

Recently, WeChat mini games added support for one-click live streaming to WeChat Channels (video accounts): upgrade WeChat to the latest version, open a Tencent mini game (such as Jump, Happy Landlord, etc.), and you will see a live broadcast button in the menu at the upper right corner. One tap, and you become a live-streaming anchor of the game (as shown in the figure below).

However, for a series of reasons including performance and security, WeChat mini games run in a separate process, and the modules related to Channels live streaming are not initialized in that environment. This means the mini game's audio and video data must be transmitted across processes to the main process for streaming, which presented us with a series of challenges in implementing mini game live broadcast.

(This article was published simultaneously at: http://www.52im.net/thread-3594-1-1.html)

2. Series of articles

This article is the fifth in a series:

"Live Broadcast System Chat Technology (1): The Road to Real-time Push Technology Practice of Million Online's Meipai Live Barrage System"
"Live broadcast system chat technology (2): Ali e-commerce IM messaging platform, technical practice in group chat and live broadcast scenarios"
"Live System Chat Technology (3): The Evolution of 15 Million Online Message Architecture in WeChat Live Chat Room Single Room"
"Live broadcast system chat technology (4): Baidu live broadcast massive user real-time messaging system architecture evolution practice"
"Live Broadcast System Chat Technology (5): Cross-process Rendering and Push Streaming Practice of WeChat Mini Game Live Broadcast on Android" (* This article)

3. Video capture and push streaming

3.1 Capture by screen recording?
Live-broadcasting a mini game is essentially showing the content of the anchor's phone screen to the audience, so the natural first thought is to use the system screen recording interface, MediaProjection, to capture the video data.

This solution has these advantages:

1) The system interface is simple to implement, with guaranteed compatibility and stability;
2) It can later be extended into a general-purpose screen-recording live broadcast;
3) Its impact on game performance is small; in our tests the frame rate drop was within 10%;
4) Data processing and streaming can be done directly in the main process, with no need to deal with the mini game's cross-process problem.
But in the end this plan was rejected, mainly for the following reasons:

1) The system authorization pop-up window has to be shown;
2) The stream has to be carefully paused whenever the anchor switches away from the mini game, otherwise other screens on the anchor's phone could be recorded, posing privacy risks;
3) Most critically: the product design requires a comment widget to be displayed over the mini game (as shown in the figure below) so the anchor can read and respond to live comments. Screen recording would let the audience see this widget too, hurting the viewing experience and exposing data that only the anchor should see.
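
For comparison, here is a minimal sketch of what this rejected MediaProjection approach would look like. The helper class and its names are illustrative assumptions, not WeChat's actual code:

    import android.app.Activity
    import android.content.Context
    import android.content.Intent
    import android.hardware.display.DisplayManager
    import android.media.projection.MediaProjectionManager
    import android.view.Surface

    // Hypothetical helper for the rejected approach; it mirrors the whole screen,
    // which is exactly why the comment widget would leak into the stream.
    class ScreenCaptureHelper(private val activity: Activity) {
        private val manager = activity.getSystemService(
            Context.MEDIA_PROJECTION_SERVICE) as MediaProjectionManager

        fun requestCapture(requestCode: Int) {
            // Shows the system authorization dialog (drawback 1 above).
            activity.startActivityForResult(manager.createScreenCaptureIntent(), requestCode)
        }

        // Call from onActivityResult with the granted resultCode/data.
        fun startCapture(resultCode: Int, data: Intent, encoderSurface: Surface,
                         width: Int, height: Int, dpi: Int) {
            val projection = manager.getMediaProjection(resultCode, data)
            // Mirrors the entire screen into the encoder's input Surface.
            projection.createVirtualDisplay("live-capture", width, height, dpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR, encoderSurface, null, null)
        }
    }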

Then, since the rendering of the mini game is entirely under our control, could we instead transmit the mini game's rendered content to the main process for streaming, giving a better live broadcast experience?

3.2 Mini game rendering architecture
To better describe the scheme we adopted, here is first a brief introduction to the mini game's rendering architecture:

As you can see, the left half of the figure is the mini game process running in the foreground. MagicBrush is the mini game's rendering engine: it receives rendering instruction calls from the mini game code and renders the frames onto the Surface provided by the on-screen SurfaceView. The main process, in the background, plays no part in any of this.

3.3 How mini game screen recording works
Mini games already supported recording of game content which, similar in principle to live broadcast, requires obtaining the current screen content of the mini game.

When screen recording is enabled, the mini game switches to the following rendering mode:

As shown, the output target of MagicBrush is no longer the on-screen SurfaceView but a SurfaceTexture created by a Renderer.

Let us first introduce the role of the Renderer:

A Renderer is a standalone rendering module representing an independent GL environment. It creates a SurfaceTexture as input; upon receiving the SurfaceTexture's onFrameAvailable callback, it calls updateTexImage to turn the image data into a texture of type GL_TEXTURE_EXTERNAL_OES for the subsequent rendering steps, and finally outputs the rendering result to another Surface.
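
A minimal sketch of such a Renderer follows, assuming EGL setup for the output Surface is done elsewhere on the Renderer's GL thread; the class and method names are illustrative, not the actual implementation:

    import android.graphics.SurfaceTexture
    import android.opengl.GLES11Ext
    import android.opengl.GLES20
    import android.view.Surface

    class SimpleRenderer(private val outputSurface: Surface)
        : SurfaceTexture.OnFrameAvailableListener {

        // OES texture that backs the input SurfaceTexture.
        private val oesTextureId: Int = IntArray(1).also {
            GLES20.glGenTextures(1, it, 0)
            GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, it[0])
        }[0]

        // Producers (e.g. MagicBrush) render into this as their output target.
        val inputTexture: SurfaceTexture = SurfaceTexture(oesTextureId).apply {
            setOnFrameAvailableListener(this@SimpleRenderer)
        }

        override fun onFrameAvailable(st: SurfaceTexture) {
            // Must run on the GL thread that owns this Renderer's EGL context.
            st.updateTexImage()              // latch the frame as GL_TEXTURE_EXTERNAL_OES
            drawFullscreenQuad(oesTextureId) // draw the texture (shader code omitted)
            // eglSwapBuffers(...) then publishes the result to outputSurface.
        }

        private fun drawFullscreenQuad(textureId: Int) { /* omitted */ }
    }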

The following explains the process in the figure step by step:

1) MagicBrush receives rendering instruction calls from the mini game code and renders the mini game content to the SurfaceTexture created by the first Renderer;

2) This Renderer then does two things:

2.1) it re-renders the obtained mini game frame texture onto the on-screen Surface;
2.2) it provides the texture ID to the second Renderer (the two Renderers share a GLContext, which is how the texture is shared; see the sketch after this list).
3) The second Renderer renders the texture provided by the first Renderer onto the input SurfaceTexture provided by the mp4 encoder, and the encoder finally produces the mp4 screen recording file.
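
Step 2.2 relies on EGL context sharing. A minimal sketch, with error handling omitted, of how the second Renderer's context can be created so that texture IDs from the first remain valid in it:

    import android.opengl.EGL14
    import android.opengl.EGLConfig
    import android.opengl.EGLContext
    import android.opengl.EGLDisplay

    fun createSharedContext(display: EGLDisplay, config: EGLConfig,
                            shareWith: EGLContext): EGLContext {
        val attribs = intArrayOf(EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE)
        // Passing an existing context instead of EGL_NO_CONTEXT puts both contexts
        // in one share group, so textures created in one are usable in the other.
        return EGL14.eglCreateContext(display, config, shareWith, attribs, 0)
    }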

3.4 How do we adapt the screen recording scheme?
In the screen recording scheme, one Renderer is responsible for putting the game content on screen and the other renders the same texture to the encoder to record it. Live broadcast is really much the same, so can we just replace the encoder with the live streaming module?

Almost, but a key link is still missing: the streaming module runs in the main process, so we need to move the image data across processes! How?

Speaking of crossing processes, the first things that come to mind are probably traditional IPC mechanisms such as Binder, Socket, and shared memory. But think more carefully: the system-provided SurfaceView is a very special View component. It does not take part in drawing through the traditional View tree; instead its content is composited onto the screen directly by the system's SurfaceFlinger, and SurfaceFlinger runs in a system process. So whatever is drawn onto the Surface provided by a SurfaceView must already be crossing processes, and Surface's cross-process mechanism is very simple: it implements the Parcelable interface, which means we can use Binder to pass a Surface object across processes directly.
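
A minimal sketch of passing a Surface over Binder; the ISurfaceChannel interface and the MagicBrush call are hypothetical names, not the actual WeChat code:

    // ISurfaceChannel.aidl (hypothetical):
    //   interface ISurfaceChannel { void provideSurface(in Surface surface); }

    import android.graphics.SurfaceTexture
    import android.view.Surface

    // Main process: wrap the consumer SurfaceTexture in a Surface and ship it.
    fun publishSurface(channel: ISurfaceChannel, inputTexture: SurfaceTexture) {
        val producerSurface = Surface(inputTexture)  // Surface is Parcelable
        channel.provideSurface(producerSurface)      // marshalled by Binder
    }

    // Mini game process: Binder stub that hands the Surface to the render engine.
    class SurfaceChannelImpl(private val engine: MagicBrush) : ISurfaceChannel.Stub() {
        override fun provideSurface(surface: Surface) {
            engine.setOutputSurface(surface)         // hypothetical MagicBrush API
        }
    }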

So we have the following preliminary plan:

As you can see, step 3 no longer renders to the mp4 encoder but to the Surface passed across processes from the main process. That Surface wraps a SurfaceTexture created by a Renderer in the main process. The mini game process now acts as the producer and renders frames onto this Surface. When a frame is finished, the main process's SurfaceTexture receives the onFrameAvailable callback indicating that the image data is ready, and obtains the corresponding texture via updateTexImage. Because the live streaming module only accepts GL_TEXTURE_2D textures, the main process Renderer then converts the GL_TEXTURE_EXTERNAL_OES texture to a GL_TEXTURE_2D texture and hands it to the live streaming encoder, completing the streaming pipeline.
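
A minimal sketch of this OES-to-2D conversion, assuming standard helpers for shader compilation and quad drawing: sample the external texture in a fragment shader and render into a framebuffer whose color attachment is a GL_TEXTURE_2D:

    import android.opengl.GLES20

    // Fragment shader that samples the external (OES) texture.
    const val OES_COPY_FRAGMENT_SHADER = """
        #extension GL_OES_EGL_image_external : require
        precision mediump float;
        varying vec2 vTexCoord;
        uniform samplerExternalOES uTexture;  // texture latched by updateTexImage
        void main() { gl_FragColor = texture2D(uTexture, vTexCoord); }
    """

    // Draw the OES texture into an FBO backed by a GL_TEXTURE_2D; afterwards
    // texture2dId holds the frame and can be handed to the streaming SDK.
    fun convertOesTo2d(fboId: Int, texture2dId: Int, drawQuad: () -> Unit) {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId)
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, texture2dId, 0)
        drawQuad()  // fullscreen quad using the shader above
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0)
    }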

After this rework, the scheme successfully renders the mini game on screen while passing frames to the main process for streaming. But is it really optimal?

On reflection, the scheme above uses too many Renderers. There are two in the mini game process, one rendering on screen and one rendering to the cross-process Surface, plus one in the main process to convert the texture and feed the streaming module. If screen recording is to be supported at the same time, yet another Renderer would have to be started in the mini game process to render to the mp4 encoder. Too many Renderers mean too much extra rendering overhead, which hurts the running performance of the mini game.

3.5 Cross-process rendering scheme
Looking across the whole pipeline, only the main process's Renderer is actually necessary; the extra Renderers in the mini game process exist only to satisfy on-screen rendering and cross-process transmission at the same time. So let's think outside the box: since a Surface itself is not constrained by process boundaries, why not simply pass the mini game process's on-screen Surface to the main process and render on screen from there!

In the end we cut out the mini game process's two redundant Renderers entirely: MagicBrush renders directly onto the Surface passed in across processes, while the main process's Renderer, in addition to converting the texture type, also renders the texture back onto the mini game process's on-screen Surface (likewise passed across processes), completing the on-screen display.

The number of Renderers required thus dropped from three to the single necessary one, improving performance while making the architecture clearer.

When screen recording needs to be supported later on, only a slight change is needed: pass the mp4 encoder's input SurfaceTexture across processes to the main process, and add a Renderer there to render the texture into it (as shown in the figure below).

3.6 Compatibility and performance
At this point one can't help but worry a little: does this scheme of passing and rendering a Surface across processes have compatibility problems?

In fact, although it is uncommon, the official documentation states that drawing can happen from another process:

SurfaceView combines a surface and a view. SurfaceView's view components are composited by SurfaceFlinger (and not the app), enabling rendering from a separate thread/process and isolation from app UI rendering.

Moreover, Chrome, as well as the system WebView since Android O, both use cross-process rendering schemes.

Our compatibility tests covered the mainstream system versions and device models from Android 5.1 onward. Apart from a black-screen problem with cross-process rendering on Android 5.x models, all devices could render on screen and push the stream normally.

In terms of performance, we tested with the WebGL Aquarium demo. The impact on the average frame rate is about 15%, and the main process's CPU usage rises because of rendering and streaming. Curiously, the mini game process's CPU overhead actually drops somewhat. The reason for this drop has not been confirmed yet; we suspect it is related to moving on-screen rendering to the main process, and we cannot rule out an artifact of the measurement method.

3.7 Summary
To avoid recording the anchor-side comment widget, we started from the mini game's rendering pipeline: leveraging Surface's ability to render and transfer images across processes, we moved the mini game's on-screen rendering into the main process and generated the texture for streaming along the way, with compatibility and performance both meeting the requirements.

4. Audio collection and streaming

4.1 Scheme selection
For audio capture, we noticed the AudioPlaybackCapture API offered by Android 10 and above, which allows capturing system audio within certain limits. Some conclusions from our investigation at the time are as follows.

Capturing side (conditions required to capture):

1) Android 10 (API 29) and above;
2) The RECORD_AUDIO permission has been obtained;
3) MediaProjection permission is requested via MediaProjectionManager.createScreenCaptureIntent() (shared with MediaProjection screen recording);
4) The audio Usage types to capture are included/excluded via AudioPlaybackCaptureConfiguration.addMatchingUsage()/AudioPlaybackCaptureConfiguration.excludeUsage();
5) The UIDs of capturable apps are included/excluded via AudioPlaybackCaptureConfiguration.addMatchingUid()/AudioPlaybackCaptureConfiguration.excludeUid().
Captured side (conditions to be capturable):

1) The Usage of the player's AudioAttributes is USAGE_UNKNOWN, USAGE_GAME, or USAGE_MEDIA (most players currently use default values and can be captured);
2) The application's capture policy is AudioAttributes#ALLOW_CAPTURE_BY_ALL. It can be set in three ways, with the strictest setting taking effect (WeChat currently sets none of them, so capture is allowed by default):
3) via android:allowAudioPlaybackCapture="true" in manifest.xml, which defaults to true for applications targeting API 29 and above, and false otherwise;
4) at runtime via the setAllowedCapturePolicy method on API 29 and above;
5) per player via AudioAttributes on API 29 and above.
In short, Android 10 and above can use AudioPlaybackCapture for audio capture, but since requiring Android 10 would exclude too many devices, we finally chose to collect and mix all the audio played by the game ourselves.
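
For reference, a minimal sketch of the Android 10+ capture setup we evaluated; the mediaProjection comes from the same authorization flow as screen recording, and RECORD_AUDIO must already be granted:

    import android.media.AudioAttributes
    import android.media.AudioFormat
    import android.media.AudioPlaybackCaptureConfiguration
    import android.media.AudioRecord
    import android.media.projection.MediaProjection
    import androidx.annotation.RequiresApi

    @RequiresApi(29)
    fun buildPlaybackCaptureRecord(mediaProjection: MediaProjection): AudioRecord {
        val config = AudioPlaybackCaptureConfiguration.Builder(mediaProjection)
            .addMatchingUsage(AudioAttributes.USAGE_GAME)   // capture game audio
            .addMatchingUsage(AudioAttributes.USAGE_MEDIA)
            .build()
        val format = AudioFormat.Builder()
            .setSampleRate(44100)
            .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
            .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
            .build()
        return AudioRecord.Builder()
            .setAudioFormat(format)
            .setAudioPlaybackCaptureConfig(config)          // API 29+
            .build()
    }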

4.2 Cross-process audio data transmission
Now the old problem lies before us again: the mini game's mixed audio data is in the mini game process, and we need to transmit it to the main process for streaming.

This differs from typical IPC used for method invocation: here we need to transmit large data blocks frequently (every 40 milliseconds), with roughly 8K of data per 16 milliseconds of audio.

At the same time, due to the nature of live broadcast, the latency of this cross-process transmission needs to be as low as possible, otherwise audio and video would fall out of sync.

To meet these goals we tested several IPC solutions: Binder, LocalSocket, MMKV, SharedMemory, and Pipe. In the test environment we simulated the real audio transmission flow: in the mini game process, a serialized data object was sent every 16 milliseconds, with object sizes of 3k/4M/10M and a timestamp stored in the object before sending; the moment the main process received the data and deserialized it back into a data object was taken as the end time, from which the transmission latency was calculated.

Finally got the following results:

Note: XIPCInvoker (Binder) and MMKV took too long when transferring large amounts of data and are not shown in the results.

The analysis of each scheme follows (the stall rate is the proportion of transfers whose latency exceeds both twice the average latency and 10 milliseconds):

LocalSocket shows excellent transmission latency in every case. The main reason for the difference: with the other schemes, after the raw binary data arrives in the main process it still needs an extra copy to be deserialized into a data object, whereas with LocalSocket we can use ObjectStream and Serializable to copy while streaming, saving the one-shot copy after reception (the other schemes could also be designed to stream in chunks and copy concurrently, but that carries implementation cost and is not as stable and convenient as ObjectStream).
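
A simplified sketch of this LocalSocket transport; the socket name and AudioFrame type are illustrative (in production the name is randomized and the connection authenticated, see 4.3):

    import android.net.LocalServerSocket
    import android.net.LocalSocket
    import android.net.LocalSocketAddress
    import java.io.ObjectInputStream
    import java.io.ObjectOutputStream
    import java.io.Serializable

    class AudioFrame(val timestampMs: Long, val pcm: ByteArray) : Serializable

    // Main process: accept the connection, then deserialize frames as they stream in.
    fun receiveFrames(onFrame: (AudioFrame) -> Unit) {
        val server = LocalServerSocket("wx_game_audio")
        val socket = server.accept()
        ObjectInputStream(socket.inputStream).use { input ->
            while (true) onFrame(input.readObject() as AudioFrame) // copies while streaming
        }
    }

    // Mini game process: connect once, then write a frame roughly every 16 ms.
    fun connectSender(): ObjectOutputStream {
        val socket = LocalSocket()
        socket.connect(LocalSocketAddress("wx_game_audio"))
        return ObjectOutputStream(socket.outputStream)
    }

    fun sendFrame(out: ObjectOutputStream, frame: AudioFrame) {
        out.writeObject(frame)
        out.flush()
        out.reset()  // drop the stream's back-reference cache between frames
    }
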
We also ran compatibility and performance tests on LocalSocket: no transmission failures or disconnections occurred. Only on the Samsung S6 did the average latency exceed 10 milliseconds; on other models it was around 1 millisecond, which meets our expectations.

4.3 Security of LocalSocket
The cross-process security of the commonly used Binder is guaranteed by an authentication mechanism implemented by the system. LocalSocket, being a wrapper around Unix domain sockets, requires us to consider its security ourselves.

The paper "The Misuse of Android Unix Domain Sockets and Security Implications" analyzes the security risks brought by the use of LocalSocket in Android in more detail.

PS: Download the original paper attachment (please download from section 4.3 of this link: http://www.52im.net/thread-3594-1-1.html)

To summarize the paper: because LocalSocket itself lacks an authentication mechanism, any application can connect to it, either to intercept data or to send illegal data to the receiver and trigger exceptions.

Given this, there are two defenses we can apply (a sketch follows the list):

1) Randomize the LocalSocket name, for example by using the md5 of the current live mini game's AppId and the user's uin as the socket name, so that an attacker cannot establish a connection by guessing a fixed name or brute-forcing it;
2) Introduce an authentication handshake: after a connection is established, exchange specific random information to verify the identity of the peer before starting the real data transmission.
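
A minimal sketch of these two defenses, with illustrative names and token format rather than the production protocol:

    import java.io.InputStream
    import java.security.MessageDigest

    // Defense 1: derive the socket name from the AppId, the user uin and a nonce,
    // so an attacker cannot guess or enumerate it.
    fun socketName(appId: String, uin: Long, nonce: String): String {
        val md5 = MessageDigest.getInstance("MD5")
            .digest("$appId:$uin:$nonce".toByteArray())
        return "wx_audio_" + md5.joinToString("") { "%02x".format(it) }
    }

    // Defense 2: after accept(), require the peer to send a random token shared
    // beforehand over a trusted channel (e.g. Binder); otherwise drop the connection.
    fun authenticate(input: InputStream, expectedToken: ByteArray): Boolean {
        val received = ByteArray(expectedToken.size)
        var read = 0
        while (read < received.size) {
            val n = input.read(received, read, received.size - read)
            if (n < 0) return false
            read += n
        }
        return MessageDigest.isEqual(received, expectedToken)
    }
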
4.4 Summary
To allow live broadcast on models below Android 10 as well, we chose to handle the collection of mini game audio ourselves; after comparison and evaluation we chose LocalSocket as the cross-process audio transmission channel, which meets live broadcast's latency requirements.

At the same time, with some countermeasures, the security risks of LocalSocket can be effectively avoided.

5. Problems caused by multiple processes

Looking back, although the overall scheme appears fairly smooth, we stepped into many pitfalls caused by the multi-process setup during implementation. Two of the main ones follow.

5.1 glFinish causing a serious drop in rendering and streaming frame rate
Right after implementing the cross-process rendering and streaming solution, we ran a round of performance and compatibility tests and found that the frame rate on some mid-to-low-end models dropped severely (as shown in the figure below).

After reproducing the issue, we checked the frame rate at which the mini game process rendered (that is, the rate at which it drew onto the cross-process Surface) and found that it still reached the level seen when not broadcasting.

The test tool we used, PerfDog, records the frame rate of the on-screen Surface. This showed that the degradation was not caused by live broadcast overhead making the mini game code run slower, but by the inefficiency of the main process's on-screen Renderer.

So we profiled the main process while live broadcasting and found the time-consuming function: glFinish.

There were two calls to it:

1) The first, when the Renderer converts the external texture to a 2D texture, takes more than 100 milliseconds;
2) The second, inside the Tencent Cloud live streaming SDK, takes less than 10 milliseconds.
If the first call is removed, the call inside the live SDK takes more than 100 milliseconds instead.

In order to understand why this GL instruction takes so long, let's take a look at its description:

glFinish does not return until the effects of all previously called GL commands are complete.

The description is simple: it blocks until all previously issued GL commands have completed.

So were there too many preceding GL commands? But GL command queues are isolated per thread: in the main process's Renderer thread, only a small number of texture-conversion GL commands run before glFinish, and we learned from the Tencent Cloud team that the streaming interface does not execute many GL commands on this thread either. How could so few GL commands block glFinish for so long? Wait, a large number of GL commands? Isn't the mini game process executing a large number of GL commands at exactly that moment? Could the mini game process's heavy GL traffic be what makes the main process's glFinish take so long?

The guess is not unreasonable: although GL command queues are isolated per thread, there is only one GPU processing the commands, so too many GL commands from one process could make another process block for a long time when it needs glFinish. After a round of searching turned up no relevant documentation, we had to verify the guess ourselves.

Re-examining the test data above, we noticed that devices reaching 60 frames before broadcasting could still reach about 60 frames while broadcasting. Would glFinish's cost likewise decrease when the mini game's GPU load is low?

On the models with severe degradation, keeping other variables unchanged, we ran a low-load mini game: glFinish's cost did drop, to about 10 milliseconds, confirming the guess that the large number of GL commands being executed by the mini game process was blocking the main process's glFinish.

How to solve it? The mini game process's high load cannot be changed, so could the mini game pause after finishing each frame and wait for the main process's glFinish to complete before rendering the next frame?

We made various attempts here: OpenGL's fence synchronization mechanism cannot be used across processes; and because GL commands execute asynchronously, locking the mini game's GL thread via cross-process communication does not guarantee that its commands have already been executed when the main process calls glFinish. That could only be guaranteed by adding a glFinish in the mini game process as well, but doing so would defeat the double buffering mechanism and cause a significant drop in the mini game's rendering frame rate.

Since the blocking caused by glFinish is unavoidable, let's go back to the start: why is glFinish needed at all? Thanks to the double buffering mechanism, glFinish is generally unnecessary for waiting on earlier drawing, otherwise double buffering would be meaningless. Of the two calls, the first, in our texture processing, could simply be removed; for the second, inside the Tencent Cloud SDK, we learned after discussion that it had been introduced to work around a historical issue and could tentatively be removed. With the help of the Tencent Cloud team, after removing glFinish the streamed frame rate finally matched the mini game's output frame rate, and compatibility and performance testing found no problems caused by the removal.
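
After the fix, the render loop relies purely on double buffering. A minimal sketch, where the streaming flag and drawFrame are placeholders:

    import android.opengl.EGL14
    import android.opengl.EGLDisplay
    import android.opengl.EGLSurface

    fun renderLoop(display: EGLDisplay, surface: EGLSurface,
                   streaming: () -> Boolean, drawFrame: () -> Unit) {
        while (streaming()) {
            drawFrame()                            // issue GL commands, no glFinish
            EGL14.eglSwapBuffers(display, surface) // publish the back buffer and return
                                                   // without waiting for the GPU
        }
    }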

The final fix was very simple, but analyzing the cause took a great deal of experimentation. A scenario where one process's high GPU load inflates another process's glFinish time is quite rare, and reference material is scarce. The experience also gave me a deep appreciation of the performance impact of glFinish defeating the double buffering mechanism: when rendering with OpenGL, glFinish should be used with great caution.

5.2 Background process priority issue
During testing we found that no matter what frame rate we sent frames to the live SDK at, the frame rate seen by viewers was always only about 16 frames. After ruling out backend causes, we found the encoder's output frame rate was insufficient. Tencent Cloud engineers verified that encoding within a single process reached the configured 30 frames, so this was once again a problem caused by multiple processes. Encoding is a heavy operation that needs considerable CPU, so the first thing we suspected was the priority of the background process.

To confirm the problem:

1) We took a rooted phone and used the chrt command to raise the encoding thread's priority, and the viewer frame rate immediately reached 25 frames;
2) Separately, when a floating window from the main process was displayed over the mini game process (giving the main process foreground priority), the frame rate reached the full 30 frames.
In summary, it was confirmed that the frame rate drop was caused by the low priority of the background process (and the threads it owns).

Raising thread priority is common practice in WeChat; for example, the JS thread of mini programs and the rendering thread of mini games set their priority at runtime via android.os.Process.setThreadPriority. The Tencent Cloud SDK team quickly provided an interface for us to set the encoding thread's priority, but when we actually ran it, the encoding frame rate only rose from 16 to around 18 frames. What went wrong?

As mentioned above, setting the priority via the chrt command was effective; but the thread priority set by android.os.Process.setThreadPriority corresponds to the nice value set by the renice command. A careful read of the chrt manual showed that our earlier test had been misinterpreted: we had used chrt -p [priority] [pid] without specifying a scheduling policy, which changed the thread's scheduling policy from Linux's default SCHED_OTHER to the command's default SCHED_RR; and SCHED_RR is a real-time policy, which is why the thread's scheduling priority became so high.

In reality, the nice value set by renice (that is, by android.os.Process.setThreadPriority) does not help much for threads owned by a background process.

In fact, this has been explained before:

To address this, Android also uses Linux cgroups in a simple way to create more strict foreground vs. background scheduling. The foreground/default cgroup allows thread scheduling as normal. The background cgroup however applies a limit of only some small percent of the total CPU time being available to all threads in that cgroup. Thus if that percentage is 5% and you have 10 background threads all wanting to run and one foreground thread, the 10 background threads together can only take at most 5% of the available CPU cycles from the foreground. (Of course if no foreground thread wants to run, the background threads can use all of the available CPU cycles.)

On thread priority settings, interested readers can refer to another expert's article: "Android's Bizarre Trap: the WeChat Jank Tragedy Caused by Setting Thread Priority".

In the end, to raise the encoding frame rate and keep the background main process from being killed, we decided to create a foreground service in the main process for the duration of the live broadcast (a minimal sketch follows).
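
A minimal sketch of such a foreground service; the channel id and notification text are illustrative, the service must be declared in the manifest, and API 28+ additionally requires the FOREGROUND_SERVICE permission:

    import android.app.Notification
    import android.app.NotificationChannel
    import android.app.NotificationManager
    import android.app.Service
    import android.content.Intent
    import android.os.Build
    import android.os.IBinder

    class LiveStreamService : Service() {
        override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
            val builder = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
                getSystemService(NotificationManager::class.java).createNotificationChannel(
                    NotificationChannel("live", "Live streaming",
                        NotificationManager.IMPORTANCE_LOW))
                Notification.Builder(this, "live")
            } else {
                @Suppress("DEPRECATION") Notification.Builder(this)
            }
            // startForeground lifts the process (and all its threads, including the
            // encoder's) out of the background cgroup.
            startForeground(1, builder.setContentTitle("Live broadcast running").build())
            return START_STICKY
        }

        override fun onBind(intent: Intent?): IBinder? = null
    }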

6. Summary and Outlook

Multi-process is a double-edged sword. It brings us isolation and performance advantages, but it also brings the problem of cross-process communication. Fortunately, with the capabilities of the system-provided Surface and a variety of cross-process solutions, the problems encountered in mini game live broadcast could all be solved well.

Of course, the best solution to a cross-process problem is to avoid crossing processes altogether: we also considered running the Channels live streaming module inside the mini game process, but did not choose that route because of the refactoring cost.

At the same time, this practice of cross-process SurfaceView rendering is a useful reference for other scenarios: where memory pressure or security risks are high and SurfaceView rendering is required, the logic can be placed in an independent process and then drawn onto the main process's View via cross-process rendering, gaining the advantages of an independent process while avoiding the fragmented experience of jumping between processes.

