
Hello everyone, I'm Xiao Cai~

This article is mainly an introduction to networking.

Feel free to refer back to it when you need it! And if it helps, don't forget to give it a like~

My WeChat public account Vegetable Farmer Said is now live; students who haven't followed it yet, remember to follow!

Let's start by thinking about a question: what is a network?

The network follows us everywhere in daily life! But the word "network" is too broad. Our most intuitive experience of it is opening a browser, typing in an address, and getting a page back in response. For that communication to work there have to be network protocols. A protocol's main job is to constrain both parties: only when both sides abide by the protocol can communication proceed normally!

So which protocol do we come into contact with the most?

Some friends will answer: the HTTP protocol! Others will say: the HTTPS protocol!

Both answers are correct. Colloquially both get called HTTP, and the one with the lock icon is HTTPS. So what is the difference between the two? That is exactly what this article is about.

HTTP

HTTP stands for Hypertext Transfer Protocol.

1. History review

Seeing this word brings an inexplicable sense of intimacy, because it is so close to our daily lives. Let's look back at how it grew up.

  • 1991

    HTTP was born, and at the time it was named HTTP 0.9. This version was extremely simple: there was only one command, GET, so it could only fetch resources.

  • 1996

Five years later, version 1.0 hatched: HTTP 1.0. This version was such a major upgrade that many people now assume 1.0 was the first version and don't know 0.9 ever existed. HTTP 1.0 was relatively mature: content in any format could be sent, which meant the Internet could transmit not only text but also images, video, and binary files. This laid the foundation for the Internet's explosive growth!

  • 1999

HTTP 1.1 was released. HTTP 1.0 was relatively mature but still flawed; version 1.1 inherited its strengths while overcoming its performance problems.

  • 2015

HTTP/2 appeared. This release dropped the .x version numbering, which shows the working group's confidence in it. At present, though, this version is still relatively new and has not reached full adoption.

2. HTTP 1.0

What are the characteristics of the HTTP protocol?

The most basic one is request/response: the server only responds after it receives a request. In other words, to fetch a resource the client opens a TCP connection, sends an HTTP Request over it, and the server returns an HTTP Response. In the early days, every request opened a new connection.

So, is there anything wrong with this? Yes, and the obvious problems are:

  • Performance issues

Establishing and closing connections are time-consuming operations, and opening a modern web page loads dozens of resources. Opening a fresh TCP connection for every request is very expensive. Some friends will say: concurrency is everywhere now, just open multiple connections and send requests in parallel! True, requests can be sent concurrently, but we also need to remember that the number of connections is not unlimited!

  • Push issue

As mentioned above, the server responds passively: without a request from the client, the server has no way to push messages to it. Admittedly this is no longer a big deal today; many solutions exist.

Let's get to know HTTP 1.0 first. Version 0.9 had too many critics to survive unchanged, so an improved version appeared: 1.0. It brought us two features.

1. Keep-Alive mechanism

As we all know, performance problems are serious problems, and frequently establishing and tearing down connections burns performance. Imagine every resource needing its own connection, closed immediately after use; that is a bit of a luxury~

To solve this, HTTP 1.0 introduced the Keep-Alive mechanism, which makes it possible to reuse a TCP connection.

How does this mechanism manifest itself?

The client only needs to add an identifier to the request header: Connection: Keep-Alive. When the server sees this header, it will not close the connection after handling the request; it also adds the same field to the response header, telling the client that the connection is still open.

But remember, the number of TCP connections is limited, and keeping every connection alive forever is unrealistic. So the server has a Keep-Alive timeout parameter: if no new request arrives on a connection within that period, the connection is closed.
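As a rough sketch of that exchange, here is how the Keep-Alive header might be added and detected. The helper names and the host are made up for illustration:

```python
# A sketch of the HTTP 1.0 Keep-Alive exchange: the client adds a
# "Connection: Keep-Alive" header, and a server that sees it keeps the
# TCP connection open for further requests instead of closing it.

def build_request(path: str, host: str, keep_alive: bool = True) -> bytes:
    headers = [
        f"GET {path} HTTP/1.0",
        f"Host: {host}",
    ]
    if keep_alive:
        headers.append("Connection: Keep-Alive")
    return ("\r\n".join(headers) + "\r\n\r\n").encode()

def wants_keep_alive(raw_request: bytes) -> bool:
    # Header names are case-insensitive in HTTP, so compare lowercased
    for line in raw_request.decode().split("\r\n"):
        if line.lower() == "connection: keep-alive":
            return True
    return False

req = build_request("/index.html", "example.com")
print(wants_keep_alive(req))  # True: the server should keep the connection open
```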

2. Content-Length property

This property is closely tied to the mechanism above. Previously, one connection handled one request for one resource: when the resource had been returned and the connection closed, the client knew the response was over. With Keep-Alive in place, the connection is not closed after a request is processed, so how does the client know the response has ended? In other words, how does the client know the data it has received is complete?

Hence the Content-Length property: it tells the client how many bytes are in the body of the HTTP Response, so once the client has received that many bytes it knows the response is complete.
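A tiny sketch of the idea, with hypothetical helper names: the server declares the body length, and the client reads until it has exactly that many bytes.

```python
# A minimal sketch of how Content-Length lets the client detect a complete
# response on a kept-alive connection: the server declares the body size,
# and the client keeps reading until exactly that many bytes have arrived.

def build_response(body: bytes) -> bytes:
    headers = (
        b"HTTP/1.0 200 OK\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"\r\n"
    )
    return headers + body

def read_body(raw: bytes) -> bytes:
    head, _, rest = raw.partition(b"\r\n\r\n")
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            length = int(line.split(b":", 1)[1])
            if len(rest) < length:
                raise ValueError("response not yet complete, keep reading")
            return rest[:length]
    raise ValueError("no Content-Length header")

resp = build_response(b"hello world")
assert read_body(resp) == b"hello world"
```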

3. HTTP 1.1

Every version update is to fill in the pit left by the previous version

These were version 1.0's answers to the problems above. To solve the performance problem, connection reuse is essential, so in HTTP 1.1 connection reuse became the default behavior: even without the Connection: Keep-Alive header, the server will not close the connection after processing a request, unless Connection: close is explicitly added.

To let the client judge whether a response is complete, 1.0 went to great pains to introduce the Content-Length property. But here comes a problem: if the content the server returns is long and generated on the fly, isn't computing Content-Length up front itself time-consuming? And the client has to count bytes on its end too.

1. Chunk mechanism

To solve this, HTTP 1.1 introduced the Chunk mechanism (HTTP streaming): the server adds the Transfer-Encoding: chunked attribute to the response header to tell the client that the body is sent in blocks, with delimiters between blocks and a special marker after the last block, so the client can quickly determine where the response ends.

2. Pipeline and Head-of-line Blocking

Now that we have connection reuse, why can things still be slow?

Because one problem remains: requests are serial. The client sends a request and only continues with the next one after receiving the response.

Those of us who write code reach for multithreading at the drop of a hat: if one thread isn't enough, use two. The point is to raise concurrency, because only with more concurrency does performance improve! (That is the idea in principle; the concrete implementation need not literally be multithreaded.)

HTTP 1.1 therefore introduced the Pipeline mechanism: on the same TCP connection, after one request is sent, the next can be sent before the response comes back, improving concurrency and thus performance~

But everything has two sides, and the Pipeline mechanism has a fatal problem: Head-of-Line Blocking, literally, blocking at the head of the line.

Suppose the client sends requests 1, 2, 3, and the server can process all three concurrently. But the client insists that responses arrive in the order 1, 2, 3. Once response 1 is delayed, responses 2 and 3 are blocked behind it. To avoid this side effect, many browsers disable pipelining by default.

So if Pipeline doesn't work, are there other tricks? Of course! You can always trust the ingenuity of those who came before~

3. Brainstorm

For the same domain name, a browser will only open 6~8 connections, yet a single web page needs to send dozens of HTTP requests, so 6~8 connections are nowhere near enough.
1. Spriting Technology

This technique targets small images. Since the number of connections is limited, many small images can be stitched into one large image; once it reaches the browser, JS or CSS crops out the small piece to display. Many requests become a single request.

2. Inlining

Inlining is another image-loading trick: the image's Base64 encoding is embedded directly in the CSS file, avoiding a separate request to load it.

3. JS stitching

Merge many small JS files into one and compress it, so the browser can fetch them all in a single request.

4. Domain sharding

Since one domain name only gets 6~8 connections, why not add a few more domain names to raise the connection count? That effectively bypasses the browser's limit.

4. Resumable downloads

Another useful feature of HTTP 1.1 is resumable downloads. When the client is downloading a file from the server and the connection is interrupted halfway, the client can continue from where it left off once a new connection is established.

This works because the client keeps track of how much data it has downloaded. After an interruption, it adds a Range: bytes=start-end field to the request header to ask the server to start from a given offset (the end offset may be omitted to mean "to the end of the file").
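Building the Range header for a resumed download might look like this (the helper name is made up; note that the end of an HTTP byte range is inclusive):

```python
# A sketch of resuming a download: record how many bytes are already on
# disk and ask the server for the rest with a Range header.

def range_header(bytes_already_downloaded, total=None):
    # Open-ended range: "from this offset to the end of the file"
    if total is None:
        return f"Range: bytes={bytes_already_downloaded}-"
    # An explicit end offset is inclusive, hence total - 1
    return f"Range: bytes={bytes_already_downloaded}-{total - 1}"

print(range_header(1000))        # Range: bytes=1000-
print(range_header(1000, 4096))  # Range: bytes=1000-4095
```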

Note that this feature only covers resumable downloads; resumable uploads you have to implement yourself.

4. HTTP/2

Every version update is to fill in the pit left by the previous version

So far we have traced HTTP from 0.9 to 1.1; 1.1 then accompanied us for sixteen years before HTTP/2 appeared.

When HTTP 1.1 got stuck on pipelining's head-of-line blocking, all sorts of workarounds emerged from the brainstorm above. But notice that they all attack the problem at the application level, with no universality; a universal fix has to come at the protocol level. That is the direction HTTP/2 took. You may also notice this version is not called 2.0, probably because the working group believed the protocol was polished enough that there would be no minor fix-up versions; if anything comes next, it will be HTTP/3.

1. Binary Framing

Head-of-line blocking is a stubborn problem that even HTTP/2 has not fully solved.

Binary framing is the core feature HTTP/2 introduced to attack head-of-line blocking. HTTP 1.1 messages are plain-text character streams; binary framing converts such a message into binary before handing it to TCP, and sends it as multiple frames (multiple blocks of data).


On an HTTP/2 connection, requests and responses are broken up and their frames are sent interleaved, out of order. The frames must then be reassembled, and each request must be paired back up with its response. Which raises the question:

How are they reassembled and paired?

The solution is simple in principle: each request/response pair forms a logical stream, each stream is assigned a stream ID, and that ID labels every frame belonging to the stream.

Does this implementation remind you of anything? It is quite like the packet numbering we will meet in the TCP section below. So HTTP/2 does not completely eliminate head-of-line blocking; rather, it refines its granularity from an entire HTTP request down to a single frame. Not a full fix, but frame granularity makes blocking less likely and improves performance!
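The stream-ID reassembly idea can be sketched as a toy model (pure illustration; real HTTP/2 frames also carry type, flags, and length fields):

```python
# A toy illustration of HTTP/2 binary framing: frames from different
# streams are interleaved on one connection, and the receiver reassembles
# each message by grouping frames on their stream ID. Within one stream,
# TCP still guarantees the frames arrive in order.

def reassemble(frames):
    streams = {}
    for stream_id, payload in frames:  # frames may arrive interleaved
        streams.setdefault(stream_id, []).append(payload)
    return {sid: b"".join(parts) for sid, parts in streams.items()}

# Two requests share one connection; their frames are mixed together
interleaved = [
    (1, b"GET /sty"), (3, b"GET /app"), (1, b"le.css"), (3, b".js"),
]
assert reassemble(interleaved) == {1: b"GET /style.css", 3: b"GET /app.js"}
```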


Why does HTTP/2 still fail to solve the problem of head-of-line blocking?

We need to trace this problem to its source; here is the conclusion up front: as long as TCP is the transport, head-of-line blocking cannot be avoided, because TCP delivers data strictly first-in, first-out!

2. Header compression

In addition to binary framing, another way to improve HTTP/2 efficiency is to use header compression.

HTTP 1.1 already compresses message bodies where it matters, images in particular being compressed formats to begin with. But message headers were never compressed.

As the Internet grew wild, application scenarios became ever more complex, and parameters were increasingly stuffed into request headers, making headers very large. Compressing the header thus became very necessary. HTTP/2 therefore designed the dedicated HPACK format and its algorithms, which further improves transfer speed.
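The core idea behind HPACK, an indexed table shared by both ends so that a repeated header shrinks to a small integer, can be sketched with a toy table. This is a deliberate simplification: real HPACK also has a static table, table-size eviction, and Huffman coding.

```python
# A toy sketch of the idea behind HPACK: both sides keep an indexed table
# of headers they have already seen, so a repeated header can be sent as a
# small integer index instead of the full name/value pair.

class ToyHeaderTable:
    def __init__(self):
        self.table = []  # shared, append-only index of headers seen so far

    def encode(self, header):
        if header in self.table:
            return self.table.index(header)  # send just the index
        self.table.append(header)
        return header                        # first time: send it in full

    def decode(self, token):
        if isinstance(token, int):
            return self.table[token]         # look the index back up
        self.table.append(token)             # mirror the sender's table
        return token

sender, receiver = ToyHeaderTable(), ToyHeaderTable()
h = ("user-agent", "Mozilla/5.0")
first = receiver.decode(sender.encode(h))   # full header on first use
second = receiver.decode(sender.encode(h))  # just an integer afterwards
assert first == second == h
```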

5. HTTPS

The Internet is full of dangers, and where there is danger you need security. So here it comes: HTTPS.

1. SSL/TLS

Before formally introducing HTTPS , it is necessary for us to understand SSL/TLS

  • SSL : Secure Sockets Layer, Secure Sockets Layer
  • TLS : Transport Layer Security, Transport Layer Security

Xiao Cai used to think these two were introduced only after HTTPS was proposed; in hindsight that was rather absurd. I wonder how many of you thought the same.

Since their birth dates are unclear to many, a little history is in order.

SSL/TLS goes back a long way. How far? Almost as far as the Internet itself.

  • 1994 : Netscape designed SSL 1.0
  • 1995 : Netscape released SSL 2.0, but it was soon discovered that there was a serious vulnerability
  • 1996 : SSL 3.0 released, widely used
  • 1999 : The Internet Standardization Organization IETF standardized SSL and released TLS 1.0
  • 2006 and 2008 : TLS was upgraded twice, TLS 1.1 and TLS 1.2

So where in the network stack does SSL/TLS sit?

It sits between the application layer and the transport layer. It can carry not only HTTP but also various other protocols such as FTP and SMTP.

SSL/TLS exists mainly to make transmission secure. How? Through encryption. So let's first look at the main encryption approaches.

  • Symmetric encryption

Symmetric encryption is conceptually simple: both parties share one key, and messages are encrypted and decrypted with that same key.

But here is the problem: before the client and server have agreed on anything, how do you deliver the key to both sides securely?

Encrypt key A with key B, and key B with key C? That's just nesting dolls!

So symmetric encryption alone cannot bootstrap the channel!

  • Asymmetric encryption

The client prepares a key pair for itself (PubA, PriA), and the server prepares its own pair (PubB, PriB). Key pairs have a crucial property: the public key PubA can be computed from the private key PriA, but not the other way round; you cannot derive PriA from PubA.

So each party only needs to publish its public key and keep its private key to itself.

A message encrypted with the other side's public key can only be decrypted with the matching private key. And a signature made with your own private key is like a stamp: it proves the message came from you and cannot be repudiated.

This scheme is quite reasonable, but it too runs into a problem: how do you transmit the public keys securely?

This is the classic man-in-the-middle attack: while the public keys are in transit, an attacker in the middle can intercept them and substitute their own. From then on, the entire conversation is effectively under the middleman's control.

So we need a way for the server to prove that the public key it received really came from the client, with no possibility of tampering in the middle, and vice versa.

  • Digital certificates and the CA

To solve this, we need a notary: the Certificate Authority (CA). Public keys are registered with the CA, which issues a digital certificate (Certificate) for each, something like an identity card. From then on, both parties exchange certificates and take them to the CA to verify whether they are genuine.

But then another problem appears: what if the CA is fake? The man in the middle could impersonate the CA too. The two parties chat happily for ages, only to discover they were talking to the middleman all along; enough to make you cough up blood.

So the CA itself needs a certificate. Who issues it? The CA's parent CA. This is the certificate trust chain.

There are two processes involved here:

  • verification process
  1. To verify the legitimacy of the server, the client needs to take the server's certificate C3 and go to CA2 to verify
  2. To verify the legitimacy of CA2, the client needs to take the certificate C2 of CA2 and go to CA1 to verify
  3. To verify the legitimacy of CA1, the client needs to take the certificate C1 of CA1 and go to CA0 to verify

And how is CA0 itself certified? We can only trust it unconditionally. Its certificate is the legendary root certificate, and Root CA institutions are a handful of globally recognized organizations.

  • Issuance process

Issuance works in the opposite direction: each upper-level CA issues a certificate to the level below. Starting from the root CA (CA0): CA0 issues a certificate to CA1, CA1 to CA2, and CA2 to the application server.

With these concepts in hand, SSL/TLS is not hard to understand.

After the TCP connection is established and before data is sent, the SSL/TLS protocol performs a four-way handshake between client and server to negotiate a symmetric encryption key; all subsequent traffic on the connection is protected with that key.
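In Python's standard library this layering is directly visible: the `ssl` module wraps an ordinary TCP socket and takes care of the handshake and certificate-chain verification. A sketch (the actual connection is commented out since it needs network access, and `example.com` is just a placeholder host):

```python
# A sketch of where SSL/TLS sits in Python: ssl.create_default_context()
# produces a client context that, once wrapped around a TCP socket, will
# perform the TLS handshake and verify the server's certificate chain
# against the system's trusted root CAs.

import socket
import ssl

ctx = ssl.create_default_context()  # loads the OS bundle of root CA certs

# By default the context enforces exactly the guarantees discussed above:
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate chain must validate
print(ctx.check_hostname)                    # cert must match the host name

# To actually use it (requires network access, so not executed here):
# with socket.create_connection(("example.com", 443)) as raw:
#     with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
#         print(tls.version())  # e.g. "TLSv1.3"
```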

2. HTTPS

So what is HTTPS? For now, one formula is enough: HTTPS = HTTP + SSL/TLS.

We have covered HTTP, and we have just covered SSL/TLS, so we have indirectly covered HTTPS~

Therefore, when it comes to HTTPS, the transmission process is divided into three stages:

  • Stage one : establish the TCP connection
  • Stage two : the SSL/TLS four-way handshake negotiates a symmetric encryption key
  • Stage three : using that key, every HTTP Request/Response on the TCP connection is encrypted before sending and decrypted on receipt

Some folks will ask: establishing a TCP connection is already time-consuming, and now we add an SSL/TLS handshake on top. Doesn't that make HTTPS very slow?

In fact, stages one and two happen only once, when the connection is established. As long as the connection stays open, each subsequent request only goes through stage three, so the impact on performance is small!

6. TCP/UDP

Now that HTTP is mentioned, TCP and UDP cannot be bypassed. Let's talk about it together~

Why can't they be bypassed? What is the relationship between HTTP and TCP?

HTTP runs on top of a TCP connection. Put simply, TCP only establishes the connection and transports bytes; it knows nothing about the actual data we are requesting. HTTP is what gives the sent and received data meaning, i.e. the actual application.

We can summarize three points:

  • TCP is the underlying transport protocol: it defines how connections are established and how data is transmitted
  • HTTP is an application-layer protocol: it defines the format and meaning of the content being transmitted
  • HTTP data is carried over TCP, so anything that supports HTTP must also support TCP

To talk about TCP we naturally bring in UDP for comparison; they are a pair of brothers. Let's introduce the differences between TCP and UDP through a few features.

Reliability

Ask anyone about TCP and UDP, and the answer comes rolling out: one is a reliable connection, the other is unreliable! Then Xiao Cai will ask a few follow-ups:

  • Which one is the reliable connection, and which the unreliable one?
  • What exactly does "reliable connection" mean?
  • How does the reliable connection guarantee reliability?

The first question is very simple, and most students can easily answer it: TCP is a reliable connection, UDP is an unreliable connection.

When it comes to the second question, it may be a little embarrassing.

"Reliable? It means trustworthy, right?"

Trustworthy is right, but it's too vague. Why is it trustworthy, and in what ways?

Reliability here has three concrete meanings:

  • Packets are not lost
  • Packets are not duplicated
  • Packets do not arrive out of order

Only a connection that satisfies all three counts as reliable, and TCP satisfies them all~

So on to the third question: how is reliability guaranteed? Xiao Cai won't keep you in suspense; read on.

First: packets are not lost.

1. Solving loss: ACK + retransmission

Packet loss is perfectly normal in a network. What do you do when a packet is lost? Resend it! Simple and brutal. Every time the server receives a packet, it sends the client an acknowledgment, the ACK signal, confirming receipt; if the client does not receive the ACK within a set time, it retransmits the data.

But does the server send one ACK per packet received? No! That would be far too inefficient. The solution: the client numbers every packet it sends, with the numbers increasing monotonically, so order can be determined from the numbers alone and a single ACK can cover a whole range of packets.

For example, after the server receives packets 1, 2, 3, it only needs to reply ACK=4, meaning every packet numbered below 4 has been received. After it then receives packets 4, 5, 6, it replies ACK=7, meaning every packet numbered below 7 has been received.

2. Solving duplication: judging by ACK

We said that if the client gets no ACK within the time limit, it retransmits. But a timeout does not mean the server never received the packet: network delay may mean the data packet or its ACK is still in flight even though it was in fact received. If the client retransmits in that case, the server gets duplicate packets, so it must be able to detect duplicates.

ACK handles this too: once the server has replied ACK=7, every packet numbered below 7 has been received, so if any packet in that range arrives again, the server simply discards it without processing.

3. Solving out-of-order delivery

Networks have delays, and packets travel through different nodes with different transmission rates. The client may send packets 4, 5, 6, yet packets 5 and 6 arrive first while packet 4 is still on its way: a recipe for out-of-order delivery.

The server handles this cleverly: it buffers packets 5 and 6 until packet 4 arrives, and only then replies ACK=7 to the client. If packet 4 never arrives, the server does not advance its ACK, and after the timeout the client resends packets 4, 5, and 6.

So, through packet sequence numbers + client retransmission + cumulative, in-order ACKs from the server, packets travel from client to server without loss, without duplication, and without reordering.
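The three guarantees can be simulated with a toy receiver that buffers, deduplicates, and replies with a cumulative ACK (the class is illustrative, not real TCP):

```python
# A simulation of the three guarantees: a receiver that drops duplicates,
# buffers out-of-order packets, and replies with a cumulative ACK equal to
# the next sequence number it expects.

class Receiver:
    def __init__(self):
        self.next_expected = 1
        self.buffer = {}    # out-of-order packets, keyed by sequence number
        self.delivered = []

    def on_packet(self, seq, data):
        if seq < self.next_expected or seq in self.buffer:
            return self.next_expected  # duplicate: discard and re-ACK
        self.buffer[seq] = data
        while self.next_expected in self.buffer:  # deliver the in-order run
            self.delivered.append(self.buffer.pop(self.next_expected))
            self.next_expected += 1
        return self.next_expected  # cumulative ACK: next number expected

rx = Receiver()
rx.on_packet(2, b"B")        # arrives early: buffered, ACK stays at 1
rx.on_packet(1, b"A")        # fills the gap: packets 1 and 2 delivered
ack = rx.on_packet(2, b"B")  # duplicate: dropped
assert rx.delivered == [b"A", b"B"] and ack == 3
```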

Three-way handshake

TCP's three-way handshake and four-way wave are truly well-worn topics, and Xiao Cai is almost reluctant to spend more space on them, but for completeness let's revisit them in the spirit of seeing things through!

There are two critical values in the handshake packets: one is seq=x, the other is ACK=x+1.

  • seq : as mentioned above, every packet the sender transmits is numbered in increasing order, and seq=x means this packet's number is x. Because TCP is full duplex, each side both sends its own numbered packets and must acknowledge the packets it receives; to save round trips the two are combined, so a single segment can carry both a seq and an ACK
  • ACK : also introduced above, it tells the other side "I have received all packets numbered up to x, and I am ready to receive packet x+1"

Why three handshakes? Wouldn't two do? What about four?

This is a classic interview question~ Straight answer: two is not enough, and four would work but is unnecessary.

Before communicating, both parties must confirm that each side can both send and receive; only then does it make sense to proceed, otherwise it's all talk to a wall! A popular analogy:

"Hey, do you love me"

"I love you, do you love me"

"I love you too"

In this love trilogy, the point is to first confirm that both parties are in love. If that isn't confirmed, why take the relationship any further?

If it becomes two handshakes

"Hey, do you love me"

"I love you, do you love me"

Two handshakes cannot confirm that both are in love, so the relationship cannot continue; misunderstandings brew, and it all comes to nothing.

And a four-way handshake? If three can confirm it, four certainly can. It's just unnecessary.

So three handshakes exactly suffice for client and server to each confirm their send and receive capabilities once. First, the client sends seq=x and cannot yet know whether it arrived. Second, the server replies seq=y, ACK=x+1: the client now knows both its sending and its receiving work, while the server only knows its receiving works. Third, the client sends ACK=y+1: on receiving it, the server knows its second message got through, so its sending works too.
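The exchange can be written down as a tiny message trace (a sketch only; real TCP segments carry much more state than shown here):

```python
# A sketch of the three-way handshake as a message trace: each side picks
# an initial sequence number, and each ACK confirms the other side's seq
# plus one, so after three messages both directions have been verified.

import random

def three_way_handshake():
    x = random.randint(0, 2**32 - 1)  # client's initial sequence number
    y = random.randint(0, 2**32 - 1)  # server's initial sequence number
    trace = [
        ("client->server", {"SYN": True, "seq": x}),
        ("server->client", {"SYN": True, "ACK": x + 1, "seq": y}),
        ("client->server", {"ACK": y + 1}),
    ]
    return trace

for direction, segment in three_way_handshake():
    print(direction, segment)
```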

Four-way wave

Compared to the three-way handshake that opens a connection, the four-way wave that closes it is a bit more involved.

The four waves, again, come down to TCP being full duplex, which allows a Half-Close state. We can simulate the dialogue:

Client: "Hi, server, I'm closing the connection"

Server: "Oh ok, I see, wait for me"

Server: "I'm fine, close the connection"

Client: "OK"

Here you can think about it, why not wave three times?

Client: "Hi, server, I'm closing the connection"

Server: "Oh ok, I see, close the connection"

Client: "OK"

Three waves seem perfectly fine, so why wave four times? What is the extra wait for?? This deserves a closer look.


A TCP connection is a logical connection; in a sense it is "fake", unlike a physical cable with real physical existence. If we closed with just three waves, some packets might still be in flight, lost and wandering in the network. If the connection stayed closed that would be fine, since wandering packets could simply be dropped. But the connection may be re-established, and then those old wandering packets could be mistaken for data belonging to the new connection, causing confusion.


The solution:

In TCP/IP networks there is a value called MSL (Maximum Segment Lifetime): the longest any IP packet may live in the network, 120 seconds by default. A packet must reach its destination within the MSL, or intermediate routers discard it. Relying on this limit, the side that actively closes the connection waits 2*MSL before fully entering the closed state.

The cost:

A connection cannot vanish the instant you close it: after closing, it lingers in the TIME_WAIT state for 2*MSL before its resources are truly released.

Time to test your hacker instincts~ What problems does this cause?

If connections are created frequently, a large number of connections may end up in the TIME_WAIT state, eventually consuming all connection resources.

So we need to take the following measures

  • Don't let the server actively close connections; then the server side never enters the TIME_WAIT state
  • Have the client maintain a connection pool and reuse connections instead of frequently creating and closing them. This is exactly the idea adopted by HTTP 1.1 and HTTP/2

7. QUIC

HTTP/2 brought the Web new features and speed. But for developers, dissatisfaction is the wellspring of improvement!

QUIC is a brand-new design. It takes another big leap in performance and security, and it is set to succeed HTTP/2 as the foundation of HTTP/3! It mainly brings the following improvements.

1. Greatly reduces the connection establishment time

As we saw above, establishing an HTTPS connection takes seven handshake messages (TCP's three-way handshake plus the SSL/TLS four-way handshake). To shorten connection setup you have to cut handshakes, that is, reduce the number of round trips (RTTs). With the QUIC protocol, the previous seven can drop to zero for a repeat connection!


2. Multiplexing without head-of-line blocking

Let's first revisit head-of-line blocking, the problem even HTTP/2 could not eliminate: as long as TCP is used there is no complete cure, because TCP delivers data strictly in order, whereas UDP has no such constraint.

So when QUIC claims to solve head-of-line blocking, the cause is easy to guess: QUIC is built on UDP. In QUIC, losing one packet does not stall the other requests sharing the connection.

3. Connection Migration

A TCP connection is identified by the 4-tuple [source address, source port, destination address, destination port]. On mobile this is a problem: as the client hops between Wi-Fi and 4G, its IP keeps changing, which means connections are constantly torn down and re-established. The most familiar symptom: watching a video on weak Wi-Fi and switching to 4G, playback stalls for a while before resuming.

The idea behind the fix is simple: keep the connection alive even while the client's IP and port float!

A TCP connection is inherently a logical concept anyway, so QUIC builds its own logical connection, one not identified by the 4-tuple (which changes), but by a 64-bit numeric connection ID generated by the client. As long as that ID stays the same, the connection survives no matter how the IP and port drift; subjectively, the connection simply never drops.
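A toy model of that idea: sessions keyed by connection ID rather than by address. The handler and session layout here are invented for illustration, not real QUIC:

```python
# A toy illustration of QUIC connection migration: the server keeps
# sessions keyed by a 64-bit connection ID chosen by the client, so the
# session survives even when the client's IP address and port change
# (e.g. switching from Wi-Fi to 4G).

import secrets

sessions = {}  # connection_id -> session state

def handle_packet(conn_id, client_addr, payload):
    session = sessions.setdefault(conn_id, {"received": []})
    session["addr"] = client_addr  # the address may float freely
    session["received"].append(payload)
    return session

cid = secrets.randbits(64)
handle_packet(cid, ("10.0.0.5", 50000), b"hello over wifi")
s = handle_packet(cid, ("172.16.3.9", 41000), b"still me, now on 4g")

# Same logical connection despite a brand-new IP and port:
assert s["received"] == [b"hello over wifi", b"still me, now on 4g"]
```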

Sounds flashy, but how fast is it in practice? Google has published trend data from enabling QUIC in production.

Well, friends, that's our whirlwind tour of the network. This article isn't the deep-digging type; the goal is just to give you a general picture, because going deep without the basics rarely ends well, right?

Don't just talk the talk, don't slack off, and be a programmer who can back the bragging up with architecture~ Follow along and keep Xiao Cai company, so Xiao Cai is no longer alone. See you next time!

Work a little harder today, and tomorrow you'll have one less favor to ask!

I am Xiao Cai, a man who grows stronger with you. 💋

My WeChat public account Vegetable Farmer Said is now live; students who haven't followed it yet, remember to follow!

