Preface
Recently I have been studying computer networking on Lagou Education, and today I will start with TCP.
What is TCP
TCP (Transmission Control Protocol) is a transport layer protocol that provides reliable Host-To-Host data transmission. It supports full duplex and is connection-oriented.
TCP handshake and wave
TCP is a connection-oriented protocol, so it defines procedures both for establishing a connection (the handshake) and for disconnecting (the wave).
The TCP protocol has several basic operations:
- A Host actively initiates a connection to another Host; this is called SYN (Synchronize), a request to synchronize.
- A Host actively requests to disconnect; this is called FIN (Finish), signaling that the exchange is finished.
- A Host sends data to another Host; this is called PSH (Push), a data push.
In all three cases above, the receiver must send an ACK (Acknowledgement) back to the sender after receiving the data. This request/response model is a requirement of reliability: if a request receives no response, the sender assumes the request needs to be resent.
The process of establishing a connection (three-way handshake)
Because of the need to maintain connections and the reliability constraint, the TCP protocol must ensure that every piece of data sent gets a reply. The reply is called an ACK (that is, a response).
Following this idea, let's see whether a 3-way handshake is needed to establish a connection:
- The client sends a message to the server (SYN)
- The server is ready to connect
- The server gives an ACK for the client's SYN
You might ask: isn't that enough? Wouldn't a 2-way handshake suffice? It would not, because the server has not yet confirmed that the client is ready. For example, if the server sent data to the client right after step 3, the client might not yet be ready to receive it. Therefore another round needs to be added.
The following actions will occur next:
- The server sends a SYN to the client
- Client is ready
- The client sends an ACK to the server
You may ask: isn't that 6 steps in total? How is it a 3-way handshake? Let's analyze the reasons together.
Step 1 is a handshake;
Step 2 is the preparation of the server, not data transmission, so it is not a handshake;
Steps 3 and 4 happen at the same time, so they can be combined into a single SYN-ACK response and delivered to the client as one piece of data; this is the second handshake;
Step 5 does not count as a handshake;
Step 6 is the third handshake.
To help you understand Steps 3 and 4, I have drawn a picture here. You can see in the figure below that the SYN and ACK are combined, so a total of 3 handshakes is required to establish a connection. The process is shown in the following figure:
From the above example, you can further think about how the common flags such as SYN, ACK, and PSH are represented in transmission.
One approach is to add a protocol header to the TCP protocol and take several bits in it, with SYN, ACK, and PSH each occupying 1 bit. For the SYN bit, for example, 1 means SYN is on and 0 means it is off. SYN-ACK therefore means that both the SYN bit and the ACK bit are set. This kind of design is called a Flag.
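As a sketch of this design, the constants below use the actual bit positions of the flags field in the TCP header (FIN = 0x01, SYN = 0x02, PSH = 0x08, ACK = 0x10); the `describe` helper is just an illustrative name, not a real API:

```python
# Flag bits as laid out in the TCP header's flags field
FIN = 0x01  # finish: request to close the connection
SYN = 0x02  # synchronize: request to establish a connection
PSH = 0x08  # push: deliver buffered data to the application promptly
ACK = 0x10  # acknowledgement of received data

def describe(flags: int) -> list[str]:
    """Return the names of all flag bits set in `flags`."""
    names = []
    for bit, name in [(FIN, "FIN"), (SYN, "SYN"), (PSH, "PSH"), (ACK, "ACK")]:
        if flags & bit:
            names.append(name)
    return names

# The second handshake combines SYN and ACK into one segment:
syn_ack = SYN | ACK
print(describe(syn_ack))  # ['SYN', 'ACK']
```

Because each flag is one bit, combining SYN and ACK is just a bitwise OR of the two flag values.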
The process of disconnection (4 waves)
Continuing this line of thought, how many waves are needed to disconnect? As a hint, you can picture it like this.
- The client requests to disconnect by sending a disconnect request; this is called FIN.
- The server receives the request and sends an ACK to the client as a response to the FIN.
- Here you need to think about a question: can the server piggyback its own FIN on that ACK, the way SYN and ACK were combined in the handshake?
In fact, the server cannot send its FIN at this point, because there is much to deal with when disconnecting. For example, the server may still have outgoing messages that have not yet received an ACK, or resources of its own to release. So disconnection cannot combine two messages the way the handshake does. Only after a wait, once the server determines the connection can be closed, does it send a FIN to the client.
- When the client receives the server's FIN, the client may likewise have things of its own to finish. For example, it may have sent a request to the server without yet receiving an ACK. After handling these, the client sends an ACK to the server.
TIME_WAIT
As you probably know, the side that actively closes a TCP connection ends up in the TIME_WAIT state.
In fact, the client does not release the TCP connection immediately after sending the final acknowledgement; it releases it only after 2MSL (twice the Maximum Segment Lifetime).
So what problem is the TIME_WAIT state used to solve or avoid?
Here, too, the concern is packet loss. If the fourth wave (the final ACK) is lost, the server, having received no acknowledgement, will retransmit the third wave (its FIN). The longest such round trip takes 2MSL, so the client must wait that long to confirm that the server has really received the ACK.
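The 2MSL bound can be sketched as simple arithmetic: one MSL bounds how long the lost ACK can linger in the network, and another bounds the retransmitted FIN's trip back. The MSL value below is illustrative (implementations commonly use 30 or 60 seconds):

```python
# Why TIME_WAIT lasts 2 * MSL: the worst case is that the final ACK
# takes up to one MSL before it dies in the network, and the peer's
# retransmitted FIN takes up to another MSL to arrive.
MSL = 60  # assumed Maximum Segment Lifetime, in seconds

def time_wait_duration(msl: int) -> int:
    """One MSL for the lost ACK to expire + one MSL for the retransmitted FIN."""
    return 2 * msl

print(time_wait_duration(MSL))  # 120
```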
TCP sticking and unpacking
TCP is a transport layer protocol. When TCP sends data, it often does not send all of it at once, as shown in the figure below:
Instead, it splits the data into many parts and sends them one by one, like the picture below:
Similarly, at the destination, the TCP protocol needs to receive data one by one. Please think about it, why doesn't TCP send all the data at once? For example, if we want to transfer a file with a size of 10M, as far as the application layer is concerned, the transfer is completed at one time. And why doesn't the transport layer protocol choose to send this file all at once?
There are many reasons for this. One is stability: the more data sent at once, the greater the probability of error. Another is efficiency: networks sometimes contain parallel paths, and splitting the data into packets makes better use of them. Furthermore, both sending and receiving involve buffers, as shown below:
The buffer is an area set aside in memory whose purpose is buffering. A large number of applications send and receive data through the network card frequently, and the card can only process their requests one at a time. When the card is too busy, data must queue, that is, be placed into the buffer. If every application sent arbitrarily large amounts of data at will, the real-time behavior of other applications could be destroyed.
In short, for various reasons, packets at the transport layer cannot be too large. The limit is usually based on the buffer size: the TCP protocol splits the data into parts that do not exceed it. Each part has its own name: a TCP Segment.
When the destination receives the data, the TCP segments are reconstructed into the original data.
Splitting the data like this, transmitting it, and reassembling it at the destination is commonly known as unpacking. So unpacking means splitting the data into multiple TCP segments for transmission. What, then, is a sticky packet? Sometimes, when several pieces of data bound for the same destination are small, the TCP protocol may combine them into a single TCP segment to avoid wasting resources on multiple transmissions, then restore the separate pieces at the destination. This process is commonly known as a sticky packet. So a sticky packet means combining multiple pieces of data into one TCP segment and sending it.
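A minimal sketch of unpacking and reassembly, using a toy `mss` limit to stand in for the real buffer-based segment size (the function names are mine, not a TCP API):

```python
def unpack(data: bytes, mss: int) -> list[bytes]:
    """Split a byte stream into segments no larger than `mss` (unpacking)."""
    return [data[i:i + mss] for i in range(0, len(data), mss)]

def reassemble(segments: list[bytes]) -> bytes:
    """Rebuild the original byte stream at the destination."""
    return b"".join(segments)

payload = b"hello, tcp segmentation"
segments = unpack(payload, mss=8)
print(len(segments))                     # 3
assert reassemble(segments) == payload   # round-trips to the original data
```

A sticky packet is simply the reverse direction: several small payloads concatenated into one segment, which is why the split points must be recoverable at the destination.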
Sequence Number and Acknowledgement Number
In the design of the TCP protocol, the data is split into many parts, protocol headers are added, and each part becomes a TCP segment to be transmitted. This process is commonly known as unpacking. These TCP segments are carried to the destination by the underlying IP protocol through a complex network, and then reassembled.
Here is a question to think about: stability requires that no data be lost in transmission; in other words, the data produced by unpacking must be restorable to its original form. In a complex network environment, even if all segments are sent in order, there is no guarantee that they arrive in order. Therefore every TCP segment sent needs a sequence number: the Sequence Number (Seq).
As shown in the figure, when sending data, each TCP segment is assigned a self-incrementing Sequence Number. When receiving data, even though the TCP segments arrive out of order, they can be sorted by Seq.
But this creates a new problem: if the receiver wants to reply to the sender, it also needs a Seq. However, it is very difficult for two endpoints on a network to synchronize a single self-incrementing serial number: their clocks can never be perfectly synchronized, and they have no common storage space to share data through, let alone a distributed self-incrementing counter.
In essence, this problem is like two people talking: we must pin down the order of what each says relative to the other's replies. Because TCP is a duplex protocol, both sides may talk at the same time. So clever scientists decided to describe the order of a "sentence" with two values: the number of bytes sent and the number of bytes received.
Let's redefine Seq (as shown in the figure above). For any receiver, if it knows how many bytes the sender had already sent when a given TCP segment was sent, it can determine the order in which the sender's data was sent.
But a problem remains: if the receiver is also sending data to the sender (that is, the two parties are in a conversation), the sender cannot tell which of its own pieces of data an incoming piece is responding to.
So we need one more value: how much data the sender of a segment had received at the moment the segment was sent. This is expressed by the Acknowledgement Number, abbreviated below as ACK.
In the figure below, the terminal has sent three pieces of data and received four. By inspecting the Seq and ACK carried in the received data, the sent and received data can be put in order.
For example, in the figure above, the sender has sent 100 bytes of data, and the two received packets (Seq = 0 and Seq = 100) carry ACK = 100, acknowledging the sender's 100 bytes (the segment at Seq = 0). This shows that these two packets were sent back only after the 100th byte had been received, which determines the overall order.
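A small sketch of sorting by Seq, treating Seq as the byte offset of each segment within the stream (the tuples and the helper name are illustrative):

```python
def reorder(segments: list[tuple[int, bytes]]) -> bytes:
    """Sort (seq, payload) pairs by Seq (byte offset) and rebuild the stream."""
    return b"".join(payload for seq, payload in sorted(segments))

# Segments arrive out of order; Seq is how many bytes preceded each one.
received = [(10, b"world"), (0, b"hello "), (6, b"tcp ")]
print(reorder(received))  # b'hello tcp world'
```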
Sliding window and flow rate control
As a transport layer protocol, TCP's core capability is transmission. Transmission needs to ensure reliability and also needs to control flow rate. Both of these core capabilities are provided by sliding windows.
Request/response model
Every request sent in TCP requires a response. If a request does not receive a response, the sender will think that the transmission has failed and will trigger a retransmission.
The general model is very similar to the picture below. But if it worked exactly like that, with the next request sent only after each response arrives, throughput would be very low, because this design creates network idle time; to put it bluntly, it wastes bandwidth. Unused bandwidth means more requests could be sent, and more responses received, at the same time.
An improvement is to let the sender keep sending requests without waiting for each response. This way the outgoing data is sent back-to-back, the responses come back back-to-back, and throughput improves.
But what if there is really a lot of data that can be sent at the same time? For example, if hundreds or thousands of TCP segments need to be sent, the bandwidth may be insufficient at this time.
Sliding Window
This is where the sliding window comes in, as shown below:
As shown in the figure:
- Dark green represents the segment that has received ACK
- Light green represents the segment that has been sent, but no ACK has been received
- White represents segments that are not sent
- Purple represents segments that cannot be sent temporarily
Let's arrange the different kinds of packets in order: acknowledged data on the far left, in-flight data in the middle, and unsent data on the right. Suppose we send at most 5 packets at the same time, that is, window size = 5. The data inside the window is sent out together, and then we wait for ACKs. When a packet's ACK arrives, we mark it as received (dark green).
As shown in the figure below, two packets of ACK have arrived, so they are marked in green.
At this time, the sliding window can slide to the right, as shown in the following figure:
Retransmission
What if some data fails to receive ACK during the sending process? This may cause retransmission.
If the situation shown in the figure below occurs, segment 4 does not receive the ACK for a while.
At this time, the sliding window can only be moved one position to the right, as shown in the following figure:
During this process, if segment 4 is later retransmitted successfully (its ACK arrives), the window continues to move right. If segment 4's transmission keeps failing and its ACK still does not arrive, the receiver also discards segments 5, 6, and 7. In that case, all data from segment 4 onward must be retransmitted.
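The retransmission behavior described here resembles Go-Back-N. Below is a toy simulation under simplifying assumptions (ACKs arrive in order, a lost ACK is recovered by one retransmission round); the segment numbering and event strings are invented for illustration:

```python
def go_back_n(num_segments: int, window: int, lost_acks: set[int]) -> list[str]:
    """Record send/ack events for a Go-Back-N style sender.

    ACKs listed in `lost_acks` are lost once; the sender then times out
    and retransmits everything from the unacknowledged segment onward.
    Segments are numbered from 1 to match the figures above.
    """
    events = []
    base = 1  # smallest unacknowledged segment (left edge of the window)
    while base <= num_segments:
        end = min(base + window - 1, num_segments)
        for seg in range(base, end + 1):
            events.append(f"send {seg}")
        # ACKs are processed in order until one turns out to be lost.
        seg = base
        while seg <= end and seg not in lost_acks:
            events.append(f"ack {seg}")
            seg += 1
        if seg <= end:                    # an ACK was lost
            lost_acks.discard(seg)        # assume the retransmission succeeds
            events.append(f"timeout, retransmit from {seg}")
        base = seg                        # slide the window to the first unacked segment
    return events

# Segment 4's ACK is lost: segments 5..8 are resent along with it.
for event in go_back_n(num_segments=8, window=5, lost_acks={4}):
    print(event)
```

Running this shows segments 1-5 sent, ACKs 1-3 arriving, a timeout at segment 4, and then segments 4-8 all being sent again, which is exactly the "everything after segment 4 is retransmitted" behavior above.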
Fast retransmission
In the TCP protocol, if the receiver wants to discard a certain segment, it can choose not to send an ACK. After the sender times out, the TCP segment will be retransmitted. Sometimes, the receiver wants to urge the sender to resend a certain TCP segment as soon as possible. At this time, the fast retransmission capability can be used.
For example, segments 1, 2, and 4 have arrived, but segment 3 has not. The receiver can send the ACK for segment 3 multiple times; if the sender receives several of these ACKs, it resends segment 3. This mechanism is called fast retransmission. It is different from timeout retransmission; it is a prodding mechanism.
To keep the sender from mistakenly concluding that segment 3 has been received, during fast retransmission the receiver keeps sending the ACK for segment 3 (not the ACK for segment 4) even after segment 4 arrives, until the sender retransmits segment 3.
Thinking: What is the unit of window size?
Consider another question: what is the unit of the window size? In all the pictures above, the window size counts TCP segments. In practice, segments vary in size, and limiting their number would make the receiver's buffer hard to manage. Therefore, in real implementations, the window size is measured in bytes.
Thinking: Is the larger the sliding window the better?
Flow rate control
The size of the sending and receiving windows can be used to control the flow rate of the TCP protocol. The larger the window, the more data that can be sent and received at the same time, and the greater the throughput supported. Of course, the larger the window, the greater the loss if data errors occur, because more data needs to be retransmitted.
Summary
To increase the transmission rate, the TCP protocol sends multiple segments at the same time. To keep these segments from being discarded by the receiver, the two parties must negotiate a sending rate before transmitting. But the network speed cannot be determined exactly, so the negotiation takes the form of agreeing on a window size.
With the window, the sender uses the sliding window algorithm to send the message; the receiver constructs a buffer to receive the message and sends an ACK to the sender. The implementation of the sliding window only requires an array and a small number of pointers, which is a very efficient algorithm.
Now you can try to answer the interview question for this lecture: what are the sliding window and flow rate control?
[Analysis] The sliding window is the core of how the TCP protocol controls reliability. The sender unpacks the data into multiple packets, places them in an array managed by a sliding window, and sends them in order, still following first-in-first-out (FIFO), except that all packets inside the window are sent in one burst. If the packet with the smallest sequence number in the window receives its ACK, the window slides; if that packet fails to receive an ACK for too long, the entire window's data is retransmitted.
On the other hand, in multiple transmissions, the average delay of the network is often relatively fixed, so that the TCP protocol can control the flow rate by negotiating the window size between the two parties.
TCP and UDP
TCP and UDP are the most widely used transport layer protocols today and dominate the field. The core value of TCP is reliability; the core value of UDP is flexibility, since you can build almost anything on top of it. For example, HTTP/1.1 and HTTP/2 are both based on TCP, but HTTP/3 runs over UDP.
The difference between UDP and TCP
Next we talk about the difference between UDP and TCP.
- Purpose difference
First of all, the goals of the two protocols are different: the core goal of the TCP protocol is to provide reliable network transmission, while the goal of UDP is to simplify the protocol as much as possible on the basis of providing message exchange capabilities.
- Reliability difference
The core of TCP is guaranteeing reliability to provide better service. TCP has a handshake process and requires an established connection, ensuring both parties are online at the same time. Moreover, TCP keeps a time window in which it collects out-of-order data until the batch can be properly sorted into a continuous result. UDP has none of these features: it only sends packets, and it does not need ACKs, which means UDP does not care whether a message is delivered successfully.
- Connected vs No Connected
TCP is a connection-oriented protocol (Connection-oriented Protocol): a connection must be established before data can be transmitted. UDP is a connectionless protocol (Connection-less Protocol): data can be sent at any time, and it only provides the ability to send datagrams (Datagram).
- Flow Control Technology (Flow Control)
TCP uses flow control to ensure that the sender does not overwhelm the receiver by sending too many packets at once. TCP stores outgoing data in the send buffer and incoming data in the receive buffer; when the application is ready, it reads from the receive buffer. If the receive buffer is full, the receiver cannot process more data and discards it. UDP provides no such capability.
- Transmission speed
The UDP protocol is simpler: its packets are small, and there is no connection establishment, reliability checking, and so on, so in terms of transmission speed, UDP is faster.
- Scene difference
Each TCP packet needs to be acknowledged, so TCP is naturally less suited to high-rate transmission scenarios such as streaming video or online games. Specifically: in online games, acknowledging every packet can introduce delay; audio and video transmission inherently tolerates a certain packet loss rate; and operations like Ping and DNS lookup are a single simple request/response that needs no connection, so UDP is sufficient.
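To see the "no connection" point concretely, here is a minimal UDP exchange over loopback in Python: no handshake, just a datagram (the port is chosen by the OS and the payload is arbitrary):

```python
import socket

# UDP needs no handshake: bind one socket, send to it from another,
# and the datagram arrives as a single unit.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", addr)      # no connect(), no SYN, just a datagram

data, peer = receiver.recvfrom(1024)
print(data)                       # b'ping'

sender.close()
receiver.close()
```

Contrast this with TCP, where `connect()` and `accept()` (the handshake) must complete before a single byte of application data can flow.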
Summary
That is all for TCP this time.
References
"29 Lectures on Computer Network Clearance"