This article compares HTTP/1.0 and HTTP/1.1 along the following dimensions:
- Response status codes
- Cache processing
- Connection method
- Host header processing
- Bandwidth optimization
Response status codes
HTTP/1.0 defines only 16 status codes. HTTP/1.1 adds a large number of new ones, including 24 new error status codes. For example, 100 (Continue) lets a client check with the server before sending a large request body, 206 (Partial Content) is returned for range requests, 409 (Conflict) indicates that the request conflicts with the current state of the resource, and 410 (Gone) indicates that the resource has been permanently removed and no forwarding address is known.
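As a small illustration (the host and path here are hypothetical), a request for a permanently removed resource might be answered like this:

```http
GET /old-report.pdf HTTP/1.1
Host: example1.org

HTTP/1.1 410 Gone
Content-Length: 0
```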
Cache processing
Caching saves a great deal of network bandwidth by avoiding repeated round trips between clients and origin servers, and it reduces the delay before users receive the content they asked for.
HTTP/1.0
The caching mechanism provided by HTTP/1.0 is very simple. The server marks a response with an Expires header carrying a time; requests made before that time can be satisfied from the cached response. The first response the server returns to the client also carries a Last-Modified header, which records when the requested resource was last modified on the server. On later requests the client adds an If-Modified-Since header carrying a time, which in effect asks the server: "Has the resource I requested been modified after this time?" Normally, the value of If-Modified-Since is the Last-Modified value from the response received when the resource was last fetched.
If the server receives such a request and determines that the resource has not been modified after the If-Modified-Since time, it returns a 304 Not Modified response with no body, meaning "your cached copy is still good, use it."
If the server determines that the resource has been modified after the If-Modified-Since time, it returns a 200 OK response with the new resource content, meaning "what you asked for has changed, here is the new version."
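The path and timestamps below are made up, but the exchange sketches a typical HTTP/1.0 conditional request: the resource is fetched once and then revalidated:

```http
GET /logo.png HTTP/1.0

HTTP/1.0 200 OK
Expires: Sun, 30 Oct 1994 19:43:31 GMT
Last-Modified: Sat, 29 Oct 1994 19:43:31 GMT
Content-Type: image/png

(...image bytes...)

GET /logo.png HTTP/1.0
If-Modified-Since: Sat, 29 Oct 1994 19:43:31 GMT

HTTP/1.0 304 Not Modified
```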
HTTP/1.1
Building on HTTP/1.0, the caching mechanism in HTTP/1.1 is far more flexible and extensible. The basic working principle is the same as in HTTP/1.0, but more fine-grained controls were added. The most commonly used of these is the Cache-Control header; see the MDN Web Docs entry on Cache-Control for details.
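As a rough sketch of this finer-grained control (the directive values and entity tag here are arbitrary), an HTTP/1.1 response can state exactly how long and by whom it may be cached:

```http
HTTP/1.1 200 OK
Cache-Control: public, max-age=600, must-revalidate
ETag: "33a64df5"
```

A client can later revalidate with If-None-Match: "33a64df5" and receive 304 Not Modified if the entity tag still matches.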
Connection method
HTTP/1.0 uses short-lived connections by default: every time the client and server perform an HTTP operation, a connection is established and then torn down when the exchange finishes. When an HTML page (or other document) fetched by the browser references additional web resources (JavaScript files, images, CSS files, and so on), the browser creates a new TCP connection for each of them, so a large share of the bandwidth is consumed by handshake and teardown packets.
To eliminate this waste, HTTP/1.1 switched to persistent (long-lived) connections by default. A request in persistent-connection mode tells the server, in effect: "I am opening a connection to you; please do not close it once it is established." The TCP connection therefore stays open to serve subsequent client-server exchanges. In other words, with persistent connections, the TCP connection carrying HTTP data is not closed once a page has been loaded; when the client talks to the same server again, it reuses the connection that is already established.
Keeping a TCP connection open forever also wastes resources, so server software such as Apache supports a timeout: the TCP connection is closed if no new request arrives within the timeout period.
Note that HTTP/1.0 also offers a persistent-connection option: adding Connection: Keep-Alive to the request header. Conversely, in HTTP/1.1 a client that does not want a persistent connection can add Connection: close to the request header, which tells the server: "I do not need a long-lived connection; close it as soon as this exchange is done."
HTTP "long" and "short" connections are, in essence, long-lived and short-lived TCP connections.
Persistent connections work only if both the client and the server support them.
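To make the header usage concrete (the host and path are hypothetical): an HTTP/1.0 client opting into keep-alive, followed by an HTTP/1.1 client opting out of it:

```http
GET /home.html HTTP/1.0
Connection: Keep-Alive

GET /home.html HTTP/1.1
Host: example1.org
Connection: close
```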
Host header processing
The Domain Name System (DNS) allows multiple host names to be bound to the same IP address, but HTTP/1.0 does not take this into account. Suppose we request the resource http://example1.org/home.html ; an HTTP/1.0 request message contains only GET /home.html HTTP/1.0, that is, the host name is not included. When such a message reaches a server hosting several sites, the server cannot work out which URL the client actually wants.
Therefore, HTTP/1.1 added the Host field to the request header. With the Host field, the request message becomes:
GET /home.html HTTP/1.1
Host: example1.org
In this way, the server can determine exactly which URL the client wants to request.
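To show why this matters (example2.org is a hypothetical second site on the same IP address), two requests that differ only in the Host header can be routed to different sites:

```http
GET /home.html HTTP/1.1
Host: example1.org

GET /home.html HTTP/1.1
Host: example2.org
```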
Bandwidth optimization
Range requests
HTTP/1.1 introduced a range-request mechanism to avoid wasting bandwidth. When a client wants only part of a file, or needs to resume a download that was interrupted, it can add a Range header to the request and ask for just that byte range. The server may ignore the Range header, or it may return one or more ranges in its response.
If a response contains partial data, it carries the 206 (Partial Content) status code. The purpose of this status code is to prevent HTTP/1.0 proxy caches from mistaking the partial response for a complete one and caching it as such.
In a range response, the Content-Range header indicates the offset and length of the returned data block.
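A sketch of a range exchange (the file name and sizes are made up):

```http
GET /video.mp4 HTTP/1.1
Host: example1.org
Range: bytes=0-1023

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-1023/146515
Content-Length: 1024
```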
Status code 100
HTTP/1.1 added the new status code 100 (Continue). It is useful when a client is about to send a request with a large body that the server may be unwilling to accept: the client first asks whether the request will be handled normally, and sends the body only after the server answers 100 (Continue). The flow is sketched in the example below.
HTTP/1.0 has no 100 (Continue) status code. In HTTP/1.1, the mechanism is triggered by sending an Expect header whose value is 100-continue.
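A sketch of the expect/continue handshake (the path and body size are hypothetical):

```http
POST /upload HTTP/1.1
Host: example1.org
Content-Length: 104857600
Expect: 100-continue

HTTP/1.1 100 Continue

(...the client now sends the large request body...)

HTTP/1.1 200 OK
```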
Compression
Data in many formats is compressed before transmission, and compression can greatly improve bandwidth utilization. However, HTTP/1.0 offers few options here: it does not let the parties negotiate the details of compression, and it cannot distinguish end-to-end compression from hop-by-hop compression.
HTTP/1.1 distinguishes between content-codings and transfer-codings: content encoding is always end-to-end, while transfer encoding is always hop-by-hop.
HTTP/1.0 already has the Content-Encoding header, which describes end-to-end encoding of the message. HTTP/1.1 added the Transfer-Encoding header, which enables hop-by-hop transfer encoding of messages, and the Accept-Encoding header, with which the client states which content encodings it can handle.
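A sketch of content-encoding negotiation (the resource and the server's choice of gzip are assumptions):

```http
GET /styles.css HTTP/1.1
Host: example1.org
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Type: text/css

(...gzip-compressed body...)
```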
Summary
- Connection method: HTTP/1.0 uses short-lived connections; HTTP/1.1 supports persistent connections.
- Response status codes: HTTP/1.1 adds a large number of status codes, including 24 new error status codes. For example, 100 (Continue) lets a client check with the server before sending a large request body, 206 (Partial Content) is returned for range requests, 409 (Conflict) indicates a conflict with the current state of the resource, and 410 (Gone) indicates the resource has been permanently removed with no known forwarding address.
- Cache processing: HTTP/1.0 relies mainly on the If-Modified-Since and Expires headers as caching criteria; HTTP/1.1 introduces more cache-control strategies, such as entity tags, If-Unmodified-Since, If-Match, If-None-Match, and other optional cache headers.
- Bandwidth optimization and use of network connections: HTTP/1.0 wastes bandwidth in some situations; for example, when the client needs only part of an object, the server still sends the whole object, and resumable downloads are not supported. HTTP/1.1 introduces the Range request header, which allows just part of a resource to be requested (returned with status code 206 Partial Content), making it easier for developers to make full use of bandwidth and connections.
- Host header processing: HTTP/1.1 added the Host field to the request header.