
Before we talk about Kafka's high reliability, let's get an "RNG NB" going in the comment section first!

What is reliability?

We all know that system architecture has "three highs": high performance, high concurrency, and high availability. Their importance is self-evident.

For any system, meeting all three at once is very difficult. Large business systems and traditional middleware build complex architectures to guarantee them.

Beyond these three, there is another metric that matters just as much: high reliability, which you may even confuse with "high availability".

In fact, the two are not the same. High availability leans toward the availability of the overall service, e.g., preventing system downtime. High reliability refers to guarantees about the data itself. You can think of "high reliability" as a more fine-grained concept than the system's three highs.

So what is high data reliability? In short: the system must provide dependable data, and errors such as loss and duplication must not occur.

That's why every open source middleware declares itself super reliable in its release documentation, just like every sweet talker's promises on May 20th (China's unofficial Valentine's Day).

Today's protagonist, Kafka, is one such example.

Some important concepts

Since I haven't written about message queues for a while, let's first review Kafka's basic concepts so the rest of the article is easier to follow (a minimal code sketch follows the list):

  • record: a message, the basic unit of communication in the message queue
  • topic: used to categorize messages; messages of different business types are usually routed to different topics
  • partition: each topic can have multiple partitions, and each partition consists of an ordered, immutable sequence of messages
  • replica: each partition has one or more replicas whose main job is to store the data, materialized as log (Log) objects; replicas are divided into leader replicas and follower replicas
  • offset: each message's position in the log file corresponds to a monotonically increasing offset; you can picture it as array-like storage
  • producer: the party that produces messages
  • consumer: the party that consumes messages; usually a business has one or more consumers forming a consumer group
  • broker: a Kafka cluster consists of one or more Kafka instances, each of which is called a broker
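
To make these concepts concrete, here is a minimal producer sketch in Java. It is only an illustration: the broker address localhost:9092 and the topic demo-topic are placeholders, not anything from this article.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ConceptsDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // a record (message) is sent to a topic; Kafka routes it to one partition
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("demo-topic", "order-1", "hello kafka");
            RecordMetadata meta = producer.send(record).get();
            // the broker reports which partition the record landed in and at what offset
            System.out.printf("partition=%d, offset=%d%n", meta.partition(), meta.offset());
        }
    }
}
```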

For example: a cluster with Topic 1 and Topic 2, where Topic 1 has two partitions and Topic 2 only one, and each partition has one leader replica and two follower replicas, distributed across different broker nodes.

Only the leader replica of a partition interacts with producers and consumers; the follower replicas periodically pull data from the leader to keep the whole cluster's data available.

How to ensure high reliability of data

Kafka stores data through a replica mechanism, so it needs further mechanisms to guarantee that data is reliably synchronized between replicas across the cluster.

1. Replica synchronization sets

Business data is wrapped into messages that flow through the system. Since the components live on different servers, there can be some delay in data synchronization between them. Kafka groups a partition's replicas into different sets according to this synchronization lag:

AR (Assigned Replicas)

Refers to all replicas assigned to a partition, i.e., the leader replica plus the follower replicas.

ISR (In Sync Replicas)

Refers to the set of replicas whose data is in sync with the leader replica. While a follower replica keeps up with the leader's data, it stays in the ISR; the set changes dynamically with the synchronization state.

OSR (Out-of-Sync Replicas)

Once a follower replica's synchronization progress can no longer keep up with the leader, it is moved to a set called the OSR, which holds the partition's out-of-sync replicas. In other words, AR = ISR + OSR.

OK, so what is the standard for judging whether a replica is in sync?

The allowed synchronization lag is set by the parameter replica.lag.time.max.ms, whose default value is 10s.

Once a follower replica's messages lag behind the leader replica's by more than 10s, the follower is considered out of sync and moved to the OSR. Replicas in the OSR are not eligible for election when a new leader is chosen.
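
If you want to see a partition's leader and ISR for yourself, the Java AdminClient can describe a topic. A sketch, with the broker address and topic name again placeholders (replica.lag.time.max.ms itself is a broker-side setting in server.properties):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.TopicDescription;

public class IsrDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singleton("demo-topic"))
                    .all().get().get("demo-topic");
            // for each partition, print the leader replica and the current ISR
            desc.partitions().forEach(p ->
                    System.out.printf("partition=%d leader=%s isr=%s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}
```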

2. ACK response mechanism

Kafka confirms message delivery between producer and broker through ACK (acknowledgment) responses. What levels can this acknowledgment be configured to?

  • ack = 0

The producer sends a message exactly once and never retries, regardless of whether the send succeeds. If the message is lost in transit, or the broker crashes before persisting it to disk, the message is gone.

Its advantage is high performance: there is no waiting for the other side's reply before sending the next batch, so no time is spent waiting, and more data can be processed in the same time frame. The disadvantage is that reliability is genuinely low, and data can genuinely be lost.

  • ack = 1

After the leader receives the message and writes it to its local disk, the message is considered successfully processed and a response is returned to the producer, regardless of whether the followers have synchronized it yet. This is more reliable than the previous level.

However, if the broker hosting the partition leader goes down at that moment, the data can still be lost, which is why follower replica synchronization matters so much.

Kafka adopts this method by default.

  • ack = -1

The producer considers a message successfully pushed only after it receives ACK responses from all of the partition's in-sync replicas.

This method guarantees data reliability well, but performance suffers and throughput drops, so it is generally not adopted.

So is it absolutely reliable? Not quite. It still depends on whether the replicas are in sync. And if the leader replica dies before the producer receives the response, the producer, having received no ACK, will resend the message, which can cause data duplication. How to solve that? Make the processing idempotent.

This acknowledgment level is controlled through the request.required.acks parameter (called acks in the modern producer client).
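
As a sketch of how this looks with the modern Java producer client (the broker address is a placeholder; enabling idempotence is one common way to tame the duplicate problem mentioned above):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class AcksDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        // "0": fire and forget; "1": leader-only ack; "all" (or "-1"): wait for the in-sync replicas
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // producer-side idempotence de-duplicates retried sends
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // ... send records as usual
        }
    }
}
```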


3. Message semantics

The messaging cluster as a whole is a complex system, so message delivery can go wrong for all sorts of reasons along the way. Kafka defines message semantics for these possible scenarios.

at most once

It means a message is consumed at most once, i.e., 0 or 1 times. The flow looks like this:

  • Messages are distributed from the partition to the consumer group
  • The consumer tells the cluster which messages it has received, and the cluster advances the offset
  • The consumer persists the data to storage

You've probably already spotted it: in step 3, if consumer A crashes for any reason while writing the message to the database, then after the group fails over to consumer B, the data has still not been persisted, and the partition knows nothing about that. The result is data loss.
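
A minimal Java consumer sketch of the at-most-once pattern: commit the offset before processing. saveToDatabase is a hypothetical stand-in for your persistence step, and the broker address and names are placeholders.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtMostOnceDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "demo-group");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("demo-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                consumer.commitSync();               // step 2: advance the offset FIRST
                for (ConsumerRecord<String, String> r : records) {
                    saveToDatabase(r);               // step 3: a crash here loses the message
                }
            }
        }
    }

    static void saveToDatabase(ConsumerRecord<String, String> r) { /* hypothetical persistence */ }
}
```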

at least once

It means the messages distributed by the partition are consumed at least once. The communication flow is as follows:

  • Messages are distributed from the partition to the consumer group
  • The consumer persists the data to storage
  • The consumer tells the cluster which messages it has received, and the cluster advances the offset

Suppose that after persisting the data, consumer A crashes while acknowledging back to the partition. Since the partition never receives the ACK, it resends the data, and consumer B may insert the original message into the database again, resulting in data duplication.

Without any idempotent protection, the consequences, such as duplicated transfers or double-credited loyalty points, can be fatal.
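
Swapping the last two steps of the previous sketch turns it into at-least-once: persist first, commit after. This fragment reuses the consumer and the hypothetical saveToDatabase from the at-most-once sketch above.

```java
// at-least-once: process first, commit the offset AFTER
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
    for (ConsumerRecord<String, String> r : records) {
        saveToDatabase(r);       // persist first
    }
    consumer.commitSync();       // a crash before this line means redelivery -> duplicates
}
```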

exactly once

It means the message is consumed exactly once: neither lost nor repeated.

Building on the at-least-once scenario, suppose consumer A crashes while returning the ACK to the partition. Consumer B then does not follow the offset held by the partition; it first looks up in the database the offset of the latest persisted message, then goes back to the Kafka cluster and resumes consumption from that offset. This avoids both message duplication and message loss.
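
A sketch of that pattern, again reusing the consumer from the earlier sketches; loadOffsetFromDatabase and saveDataAndOffsetInOneTransaction are hypothetical helpers standing in for your persistence layer (org.apache.kafka.common.TopicPartition is the extra import needed).

```java
// exactly-once-style: the offset lives in the same database as the data,
// written together in one transaction; on startup, seek back to it
TopicPartition tp = new TopicPartition("demo-topic", 0);
consumer.assign(Collections.singleton(tp));        // manual assignment instead of subscribe()
consumer.seek(tp, loadOffsetFromDatabase(tp));     // hypothetical: offset of the next unprocessed message

while (true) {
    for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
        // hypothetical: one DB transaction stores both the business row and r.offset() + 1
        saveDataAndOffsetInOneTransaction(r);
    }
}
```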


4. Data truncation mechanism

At the beginning we said that the leader replica does the actual data processing, while the follower replicas are only responsible for synchronizing and preserving the data. What if the two end up with inconsistent data because the leader goes down?

Before walking through the consistency guarantee, you need to understand two concepts Kafka uses to describe replica data synchronization:

HW (High Watermark) : reflects the relative position of data synchronization between replicas. Consumers can only consume messages up to the HW, so the HW determines which data is visible to consumers.

LEO (Log End Offset) : the position where the next message will be written.

The leader replica takes messages from the producer, and the follower replicas synchronize from the leader in real time. At this point their data is consistent, synchronized up to position 2, with the next message to be written at offset 4.

Now suppose the leader goes down unexpectedly and a follower is elected as the new leader, which then writes the latest messages from the producer at offsets 4 and 5.

After a while, the original leader is repaired and comes back online, and it finds that its data is inconsistent with the new leader's.

To guarantee consistency, one side has to give in. Since data keeps flowing in, the old leader now ranks below the new leader, so it truncates its own log back to the new leader's HW and LEO positions to make its data identical to the new leader's. This is Kafka's data truncation mechanism.
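
The shape of that logic, as a simplified illustration only (this is not Kafka's actual implementation; Log, truncateTo, and startFetchingFrom are hypothetical):

```java
// simplified illustration of truncation when an old leader rejoins as a follower
void rejoinAsFollower(Log localLog, long newLeaderHw) {
    if (localLog.logEndOffset() > newLeaderHw) {
        // discard the entries the new leader never confirmed
        localLog.truncateTo(newLeaderHw);
    }
    // then catch up by fetching from the new leader starting at that point
    startFetchingFrom(newLeaderHw);
}
```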

5. Data cleaning mechanism

Like other middleware, Kafka's main job is moving messages, but the data it persists still takes up disk space. To save storage, it cleans up expired data through a couple of mechanisms.

log deletion

Log deletion removes entire log segments; Kafka maintains a scheduled task that periodically checks for and deletes "expired" data.

  • Time-based log deletion

Each log segment file maintains a maximum timestamp, updated whenever a new message is written to that segment; this is the field the configured retention time is checked against. Once a segment is full, it accepts no more messages, and a new segment file is created for subsequent writes.

Once a log segment file is full, its maximum timestamp no longer changes, so Kafka can decide whether a segment has expired by comparing the current time against that maximum timestamp.

By default, Kafka sets log.retention.hours = 168, i.e., a log retention period of 7 days.

  • Size-based log deletion

The approach is the same as above, except the criterion switches from time to space.

Kafka computes a total capacity threshold, then the difference between the current actual log size and that threshold. If the difference is larger than the size of a single log segment file, the oldest log segment file is deleted; otherwise nothing is done.

Similarly, this threshold can also be set by the log.retention.bytes parameter.
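
Both knobs also have per-topic counterparts, retention.ms and retention.bytes, which can be set through the AdminClient. A sketch with placeholder names and sizes:

```java
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RetentionDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "demo-topic");
            Map<ConfigResource, Collection<AlterConfigOp>> ops = Map.of(topic, List.of(
                    new AlterConfigOp(new ConfigEntry("retention.ms", "604800000"),     // 7 days
                            AlterConfigOp.OpType.SET),
                    new AlterConfigOp(new ConfigEntry("retention.bytes", "1073741824"), // 1 GiB
                            AlterConfigOp.OpType.SET)));
            admin.incrementalAlterConfigs(ops).all().get();
        }
    }
}
```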

log compaction

Kafka messages are key-value pairs. If the log contains multiple records with the same key but different values, compaction selectively cleans out the old records and keeps only the latest one.

Concretely, compaction maintains a checkpoint file, traverses the log from its start position to the maximum end position, and stores each message's key together with that key's latest offset in a fixed-capacity SkimpyOffsetMap.

This way, later values overwrite earlier ones: if the same key appears multiple times in the log file, only the most recent record is retained.
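
Compaction is enabled per topic through cleanup.policy=compact. A sketch of creating such a topic with the AdminClient (the topic name, partition count, and replication factor are placeholders):

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CompactedTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("user-profile", 3, (short) 3)
                    // keep only the latest value for each key
                    .configs(Map.of("cleanup.policy", "compact"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```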

Summary

Kafka keeps communication between components reliable through its ACK response mechanism, and manages its data through replica synchronization, data truncation, and data cleaning, keeping the whole system running reliably and efficiently.

As a high-performance, high-reliability message middleware, Kafka has plenty of bragging points. If this article helped you, hit the like button in the lower right corner; next time we'll look in detail at how Kafka moves data between replicas.

The more you know, the more you realize you don't know. Your likes and comments mean a lot to me. If this article helped you understand Kafka a little better, feel free to type "become stronger" in the comments.

And may your bugs retreat like a fencer: 🤺 back, 🤺 back, 🤺 back! See you next time.

