
1. Producer API

Sample code

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MyPro {
    public static void main(String[] args) {
        // 1. Prepare the configuration; it can be read from a config file or set entry by entry in code
        Properties properties = new Properties();
        // Cluster addresses
        properties.put("bootstrap.servers", "hadoop10:9092,hadoop11:9092,hadoop12:9092");
        // ack mode
        properties.setProperty("acks", "all");
        // The sender only ships data to the broker once batch.size bytes have accumulated
        properties.setProperty("batch.size", "10");
        // How long a message may sit in the buffer; once this value is exceeded it is sent to the server
        properties.put("linger.ms", 10000);
        // Serializer types
        properties.setProperty("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        properties.setProperty("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Number of retries on failure
        properties.put("retries", 0);

        // 2. With the configuration ready, create a Producer instance
        Producer<String, String> pro = new KafkaProducer<>(properties);
        // 3. Send messages in a loop
        for (int i = 0; i < 100; i++) {
            pro.send(new ProducerRecord<String, String>("demo", Integer.toString(i), Integer.toString(i)));
        }
        pro.close();

    }
}

This is a simple implementation without a callback. The Producer uses the default partitioner: since each record here carries a key, the partition is chosen from a hash of that key, so the messages are spread across the partitions (a rough sketch of the idea follows below).
Two threads are involved here, main and sender: the main thread hands messages to a shared variable, the RecordAccumulator, while the Sender thread continuously pulls messages from the RecordAccumulator and sends them to the Kafka brokers.
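
To make the default-partitioner behaviour concrete, here is a rough conceptual sketch. It is an illustration only, not the real DefaultPartitioner code: Kafka actually hashes the serialized key with murmur2, while hashCode() below is just a stand-in, and the 3-partition topic is a hypothetical.

import java.util.Arrays;

public class KeyHashPartitionSketch {
    // Conceptual only: with a non-null key, the partition is derived from a hash of the
    // serialized key, so records with the same key always land in the same partition.
    static int choosePartition(byte[] serializedKey, int numPartitions) {
        return (Arrays.hashCode(serializedKey) & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        // Keys "0".."4" spread over a hypothetical 3-partition topic
        for (int i = 0; i < 5; i++) {
            byte[] key = Integer.toString(i).getBytes();
            System.out.println("key=" + i + " -> partition " + choosePartition(key, 3));
        }
    }
}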

Let's take a look at the KafkaProducer source:

// This is the shared variable that temporarily holds messages
private final RecordAccumulator accumulator;

Let's look at send():

    public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback) {
        ProducerRecord<K, V> interceptedRecord = this.interceptors.onSend(record);
        return this.doSend(interceptedRecord, callback);
    }

It simply returns the result of calling doSend(), so let's look at doSend():

private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback callback) {
        
            ......
            ......
            RecordAppendResult result = this.accumulator.append(tp, timestamp, serializedKey, serializedValue, headers, interceptCallback, remainingWaitMs, true);
            if (result.abortForNewBatch) {
                int prevPartition = partition;
                this.partitioner.onNewBatch(record.topic(), cluster, partition);
                partition = this.partition(record, serializedKey, serializedValue, cluster);
                tp = new TopicPartition(record.topic(), partition);
                if (this.log.isTraceEnabled()) {
                    this.log.trace("Retrying append due to new batch creation for topic {} partition {}. The old partition was {}", new Object[]{record.topic(), partition, prevPartition});
                }

                interceptCallback = new KafkaProducer.InterceptorCallback(callback, this.interceptors, tp);
                result = this.accumulator.append(tp, timestamp, serializedKey, serializedValue, headers, interceptCallback, remainingWaitMs, false);
            }

            if (this.transactionManager != null && this.transactionManager.isTransactional()) {
                this.transactionManager.maybeAddPartitionToTransaction(tp);
            }

            if (result.batchIsFull || result.newBatchCreated) {
                this.log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch", record.topic(), partition);
                this.sender.wakeup();
            }

            return result.future;
            
    }

As described above, the main thread hands the message to the shared variable RecordAccumulator.

Now let's look at the Sender:

    Sender newSender(LogContext logContext, KafkaClient kafkaClient, ProducerMetadata metadata) {
        int maxInflightRequests = configureInflightRequests(this.producerConfig, this.transactionManager != null);
        int requestTimeoutMs = this.producerConfig.getInt("request.timeout.ms");
        ChannelBuilder channelBuilder = ClientUtils.createChannelBuilder(this.producerConfig, this.time);
        ProducerMetrics metricsRegistry = new ProducerMetrics(this.metrics);
        Sensor throttleTimeSensor = Sender.throttleTimeSensor(metricsRegistry.senderMetrics);
        KafkaClient client = kafkaClient != null ? kafkaClient : new NetworkClient(new Selector(this.producerConfig.getLong("connections.max.idle.ms"), this.metrics, this.time, "producer", channelBuilder, logContext), metadata, this.clientId, maxInflightRequests, this.producerConfig.getLong("reconnect.backoff.ms"), this.producerConfig.getLong("reconnect.backoff.max.ms"), this.producerConfig.getInt("send.buffer.bytes"), this.producerConfig.getInt("receive.buffer.bytes"), requestTimeoutMs, ClientDnsLookup.forConfig(this.producerConfig.getString("client.dns.lookup")), this.time, true, this.apiVersions, throttleTimeSensor, logContext);
        int retries = configureRetries(this.producerConfig, this.transactionManager != null, this.log);
        short acks = configureAcks(this.producerConfig, this.transactionManager != null, this.log);
        return new Sender(logContext, (KafkaClient)client, metadata, this.accumulator, maxInflightRequests == 1, this.producerConfig.getInt("max.request.size"), acks, retries, metricsRegistry.senderMetrics, this.time, requestTimeoutMs, this.producerConfig.getLong("retry.backoff.ms"), this.transactionManager, this.apiVersions);
    }

The Sender is constructed with the shared variable accumulator, from which it pulls its data.

With the source code out of the way, back to the API. Run the program, then open a console consumer with Kafka's command-line tools:

[v2admin@hadoop12 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server hadoop10:9092 --topic demo
1
3
0
2
5
6
4
....

The example above uses the API without a callback; the example below uses a callback.
The callback is invoked when the producer receives the ack, and the call is asynchronous.
The callback method takes two parameters, RecordMetadata and Exception. If the Exception is null, the message was sent successfully; if it is not null, the send failed. Failed sends are retried automatically, so we do not need to retry manually inside the callback.

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "hadoop10:9092,hadoop11:9092,hadoop12:9092");
        properties.setProperty("acks", "all");
        properties.setProperty("batch.size", "10");
        properties.put("linger.ms", 10000);
        properties.setProperty("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        properties.setProperty("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("retries", 0);

        KafkaProducer<String, String> pro = new KafkaProducer<>(properties);
  
        // Send with a callback: onCompletion is invoked asynchronously once the ack is received
        for (int i = 0; i < 100; i++) {
            pro.send(new ProducerRecord<String, String>("demo", Integer.toString(i), Integer.toString(i)),
                    new Callback() {
                        @Override
                        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                            if(e == null){
                                System.out.println("success  ==>" + recordMetadata.offset());
                            }else {
                                e.printStackTrace();
                            }
                        }
                    });
        }
        pro.close();

    }

Both examples above are asynchronous calls. Although Kafka sends asynchronously, we can still achieve the effect of a synchronous send.
send() returns a Future object; calling get() on that Future blocks the current thread until the ack is returned. It is straightforward; a sketch is shown below.
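
A minimal sketch of a synchronous send, reusing the producer pro and topic demo set up in the first example (imports of RecordMetadata and java.util.concurrent.ExecutionException are assumed):

        // Synchronous send: Future.get() blocks the calling thread until the broker acknowledges the record
        for (int i = 0; i < 100; i++) {
            try {
                RecordMetadata metadata = pro.send(
                        new ProducerRecord<String, String>("demo", Integer.toString(i), Integer.toString(i))).get();
                System.out.println("acked: partition=" + metadata.partition() + ", offset=" + metadata.offset());
            } catch (InterruptedException | ExecutionException e) {
                // get() surfaces send failures as ExecutionException
                e.printStackTrace();
            }
        }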

2. Consumer API

We saw earlier that data is persisted in Kafka, so we do not need to worry about losing it. However, a Consumer may fail for all sorts of reasons while processing (or rather, consuming) messages, and when it recovers it has to resume from where it left off, which means it must know the last offset it consumed.
So one question a Consumer must deal with when processing data is how the offset is maintained.

So that we can focus on our own business logic, Kafka provides automatic offset committing.
Example

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MyCon {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop10:9092,hadoop11:9092,hadoop12:9092");
        props.put("group.id", "test");
        // Enable automatic offset committing
        props.put("enable.auto.commit", "true");
        // Interval between automatic offset commits
        props.put("auto.commit.interval.ms", "1000");
        // Deserializer types
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Consumer instance
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Subscribe to the topic
        consumer.subscribe(Arrays.asList("demo"));

        // Consume the topic
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        }
    }
}

Automatic committing is convenient, but because it is purely time-based it is hard for the developer to control exactly when the offset gets committed.

For this reason Kafka also provides APIs for committing offsets manually. There are two of them: synchronous commit (commitSync) and asynchronous commit (commitAsync).
What they have in common: both commit the highest offset of the batch returned by the current poll.
How they differ: commitSync blocks the current thread until the commit succeeds and retries automatically on failure (commits can still fail due to factors beyond our control), whereas commitAsync has no retry mechanism, so a commit may fail.

Sample code for synchronous committing

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MyCon1 {
    public static void main(String[] args) {

        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop102:9092"); // Kafka cluster
        props.put("group.id", "test"); // consumer group: consumers with the same group.id belong to the same group
        props.put("enable.auto.commit", "false"); // disable automatic offset committing
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("first")); // subscribe to the topic

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));

            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
            consumer.commitSync(); // synchronous commit: the current thread blocks until the offset commit succeeds
        }
    }
}

Synchronous committing is more reliable, but it blocks the current thread until the commit succeeds, so throughput takes a significant hit.
Because of that, asynchronous offset committing is chosen more often.
An example follows

import java.time.Duration;
import java.util.Arrays;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetCommitCallback;
import org.apache.kafka.common.TopicPartition;

public class MyCon1 {
    public static void main(String[] args) {

        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop10:9092,hadoop11:9092,hadoop12:9092"); // Kafka cluster
        props.put("group.id", "test");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("demo"));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
            // Asynchronous commit: does not block; the callback reports whether the commit succeeded
            consumer.commitAsync(new OffsetCommitCallback() {
                @Override
                public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
                    if (exception != null) {
                        System.err.println("Commit failed for offsets: " + offsets);
                    }
                }
            });
        }
    }
}
